The People Behind Your Safe Feed

After playing the Amazon race, which highlighted Amazon’s poor treatment of its employees and its questionable code of conduct and ethics, I was inspired to write this blog about a similar case at Facebook. All this talk about algorithms often makes you assume that there is code and machinery behind every feed on social media. However, a VICE video I stumbled upon this summer argues otherwise. Before proceeding further, I would like to give you a trigger warning, as this blog touches upon death, violence, assault, and mental health. If you feel uncomfortable reading on, feel free to skip this one.

Screenshot from “The Horrors of Being a Facebook Moderator,” uploaded by VICE to YouTube on July 21, 2021, https://www.youtube.com/watch?v=cHGbWn6iwHw.

In the video, VICE interviews a former content moderator who worked for Facebook and who reveals the truth behind people’s Facebook feeds. This anonymous person claims to have seen “dead bodies and murders” and “dogs being barbecued alive” while sifting through a myriad of reported content. They further confessed that working as a Facebook content moderator had affected their personal life: the content “personally touched” them and their co-workers to the extent that they had to go through therapy at their own cost, as Facebook provided little to no professional help to cope with the job’s consequences.

According to the interview, thousands of moderators worked eight-hour shifts reviewing disturbing content and often felt guilt and social pressure to act on some of it. For instance, the interviewee recalls an incident in which a co-worker came across a life-threatening situation in the reported content and felt personally responsible for saving that person’s life. Furthermore, the interviewee claims that Facebook has “denied that anybody could ever get PTSD” from the work, and that Mark Zuckerberg was not aware of the extent of harm the job does to its workers.

Upon further research, it became apparent that this conduct has gone on for years. According to an article from Business Insider, Facebook content moderation has been reported to cause “depression, anxiety and other negative mental health effects” since 2017, and perhaps earlier. One possible reason for the minimal acknowledgement of, and action on, this destructive job is that the firm “charged with reviewing toxic material on Facebook” depends heavily on its 500-million-dollar contract with its “diamond client”, Facebook.

Do you mean to say Facebook did nothing to help?

Actually, it appears that Facebook did in fact provide “monetary compensation to [a] content moderator who filed a lawsuit against” them, and did signal an intention to use more “technology to limit worker’s exposure to graphic content”. Moreover, following its rebrand as Meta, Facebook has highlighted its intention to make its platforms safer, stating that it has “40,000 people working on safety and security” and is “detecting the majority of the content…before anyone reports it”. But, as the VICE interview makes clear, this does not excuse the company’s secrecy about the harmful effects of content moderation.
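To make that claim about proactive detection a little more concrete: conceptually, automated moderation boils down to scoring each post before anyone reports it and escalating anything the system cannot confidently handle to a human review queue. The sketch below is my own toy illustration of that idea, not Facebook’s or Meta’s actual system; the keyword list, thresholds, and `triage` function are all hypothetical.

```python
# Toy illustration only: a naive pre-screening step that scores posts
# before they are shown, escalating ambiguous ones to human moderators.
# The keyword weights, thresholds, and labels are hypothetical, not Meta's.

FLAG_TERMS = {"violence": 0.9, "assault": 0.8, "gore": 0.9, "abuse": 0.7}

def score_post(text: str) -> float:
    """Return a crude 'harm score' between 0 and 1 based on keyword hits."""
    words = text.lower().split()
    hits = [FLAG_TERMS[w] for w in words if w in FLAG_TERMS]
    return max(hits, default=0.0)

def triage(text: str, block_at: float = 0.85, review_at: float = 0.5) -> str:
    """Act automatically when confident, otherwise hand the post to humans."""
    score = score_post(text)
    if score >= block_at:
        return "removed automatically"
    if score >= review_at:
        return "sent to human review queue"  # the part users rarely see
    return "published"

if __name__ == "__main__":
    for post in ["lovely sunset today", "graphic abuse footage", "extreme gore clip"]:
        print(f"{post!r} -> {triage(post)}")
```

Even in this toy version, the middle band of content that is too ambiguous for the filter is exactly what ends up in front of human moderators, which is the part of the pipeline the interviewee is describing.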

“You’re desensitized from what you’re seeing”

Interviewee, VICE.

I think this notion of online desensitisation is quite widespread today, and it applies especially to young people on social media. As mentioned in one of our previous workgroups, when you create a social media account nowadays, you inherently ‘subscribe’ to being exposed to potentially harmful content, whether personal or second-hand. It is essentially unavoidable, and it is arguably why many more young people are acting on current social issues: because of their increased exposure to sensitive online content, such as violence and injustice, at a younger age, young people feel greater pressure to become involved in change.

Perhaps one takeaway from this blog is that social media is not entirely run by algorithms or code, and that platforms like Facebook continue to rely on people to filter disturbing content in order to keep users engaged. Perhaps more troubling still, the people who curate online content to make it a safe space for millions of users do not receive enough recognition or compensation for the damage their work causes them. Lastly, and much like at Amazon, it is a job that relies on the number of workers rather than their skills per se, meaning people who are morally and physically unprepared for it are also hired. This makes moderation even more problematic, as it increases the potential for human error in the process, not to mention further health damage to the employees.

How do you think Facebook should handle content moderation? Should AI be assigned such jobs? Could AI learn to recognise what harm or violence looks like, or what ‘morality’ means? Let me know in the comments!

The following are the articles I have cited in this blog, alongside the video that inspired this post:

Canales, Katie. “Facebook’s largest content moderator has reportedly struggled with the ethics of its work for the company, which requires contractors to sift through violent, graphic content.” Business Insider, August 31, 2021. https://www.businessinsider.com/facebook-content-moderation-accenture-questioned-ethics-2021-8?international=true&r=US&IR=T.

Steinhorst, Curt. “An Ethics Perspective on Facebook.” Forbes, October 22, 2021. https://www.forbes.com/sites/curtsteinhorst/2021/10/22/an-ethics-perspective-on-facebook/.

Meta. “Promoting Safety and Expression.” Accessed November 21, 2021. https://about.facebook.com/actions/promoting-safety-and-expression/.

VICE. “The Horrors of Being a Facebook Moderator.” YouTube video, July 21, 2021. https://www.youtube.com/watch?v=cHGbWn6iwHw.