Outsourcing Trauma to Nairobi, Kenya

Watching a man being murdered and children being sexually abused is an everyday occurrence for Idris (anonymized for reasons of privacy and security). For nine hours each day, Idris tries to watch, unblinking, the real-life horror movie playing out on the screen before his eyes. As a Facebook content moderator, Idris must rule on reported pieces of content, taking them down or leaving them up. Those nine hours are deeply traumatic for Idris, who is denied wellness breaks and forced to reach a decision within a mere 50 seconds. With the rise of hate speech and misinformation, critics have lambasted Facebook’s failure to address heinous content, putting the social media corporation on the hot seat. Few questions, however, have been raised about what happens behind the scenes of content moderation, or why the biggest social media site has been so slow and inefficient at it.

Idris is just one of the numerous workers in Nairobi, Kenya who work for an “ethical artificial intelligence (AI)” company called Sama. Headquartered in California, the outsourcing company has attracted a formidable client list with its noble mission of providing individuals in developing nations, including Kenya, with “dignified digital work”: Google, Microsoft, Walmart, and Facebook, the last of which does not appear on Sama’s public client list. Unbeknownst to the public until recently, Facebook outsourced its moderation of traumatic imagery to workers in developing nations through Sama.

In Nairobi

In Sama’s Nairobi office, workers are required to sit at their desks for hours on end and rule on content rapidly. Repeatedly failing to decide on a piece of content within 50 seconds alerts their superiors. With a rigid focus on efficiency, supervisors loom over workers’ computers, creating a tense and intimidating work environment, according to The Times. Furthermore, to boost productivity, workers’ breaks are cut short, if not denied entirely, and workers are unable to unionize.

Since 2019, the Nairobi office block has handled the bulk of reported content for the Sub-Saharan African region. Despite the office’s significance to Facebook, however, workers earn a mere 1.50 United States dollars (USD) per hour, in contrast to the 16.50 to 18.00 USD per hour that content moderators make in the U.S. The disparity becomes all the more astonishing when comparing the average pay of a content moderator in Africa to that of a worker at Facebook’s headquarters: 5,000 USD versus 240,000 USD. There have been numerous strikes in protest of these conditions. According to The Times, content moderators demanded a seat at the table in 2019 to negotiate better pay and working conditions. The company responded by firing the leader of the strike along with the other participants, effectively quelling the uprising.

Facebook Content Moderation

Why Outsource Content Moderation?

Hiring content moderators in the U.S. is expensive compared to the cheap labor available on the African continent. In this way, the social media giant has outsourced a huge volume of traumatic material to the developing world for the sake of its bottom line. However, the cost of doing so may be even steeper than Facebook anticipated. Facebook’s rigorous focus on sifting through reported content as rapidly as possible is understandable, given the huge online backlash the corporation faces when it fails to take down incendiary material. Yet when content moderators are required to make a swift decision on reported content that may itself be hours long, mistakes are bound to happen.

Indeed, moderators often mistakenly remove content that educates against hatred rather than promoting it. YouTube, for example, took down a montage of the U.S. Army destroying Nazi emblems under its hate speech policy and temporarily deplatformed Scott Allsop, a history teacher who uploaded Nazi archival footage for his students, according to Susan Benesch, free speech advocate and director of the Dangerous Speech Project. Moreover, with the shortage of content moderators and the huge volume of reported content, takedowns are slow. According to The Times, the Network Against Hate Speech reported more than 70 pieces of grotesque, hateful content within the past year alone, yet Facebook took at least three months to respond.

The Future of Content Moderation

The issues with Facebook and other social media corporations are numerous: the precise standards for content moderation are vague and opaque, and moderation itself is slow and ineffective, mistakenly targeting permissible content. Given these issues, corporations like Facebook could consider voluntarily binding themselves to agreements like the International Covenant on Civil and Political Rights (ICCPR).

Though the ICCPR was created for nations, it can be interpreted broadly to include transnational businesses. The ICCPR is also flexible: instead of detailing the types of content to be restricted, it lays out the conditions under which restrictions may be imposed and requires a transparent process of moderation, namely that restrictions be provided by law, necessary, and legitimate. One way to interpret Article 19’s requirement that restrictions be “provided by law,” for example, is that a norm such as a Community Guideline must be as precise and as publicly promulgated as a law. This would allow individuals to challenge wrongful takedowns and demand greater accountability for how reported content is handled.

Certainly, the ICCPR alone will not be enough to handle all content moderation: human moderators remain necessary in the first place, with everyday Facebook users acting as content checkers by reporting what they encounter through daily use of the platform. Moreover, AI is not yet sophisticated enough to sift through reported content on its own. Though the nature of the work is inevitably grotesque and traumatizing, social media corporations can offer post-traumatic stress disorder (PTSD) therapy, better pay, and better working conditions. Especially in developing nations, Facebook should ensure fair pay, offering a rate comparable to that of content moderators in the U.S., given the traumatic nature of the job.

In the future, social media corporations will have to devise new ways to moderate online content without human moderators. Until then, companies must give their moderators fair compensation and suitable working conditions, with safeguards for when they become traumatized by the images they see. It is only just to do so, considering that someone must watch grotesque reported content before the everyday user ever has to.
