
Facebook’s Content Moderation Strategy is ‘Grossly Inadequate’, Finds Study

A new report looks at how social media majors like Facebook, Twitter and YouTube moderate their content and the ways the process can be improved.

The spread of misinformation, hate speech and extremist content on social media platforms has prompted heated debate over how social media companies should approach content moderation. A New York University (NYU) report highlights how the content moderation strategy of social media giants such as Facebook, Twitter, and YouTube, among others, is ‘grossly inadequate’ and looks at the ways the process can be improved. It also calls for social media companies to stop outsourcing content moderation. (Read the full report: ‘Who Moderates the Social Media Giants? A Call to End Outsourcing’)

As a case in point, the study cites Facebook’s failure to moderate President Donald Trump’s incendiary remarks about the ongoing protests against police brutality. The report also cites examples of problematic content shared on Facebook in India, especially content targeting Muslims, and points out that the social media giant failed to take it down.

Paul M. Barrett, deputy director of the NYU Stern Center for Business and Human Rights and author of the report, told CXOToday, “Misinformation is becoming an increasingly big problem on tech platforms during the protests against racial injustice and the novel coronavirus pandemic. Content moderation would improve if it was taken in-house at the social media platforms and many more moderators were hired, instead of the outside contractors on which they currently largely depend – to decide on what posts and photos should be removed.

“As we outline in our report, it would also help if the companies hired content ‘czars’ who oversaw all aspects of moderation and fact-checking.”


Focusing primarily on Facebook, Barrett said that every day, some three million Facebook posts are flagged for review and handled by about 15,000 moderators. “While they have improved AI-driven automated systems and hired more content moderators, there’s still a long way to go in the enforcement of their community standards.”

The report estimates that users and the company’s artificial intelligence systems flag more than three million items daily. With Facebook reporting a 10% error rate among moderators spread across some 20 sites, that would mean roughly 300,000 content moderation mistakes every day. As Barrett put it, “When you’re operating at the scale of a Facebook, that’s a lot of blunders!”
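That figure is a simple back-of-the-envelope calculation from the two numbers cited above; the short Python sketch below (variable names are illustrative, not from the report) just multiplies them.

    # Back-of-the-envelope estimate based on the figures cited in the report
    flagged_items_per_day = 3_000_000   # items flagged daily by users and AI
    moderator_error_rate = 0.10         # 10% error rate reported by Facebook
    daily_mistakes = flagged_items_per_day * moderator_error_rate
    print(f"Estimated moderation mistakes per day: {daily_mistakes:,.0f}")  # ~300,000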

While Barrett highlighted Facebook primarily as a case study in content moderation gone wrong, partly blaming the company’s ‘relentless focus on growth’ for its ‘inability to keep up with dangerous and misinformed content’, the NYU study acknowledges that all the major social media platforms suffer from the same content moderation problem.

While Facebook has about 15,000 content moderators, most of them work for third-party vendors. That compares with about 10,000 moderators for YouTube and Google and 1,500 for Twitter, according to the study. And while Facebook has also partnered with 60 journalism organizations to implement fact-checking, the number of items sent to these groups far exceeds their capacity, so most claims go unverified.

“These numbers may sound substantial, but given the daily volume of what is disseminated on these sites, they’re grossly inadequate,” he said.


Barrett further explained that though outsourcing saves the industry money, there’s a psychological factor at play. Because Facebook, YouTube and Twitter outsource most of their human content moderation, the job of deciding what stays online and what gets taken down, to third-party contractors, they have created a marginalized class of reviewers who are seen as “second-class citizens.”

The NYU report finds that the peripheral status of moderators has contributed to inadequate attention being paid to incendiary content spread in developing countries, sometimes leading to violence offline. Outsourcing has also contributed to the provision of subpar mental health support for content moderators who spend their days staring at disturbing material.

Currently, many of those charged with sifting through the reams of content posted to social media platforms are contractors, without the same salaries, health benefits and other perks as full-time employees at Silicon Valley companies.

“According to former moderators I interviewed, some of the larger outsourcing firms that do work for Facebook have not provided adequate mental health care for employees who are exposed to high volumes of toxic content. These former moderators also complain that the work environments are chaotic and don’t lend themselves to careful analysis of tough content calls,” said Barrett.

Barrett continued that the coronavirus pandemic has exacerbated problems with content moderation in recent months. With moderators sheltering in place at home, without access to secure computers, the social media companies announced that they would rely more heavily on AI to remove harmful content. As a result, the platforms have mistakenly taken down legitimate content from healthcare professionals and others whose posts were wrongly flagged for removal.

He also observed that for the foreseeable future, human moderation will remain a big part of the operation, since machines cannot yet evaluate content for nuance and context in the way that people can.

“The widespread practice of relying on third-party vendors for content review amounts to an outsourcing of responsibility for the safety of major social media platforms and their billions of users,” said Barrett, adding that this calls for better pay and working conditions for content moderators, who are crucial in keeping the internet a safe space.


Sohini Bagchi
Sohini Bagchi is Editor at CXOToday, a published author and a storyteller. She can be reached at [email protected]