How Much Sex, Violence, and Hate Speech Is on Facebook?

Facebook issued its first report on objectionable content, showing the prevalence of graphic violence is rising, as is Facebook’s ability to detect it.

The incidence of objectionable content on Facebook is rising, as is Facebook’s ability to identify it before users report it, according to the company’s first report on enforcing its community standards.

The report, covering October 2017 to March 2018, found that the prevalence of graphic violence and of adult nudity and sexual activity rose in the first three months of this year compared with the prior three months. Facebook said it could not estimate the prevalence of posts promoting terrorism or hate speech or containing spam. But it said it took action on more content for those reasons in the first quarter than in the fourth quarter of 2017; such action can include removing a post, slapping a warning label on it, or hiding the content from underage users.

The period covered by the report coincides with Facebook CEO Mark Zuckerberg’s commitment to devote more resources to safety and security, an area where Facebook says it has historically “underinvested.” Facebook says it will update the numbers every six months, allowing outsiders to measure its progress.

In some cases, like promoting terrorism, Facebook says it identified more objectionable content because its automated systems for detecting such posts improved. Alex Schultz, vice president of data analytics, said Facebook applied the tools to content that had “been around a long time.” But Schultz said the increased incidence of posts with graphic violence reflected changes in the world. “A really good hypothesis we have, although this is not certain, is what’s going on in Syria. When there is a war, more violent content is uploaded to Facebook,” he said.

Facebook says it’s now able to flag more than 95 percent of posts promoting terrorism, containing adult nudity and sexual activity, spam, or fake accounts before users report such posts; for graphic violence, that figure is more than 85 percent.

But Facebook says it has a longer way to go when it comes to flagging hate speech, where its systems identified only 38 percent of objectionable posts before users did. To illustrate the point, Schultz used himself as an example. Schultz says his Facebook profile makes it obvious that he is gay. “If I use the f-word, that’s possibly OK, but if someone else uses it towards me, it’s not OK,” Schultz said. “But for everyone, you can’t necessarily tell [sexual orientation], and so it becomes quite hard.”

In all, Facebook says it took action on more than 860 million pieces of content in the first quarter. The vast majority of those, 837 million, involved spam. Facebook says it also disabled 583 million fake accounts in the first quarter, down 16 percent from the previous quarter. Facebook estimates that 3 to 4 percent of its 2.2 billion monthly active users, roughly 66 million to 88 million accounts, are fake.

Tuesday’s enforcement report follows a related first for Facebook. In April, the company disclosed for the first time its internal guidelines for what users can’t post. In both cases, Facebook’s disclosures were voluntary and represent a big step away from the secrecy and confusion around moderation that has plagued Facebook users for years.

The enforcement numbers underscore both the unprecedented scale of Facebook’s moderation challenges and the limits on the level of transparency consumers can expect from Facebook.

For instance, a big chunk of the 86-page report dissects Facebook’s methodology for measuring prevalence: how often objectionable content is viewed relative to total views of all content. To determine prevalence, Schultz said, “We try to take a sample of everything that is seen on Facebook and look at it with human reviewers, and they tag: [Is this] violating or is this not?” Executives said the methodology could change in future reports.
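The report doesn’t include the underlying math, but the description above amounts to a sampled-views estimate: label a random sample of content views, then treat the share that reviewers tag as violating as the prevalence. The Python sketch below is a minimal illustration of that idea; the function name, the sample, and the confidence-interval choice are assumptions for illustration, not Facebook’s actual methodology.

```python
import math

def estimate_prevalence(sample_labels, z=1.96):
    """Estimate prevalence from a random sample of content views.

    sample_labels: one boolean per sampled view, True if human reviewers
    tagged that view as violating. Returns the point estimate and a
    normal-approximation confidence interval. Illustrative sketch only.
    """
    n = len(sample_labels)
    violating = sum(sample_labels)
    p = violating / n                      # share of sampled views that violate
    se = math.sqrt(p * (1 - p) / n)        # standard error of the proportion
    return p, (max(0.0, p - z * se), min(1.0, p + z * se))

# Hypothetical example: 27 violating views in a sample of 100,000 views
# yields an estimated prevalence of roughly 0.027 percent of all views.
labels = [True] * 27 + [False] * 99_973
point, (lo, hi) = estimate_prevalence(labels)
print(f"estimated prevalence: {point:.5%} (95% CI {lo:.5%} to {hi:.5%})")
```

Measuring by views rather than by post count is a deliberate choice: a single violating post seen millions of times matters more, on this metric, than thousands of posts almost no one sees.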

However, Facebook declined to discuss where these reviewers are based or how they are trained. The company says it has 10,000 people working on these issues and plans to get to 20,000 by the end of 2018; some of those will be contractors, the company says.

The report comes as Facebook faces increasing pressure to explain the prevalence of troubling content. In the past few weeks alone, news reports have raised issues around Facebook’s failure to police posts inciting violence against Rohingya Muslims in Myanmar, easily searchable recruitment videos for terrorist groups, Mark Zuckerberg impersonators trying to swindle users out of their money, and posts where stolen identities and Social Security numbers were exposed for years.

Guy Rosen, vice president of product management, said Facebook is trying to improve its responsiveness in Myanmar, a subject that members of Congress raised in recent hearings with Zuckerberg. “It’s an incredibly difficult situation there for the people, and we don’t want Facebook to be used as a place that may trigger violence,” Rosen said. He says Facebook is increasing its number of Burmese-speaking reviewers, responding to complaints by civil society groups who said their pleas to Facebook went ignored. Facebook is also working on improvements to reporting objectionable content in Facebook Messenger, “which was a vehicle for some of the stuff in Myanmar,” Rosen said.

Facebook is strengthening ties with civil society groups in Myanmar, to better understand the context for the conflict, Rosen said. “If something is actually a dog whistle that is actually inspiring violence, and it’s not quite clear how to understand that, it’s really helpful to be able to work with these partners who can help find things for us,” he said. “We’re improving how we work with them so that they can get stuff to us in a more reliable way.”

Blake Reid, an associate clinical professor at Colorado Law and director of the school’s technology and policy clinic, says Facebook’s scale has distorted the idea of community. “Facebook has grown to a size and scale that significant harms are in the offing to some proportion of its users no matter what approach it takes to moderating content. Every tweak it makes has the ability to influence elections, spread propaganda, effectively suppress expression, or cause other effects of similar magnitude,” Reid says. “So the specifics of how Facebook moderates content are less important from my view than how governments across the world choose to check Facebook’s size and power.”

Absent government action, Reid says, Facebook will set the rules for a significant portion of the world’s population. “This is ordinarily the province of nation states and their associated democratic institutions,” he says. “The notion that we would hand the power to do that over to a company that doesn’t even face significant competitive constraints, much less democratic accountability to its users, is super troubling.”

Transparency reports help users understand a company’s internal process but don’t necessarily push the company to operate differently. “In some ways, transparency reports are a safety valve that ease off pressure for more substantive change,” says Tarleton Gillespie, a principal researcher at Microsoft Research New England, adjunct associate professor at Cornell University, and author of the upcoming book Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media.

Gillespie says pressure from activists, researchers, and lawmakers has prompted tech companies to expand disclosure beyond reports about government requests to remove content. “But a regulatory obligation on what to report, imposed across the industry, would be much stronger,” Gillespie says.
