It has been a tough start to the month of Pride for YouTube. Carlos Maza, a Vox reporter, has gone public about being harassed by comedian Steven Crowder because of his sexual orientation.
YouTube has dithered about what action to take. First it said that Crowder’s comments did not fall foul of its community guidelines, despite others pointing out that – by the letter of the law – they did. One of YouTube’s biggest queer creators, Tyler Oakley, lambasted the platform for its inaction.
Then, with public criticism mounting, the Google-owned company decided it should do something. In a u-turn, it demonetised Crowder’s channel.
But before it did so, the video website laid out its plan to tackle its problems with hate speech and conspiracy theories.
On June 5, YouTube published a blog post announcing that it was extending its hate speech policy “specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.”
At the same time, it said it would remove content denying that violent events like the Holocaust or the Sandy Hook shooting happened – key conspiracy theories that have enjoyed wide currency on the platform. Thousands of accounts would be removed, YouTube said.
The decision is an extension of action taken in 2017 to limit the amount of time supremacist content was surfaced through the site’s recommendation algorithms, and a similar move, in January 2019, to throttle the number of conspiracy theory videos the algorithm recommended. Both actions reduced the number of views of such videos by an average of 80 per cent, the site says.
Chasing down and removing conspiracy theory videos might work – but YouTube’s new hate speech policy seems unlikely to change much. The messy way it handled Crowder shows it doesn't have an easy fix.
First: YouTube said that Crowder’s videos about Maza did not fall under the revised policies. It claimed that the main point of these videos was not to harass, threaten, or incite hate, but rather to respond to opinions Maza, a video journalist, aired in his own videos.
It came to its initial decision partly because, in no video, does Crowder directly tell his viewers to harass Maza. “YouTube's policies, as they are currently written, are strategically vague in a way that gives them a lot of leeway in situations like that,” explains Becca Lewis of Stanford University, who has researched the alt-right’s use of YouTube. “Those vagaries, in turn, get strategically exploited by those spreading harassment and hate speech.”
However, countless users pointed out that YouTube’s own defence of its decision not to ban Crowder (claiming it “found language that was clearly hurtful”) actually confirmed that he should fall foul of the old guidelines (which prohibit “content that makes hurtful and negative personal comments”). “YouTube seems hesitant to take action against conservative creators, for fear of seeming biased,” Lewis said before the change in YouTube’s position. “This is the case even when the conservative creators have a history of violating their terms of service.”
In an updated statement, YouTube tweeted that it had decided to demonetise Crowder because of the wider impact his actions were having. It said: "We came to this decision because a pattern of egregious actions has harmed the broader community". It then clarified its position by saying monetisation would be turned on again if Crowder removed a link to t-shirts he was selling on an external website.
Money is another obvious factor in why the decision may not have been taken quickly: “YouTube's priorities are ultimately driven by profit,” Lewis says. “So even though they strategically champion themselves as promoters of equality and justice, those values are ultimately secondary to them.”
But it also comes down to an inherent dislike of taking a stance. “They just don’t want to be in the business of making the call of what is hate speech and what is not; what is satire and what crosses the line into racism,” says Ashkan Karbasfrooshan, CEO of WatchMojo, a large YouTube channel, who has become an increasingly outspoken critic of the platform. “Then it’s a slippery slope. They become, in the US, the FCC [the Federal Communications Commission, which oversees TV and radio broadcasts] – [deciding whether] it is appropriate for something to air or not.”
When YouTube hired its first employee dedicated exclusively to reviewing videos flagged for investigation, the small company had to draw up a list of community guidelines setting out what would and would not be permissible. Those guidelines evolved over time. The initial guidelines allowed users to express almost anything, so long as they didn’t rely on slurs or stereotypes.
The thinking was that people with those views existed in society, and that there was no point in pretending they didn’t exist. Sunlight was the best disinfectant.
That policy lasted around six months before being replaced with a catch-all clause preventing the promotion or incitement of hate. The rationale was that, to maximise the diversity of speech on YouTube, the platform needed to remove or quarantine content that would put right-thinking people off – regardless of how it was worded.
Since then the policy has evolved, though not aggressively enough to prevent people with extreme views from taking advantage of it, says one early YouTube employee. The churn of employees who held that original “free speech while maintaining a civil discourse” ideal has also changed attitudes. The company’s reluctance to be seen as too heavy-handed in moderating content, and consequently to be considered a publisher rather than a platform, has also allowed hate to thrive.
Society has also changed. We live in a world of fake news and polarised political debate. We have lived through the Black Lives Matter movement, and people distrust traditional media more than ever before. And pressure is rising – not just from advertisers, but from politicians. YouTube is waking up to its responsibility, peppering its press releases and public statements with the word.
It is careful to limit that responsibility, though. Last month the UK House of Commons’ select committee for Digital, Culture, Media and Sport debated with YouTube representatives whether “responsibility” also meant “liability” for the content posted, with YouTube pushing back strongly against the suggestion.
The fear of regulation is pushing the company to begrudgingly make some choices. “It’s optics,” says Karbasfrooshan. “They were thrown in the pool in the EU. The EU has been pushing them that if something is hateful or racist they have to act, or they get fines. Article 17 [of the new EU Copyright Directive] is also forcing it into no longer pretending to be a blind platform. They have to actually make this decision. They have to make that call: if something is harmful, they have to take it down.” In the first quarter of 2019, YouTube removed nearly 50,000 videos and 10,000 accounts for violating its cyberbullying and harassment policies.
Despite the changes, the policy on hate speech remains vague. Crowder remains able to post to YouTube – although he now won't make any money from it, and it's likely his videos won't be surfaced by YouTube's algorithm as much. And those watching the platform are uncertain about whether the policy changes will make any difference.
“This is a huge step in the right direction, but I have a healthy dose of scepticism until I see how these new policies actually work in practice,” says Lewis. “We’ve seen a lot of press releases from social media companies that don't lead to much follow-through.”
Circumstances and events may change things, however. “Tech companies have historically attempted to frame all of their policies in terms that still maintain a level of ‘neutrality’,” says Lewis. “But I think more and more we're seeing that this sense of neutrality is a fallacy.”
This article was originally published by WIRED UK