YouTube Hate Speech Policy Rollout Backfires Predictably

June 14, 2019 • By Luke Wachob

The latest attempt by a major social media platform to implement sweeping new content rules met predictable backlash, as YouTube’s effort to remove “hate speech” and white supremacist content swept up numerous historical videos and clips from educators. The episode was another reminder of the difficulty of policing user-generated speech at scale on Internet platforms.

YouTube announced its policy change in a June 5 blog post:

“Today, we’re taking another step in our hate speech policy by specifically prohibiting videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status. This would include, for example, videos that promote or glorify Nazi ideology, which is inherently discriminatory. Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.”

Upon rollout, the policy immediately resulted in the removal of videos from journalists, educators, and other producers of educational content. This came despite YouTube’s claim that it would be sensitive to videos featuring discussion, education, and analysis rather than advocacy of the ideologies it aims to remove:

“We recognize some of this content has value to researchers and NGOs looking to understand hate in order to combat it, and we are exploring options to make it available to them in the future. And as always, context matters, so some videos could remain up because they discuss topics like pending legislation, aim to condemn or expose hate, or provide analysis of current events. We will begin enforcing this updated policy today; however, it will take time for our systems to fully ramp up and we’ll be gradually expanding coverage over the next several months.”

Despite that note of caution, the policy was overbroad right out of the gate. The logical conclusion is that YouTube was well aware of the risks of the policy it was pursuing and took measures to mitigate them, and those measures failed anyway. While some wrongfully removed videos from prominent accounts have since been reinstated, smaller accounts may not be able to harness public outrage to reverse a bad moderation decision. The result is a platform where groups with clout receive systematically better treatment than the average Joe.

As I’ve noted before, it’s virtually impossible for social media platforms to craft or implement content moderation practices that will satisfy everyone. The range of opinions about where to draw the line between acceptable and unacceptable speech is enormous. Amidst public pressure, companies struggle to articulate (let alone practice) consistent standards. Yet changing rules frequently in response to the latest controversy is unlikely to result in a workable and cohesive policy overall. In trying to respond to backlash, social media companies create more backlash.

Facebook CEO Mark Zuckerberg and others appear to want the government to save them the trouble by taking the responsibility for content moderation off their hands. Some politicians are eager to take on the task, whether they understand the basics of the Internet or not. Yet even a competent government would be constrained by the First Amendment in its ability to moderate content. Unlike private actors, government generally cannot discriminate on the basis of viewpoint.

Even as private companies fail to satisfy just about anyone when it comes to moderating content, they remain the only entities with the flexibility to eventually find a better way. Companies are also more likely to adapt to changing technologies than the notoriously sluggish government. Legislative proposals to regulate online speech, such as the Honest Ads Act (a key provision of H.R. 1), threaten to stifle the constant innovation that has made the Internet so useful for speakers.

Legislators and regulators cannot foresee the ways Americans will wish to speak and communicate in the future, and they tend to undervalue the benefits of free speech. The result is a tendency toward strict regulation that constrains future technology and insulates major firms from competition. We should remember that companies often welcome regulation not out of the goodness of their hearts, but out of the belief that it will benefit them.

Larger companies are better positioned than smaller firms to absorb the costs of government regulation. Facebook employs tens of thousands of people to ensure security and safety on its platform. YouTube is owned by Google, one of the world’s largest companies, and Twitter is a giant in its own right. Too few policymakers consider the ramifications for start-ups and smaller competitors when designing the rules all platforms must follow.

Internet speech laws passed in Maryland and Washington were so restrictive and unclear that even Google temporarily stopped selling state political ads in both places. If the actions of private companies strike you as frustratingly inept, just imagine what a circus it would be to have state governments or Congress take the wheel, or work hand-in-glove with Big Tech. Policymakers cannot even make workable rules for ad disclaimers; what makes anyone think they can regulate content effectively?

All of this is to say that even as social media companies struggle, their drift toward government is a worrying sight. A better path for everyone would be for YouTube and others to decide what rules they support, state them clearly, and enforce them consistently, then let speakers and audiences react as they may.
