Facebook Makes its Community Standards Worse for Speech, Again

August 12, 2021  •  By Alec Greven

Facebook is the largest social media platform in the world and host to a breathtaking amount of political speech. The company markets itself as a fierce defender of democracy, free expression, and human rights. However, Facebook recently made further revisions to its Community Standards that threaten a sweeping range of political speech across the ideological spectrum. The new standards are excessively broad and, if enforced literally, could exclude a massive amount of political speech.

Facebook’s Community Standards prohibit “hate speech,” which the platform defines as “a direct attack against people – rather than concepts or institutions – on the basis of what [they] call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease.” In a troubling new turn, Facebook’s recently revised Community Standards now also prohibit “[c]ontent attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination against the people associated with that protected characteristic.” This revision goes beyond attacks on people to cover the much broader category of attacks on institutions or concepts associated with people’s protected characteristics. Facebook warns users, “Do not post” the kind of material described by the revised standards, but also notes that such posts “require additional information and/or context to enforce.”

Unfortunately for Facebook users, the revised Community Standards do not offer enough guidance to clarify what additional information or context is required to prevent a post that violates these standards from being removed.

The range of speech that could be considered an “attack” on the “concepts, ideas, practices, or beliefs” associated with every single protected characteristic would ensnare a tremendous amount of political speech that neither promotes violence nor even expresses hate. Further, the limited information Facebook provides about the context it takes into account is inadequate for users posting in a complex and constantly changing world. A recent case decided by the Oversight Board, which ruled against Facebook for improperly removing political speech critical of the Chinese government, is just one real-world example of this problem.

For instance, what exactly constitutes prohibited “intimidation or discrimination”? Is Scientology a protected religious affiliation? Does Facebook consider obesity a “serious disease”? The current Community Standards fail to answer these questions and leave users in the dark.

Taken literally, Facebook’s new standards would appear to prohibit a call to boycott China in response to the country’s abusive treatment of Uyghurs, or arguments to withhold donations from the Catholic Church over its attitudes towards LGBT people. Either call to action could be interpreted as an “attack” on practices (China’s arbitrary detention of Uyghurs, the Catholic Church’s advocacy surrounding sexuality) that are likely to contribute to discrimination against people on the basis of a protected characteristic: ethnicity in the first case, religious affiliation in the second.

Facebook has given itself wiggle room to avoid these outcomes by saying it will not apply the standards literally and will consider additional context prior to enforcement. The company explains that the context it considers can include, but is not limited to, “content that could incite imminent violence or intimidation; whether there is a period of heightened tension such as an election or ongoing conflict; and whether there is a recent history of violence against the targeted protected group. In some cases, [Facebook] may also consider whether the speaker is a public figure or occupies a position of authority.” Users are wronged when they cannot tell which posts Facebook will act on, and by that measure this list of factors is no help at all. Before posting, a user has no clear idea which of the posts above would pass Facebook’s murky context test, or what circumstances would let them avoid having their posts removed.

To illustrate this complexity, imagine a Facebook user wants to post a statement that condemns the Chinese government (an institution) for repression of the Uyghur people and calls for Americans to stop traveling to China (an act of discrimination that will likely negatively impact people of Chinese ethnic descent) in protest. Let’s say the user is not a public figure, that the post is made in the week leading up to a United States presidential election, and that Asian Americans have recently been subject to increased acts of violence and discrimination by Americans who blame China for the spread of COVID-19. All of these details are contextual factors that Facebook acknowledges it takes into consideration when deciding whether to act. What do they tell the user about whether the post is allowed? Very little. The post would likely violate Facebook’s literal guidelines, and the contextual factors surrounding the post are so complex – and Facebook’s guidelines so vague – that it is impossible to predict whether such a post would be allowed. Admittedly, setting standards for content moderation is very hard, but the current guidelines do not do nearly enough to inform users of what speech is permissible or prohibited.

There is a second problem. If the standards are not enforced as written, then they can be enforced arbitrarily or in a biased manner, which is itself a wrong. Would all of the violating posts described above come down? If not, which ones get to stay, and why? Selective enforcement is inevitable, and it will chill permissible speech among users who take care to avoid sanction. It is even more troubling if these questions are answered according to the feelings of individual moderators or according to internal Facebook guidance that the public cannot see.

Facebook might worry that, if it publishes explicit standards about the context required to avoid removal, bad actors will game the system, tailoring hateful posts just carefully enough to survive Facebook’s test for removal. This is a legitimate concern. But the approach comes with costs: hiding the required context harms all users, the overwhelming majority of whom are not bad actors. In effect, Facebook’s approach may prevent some gaming of its system, but potentially at the expense of chilling a great deal of political speech online.

A system of free expression is never without its consequences. Allowing people to speak more freely does invite speech that can be deeply offensive to others. Nonetheless, our democracy is stronger when speech is free to move us towards a better society. As Justice Louis Brandeis wrote in his concurrence in Whitney v. California, “If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.” This sort of engagement is precisely what Facebook should be seeking on its platform, if it actually believes in its mission to “give people a voice” and “build connection and community.”

Of course, Facebook is a private company that can set its own standards and prohibit and moderate content on its platform. Nonetheless, companies can do a lot of things that are perfectly legal while undermining their own mission, democratic values, and the free exchange of ideas. Facebook’s revised Community Standards do exactly that. The company should not promote itself as a champion of democracy and free speech while failing to protect these values on its own platform. If Facebook truly cares about free speech, then it needs to take these new rules back to the drawing board or, at minimum, provide real clarity about the context in which they will apply.
