How Facebook unevenly silences posts about discrimination, censoring important conversations, while often allowing racist content to remain
In making decisions about the limits of free speech, Facebook often fails the racial, religious and sexual minorities CEO Mark Zuckerberg says he wants to protect. The 13-year-old social network is wrestling with the hardest questions it has ever faced as the de facto arbiter of speech for the third of the world’s population that now logs on each month.

In February, amid mounting concerns over Facebook’s role in the spread of violent live videos and fake news, Zuckerberg said the platform had a responsibility to “mitigate the bad” effects of the service in a more dangerous and divisive political era. In June, he officially changed Facebook’s mission from connecting the world to community-building.

The company says it now deletes about 288,000 hate-speech posts a month. But activists say that Facebook’s censorship standards are so unclear and biased that it is impossible to know what one can or cannot say.
The result: Minority groups say they are disproportionately censored when they use the social-media platform to call out racism or start dialogues.

“Facebook is regulating more human speech than any government does now or ever has,” said Susan Benesch, director of the Dangerous Speech Project, a nonprofit group that researches the intersection of harmful online content and free speech. “They are like a de facto body of law, yet that law is a secret.”