Should We Let Internet Companies Define How We Express Ourselves?

[Commentary] Google, Facebook, Twitter, and Microsoft have agreed to a “code of conduct” in European Union countries that requires the Internet giants to take down hate speech within 24 hours of posting on their platforms. It’s the latest controversial move in what has been a thorny issue for companies trying to strike a balance between freedom of expression online and curtailing abusive or violent content.

“We remain committed to letting the tweets flow,” said Karen White, Twitter’s head of public policy for Europe. “However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.” Well, that has been the trouble: there isn’t a clear line. In the United States, where these companies are based, much of the speech protected by the Constitution can be downright offensive. Expressions of racism, homophobia, and religious intolerance may be deplorable, but they aren’t illegal in and of themselves.

Platforms like Twitter and Facebook aren’t required to leave legally protected comments standing, of course. They can take down anything they want, and they often do, bowing to forces ranging from public opinion to government pressure to crack down on abusive content, or reversing course when their censors go too far.

In Europe, protections on speech aren’t as sweeping. The European Commission defines “illegal hate speech” as “all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin,” and directs EU member nations to establish criminal and civil penalties accordingly. The new rules are a response to the recent terror attacks in Paris and Brussels, and are explicitly meant to “counter terrorist propaganda.”
