Privacy advocates reject Europe's 'code of conduct' for online speech


In an effort to blunt the spread of racist and extremist content on the web, European Union states along with Google, Twitter, Facebook, and Microsoft have agreed on a so-called "code of conduct" to review – and then delete at their discretion – suspected hate speech. But some civil liberties and Internet advocacy groups worry that anointing tech companies as guardians against offensive speech raises privacy concerns for users, as well as fears of overzealous enforcement of the code.

"The code requires private companies to be the educator of online speech, which shouldn't be their role, and it's also not necessarily the role that they want to play," says Estelle Massé, an EU policy analyst with Access Now, a nonprofit digital advocacy organization based in Brussels. The code is meant to push companies to be more vigilant in removing content that violates their own terms of service but does not necessarily violate European law. The problem for civil liberties groups such as Access Now is that companies may monitor for and remove content merely because it is controversial and they feel that leaving it online exposes them to liability, says Massé.
