Facebook’s Crisis Management Algorithm Runs on Outrage

It’s been almost exactly a year since news broke that Facebook had allowed the personal data of tens of millions of users to be shared with Cambridge Analytica, a consulting company affiliated with Donald Trump’s 2016 presidential campaign. That revelation sparked an investigation by the Justice Department into the company’s data-sharing practices, which has broadened to include a grand jury. Privacy breaches are hardly as serious as ethnic violence, but the ordeal did mark a palpable shift in public awareness about Facebook’s immense influence. Plus, it followed a familiar pattern: Facebook knew about the slip-up, ignored it for years, and, when exposed, tried to downplay it with a handy phrase that Chief Executive Officer Mark Zuckerberg repeated ad nauseam in his April congressional hearings: “We are taking a broader view of our responsibility.”

He struck a similar note with a 3,000-word blog post in early March that promised the company would focus on private communications, attempting to solve Facebook’s trust problem while acknowledging that the company’s apps still contain “terrible things like child exploitation, terrorism, and extortion.” If Facebook wants to stop those things, it will have to get a better handle on its 2.7 billion users, whose content powers its wildly profitable advertising engine.