Ellen Goodman
NTIA Artificial Intelligence Accountability Policy Report
Alongside their transformative potential for good, artificial intelligence (AI) systems also pose risks of harm. These risks include inaccurate or false outputs; unlawful discriminatory algorithmic decision making; destruction of jobs and the dignity of work; and compromised privacy, safety, and security.
To Fight Online Disinformation, Reinvigorate Media Policy
While social media companies and digital networks are relatively new, the problems of information laundering and manipulation are not. Of course, verbatim application of 20th-century media policy won’t work for today’s digital environment; some of it didn’t work very well last century either. But its core concerns should be taken seriously and its principles—especially transparency, responsibility and structural design to promote news investment—can be adapted for the 21st century.
Facebook and Google: most powerful and secretive empires we've ever known
[Commentary] Google and Facebook have conveyed nearly all of us to this page, and to just about every other idea or expression we’ll encounter today. Yet we don’t know how to talk about these companies, nor how to digest their sheer power. We call them platforms, networks or gatekeepers. But these labels hardly fit. The appropriate metaphor eludes us; even if we describe them as vast empires, they are unlike any we’ve ever known. Far from being discrete points of departure, merely supporting the action or minding the gates, they have become something much more significant. They have become the medium through which we experience and understand the world.
Facebook is not merely a “network” for connection, like the old phone network or electrical grid, as if it had no agency and did not take a piece of every last interaction (or false start) between friends. We rely on Facebook to decide when and how much we interact. These are not mere “edge providers”, peripheral to infrastructure, or mere “applications” that we can select or refuse. The metaphors that we use – empire, medium, undertow – allude to the power of the all-knowing digital companies. Speaking clearly about this power and its effects is critical. Ultimately, the public needs more voice, more choice, more power. In the near term, we should pursue algorithmic accountability, independent auditing and consumer protection scrutiny, before we lose our agency as a public that is something other than their “user base.”
[Ellen P. Goodman is a professor of law at Rutgers University and co-directs the Rutgers Institute for Information Policy & Law (RIIPL). Julia Powles is a legal academic working on technology law and policy at the University of Cambridge.]