The dangers of tech-driven solutions to COVID-19
Contact tracing done wrong threatens privacy and invites mission creep into adjacent fields, including policing. Government actors might (and do) distort and corrupt public-health messaging to serve their own interests. Automated policing and content control raise the prospect of a slide into authoritarianism. Yet most critics have focused narrowly on classic privacy concerns about data leakage and mission creep, especially the risk of improper government access to and use of sensitive data. Apple and Google released an application programming interface (API), tailored to address those criticisms, that enables apps for proximity tracing and exposure notification. But that approach fails to address more fundamental obstacles to creating a safe and sustainable system of public-health surveillance, and it creates new obstacles of its own.
Enshrining platforms and technology-driven “solutions” at the center of our pandemic response cedes authority to define the values at stake and deepens preexisting patterns of inequality in society. It also ignores platforms’ role in fostering and profiting from the disinformation that hobbles collective efforts to safeguard the public’s health. Effective, equitable pandemic response demands deeper, more structural reforms regulating the platforms themselves.
[Julie E. Cohen is the Mark Claster Mamolen Professor of Law and Technology at Georgetown Law. Woodrow Hartzog is Professor of Law and Computer Science at Northeastern University. Laura Moy is Associate Professor of Law and Director of the Communications and Technology Law Clinic at Georgetown Law.]