Managing health privacy and bias in COVID-19 public surveillance

On April 10, Apple and Google announced their response to the call for digital contact tracing, which would involve users voluntarily downloading an app. While it is seemingly clear that widespread contact tracing and surveillance can help identify coronavirus cases and possible hot spots for new and recurring infections, several questions remain. The first concerns the security and anonymity of one’s personal data. More discussion is needed on just how anonymous the data is and whether it can be easily de-anonymized, uncertainty that may discourage individuals from downloading the app. The platform also needs to ensure that the collected location data does not invite inferences about the infected person and his or her environment, e.g., the use of one’s location as an indicator of neighborhood quality.

Second, who has access to the data also matters. While both companies have made assurances about their handling of collected data and their intent to stop tracking once the pandemic has ended, what expectations have federal and local public health authorities shared around their own data collection and use? How long will the data be retained, and as the retention period lengthens, what is the risk of the data being used for other purposes? In the absence of federal privacy legislation, these are all important considerations. Full encryption of collected health information, both for those who are infected and for people with potential exposure, must be the standard. Without the possibility of an enticing “back door” into the app, individuals who opt in to use the service will be better served and protected from potential misuse by government and other companies.

