At the time of this writing in mid-2020, the COVID-19 pandemic has gripped the world and frustrated health experts. In the interest of “flattening the curve” of new cases, state and local government officials have implemented a variety of legal measures including stay-at-home orders,1 social distancing requirements,2 and mandates to wear masks in public.3 These legal responses to the pandemic have created both new sources of data about people and new avenues for accessing existing data that may have been difficult to access before the pandemic.
This survey addresses four common scenarios involving the data streams these policies have fostered. The impact of those data streams on people’s privacy is just starting to be understood. The term “privacy” in this survey refers to an individual’s ability to control disclosure of her personally identifiable information. Under that definition, the survey explores some early answers to the question of how the COVID-19 pandemic has impacted people’s privacy in the U.S. context.
During the COVID-19 pandemic, millions have abruptly discontinued face-to-face activities such as schooling, employment, and social meetings and replaced them with substitutes conducted exclusively from home. This has been made possible in part because of the explosive growth in the use of video conferencing software, most notably Zoom.4 The use of Zoom has produced both new data—information about people’s use and habits during video conferences—and a new means to access an array of other data about Zoom users—information about users’ devices and the very information exchanged among users that would have otherwise been delivered in person. Access to these data streams, by both Zoom and others, has already impacted people’s privacy in at least two ways.
First, the Zoom platform has suffered hacks and data breaches. Although Zoom presented itself as a secure platform,5 Zoom’s security features came under widespread scrutiny, including from the New York State Attorney General.6 Uninvited participants have allegedly accessed numerous videoconferences, viewing these meetings7 or “Zoombombing” the proceedings by displaying graphic images through screen sharing features.8 Zoom’s alleged failure to provide a platform that protects people’s privacy has spurred the filing of several class actions based on theories of negligence, breach of implied contract, and invasion of privacy, as well as violations of the California Consumer Privacy Act and unfair competition laws.9
Second, the exponential growth in Zoom’s user base gave the company access to new, valuable data. Class sessions, professional meetings, and social gatherings that in the past would have happened in person are taking place almost exclusively on Zoom. Zoom, therefore, has impacted people’s privacy by having direct access to information that would have been incredibly challenging for any single digital platform to gather before the pandemic.10 Realizing its value, Zoom allegedly mines its expanding data trove and sells it to advertisers and intermediaries.11 Additional class action filings have focused on Zoom’s data sales, alleging negligence, breach of implied contract, unjust enrichment, and violations of state consumer protection laws and the California Consumer Privacy Act.12
Public health officials have long used contact tracing to limit the spread of communicable diseases by notifying the potentially exposed and directing appropriate resources strategically.13 Contact tracing apps seek to improve this process by combining location data with information about who has contracted the COVID-19 virus.14 The privacy implications have the potential to be significant if enough people use contact tracing apps.15
Many features of contact tracing apps can affect privacy, but three key factors stand out: which organization deploys the app, what type of location data it collects, and how it gathers that data. Public health authorities deploying contact tracing apps must abide by laws and data management practices that aim to strike a balance between important public health interests and individual privacy.16 For employees required to use such apps as a condition of returning to work,17 some laws limit employers’ collection and use of employees’ health information.18 But employers may have fewer limitations on collecting location data.19 Other parties who have access to contact tracing data would generally have the fewest restrictions on using individuals’ health or location data.20
A relative location data system created via Bluetooth arguably impacts privacy less than a geolocation-based system.21 A relative location system operates only when devices come within range of each other, while a geolocation system must monitor a device’s location continuously.22 A similar range of privacy consequences can be found in how the apps gather data. An app with direct user inputs and specific opt-ins impacts an individual’s privacy less than an app that automatically collects an array of data without specific user consent.23 A contact tracing app that automatically collects user data, uses a geolocation system, and shares that information with government agencies not involved in public health will have the greatest impact on individual privacy.
As of mid-2020, deployment and use rates of contact tracing apps in the United States have been relatively low.24 Accordingly, the privacy impacts of contact tracing apps are most likely to be felt among smaller groups with a higher app-usage rate, like among the employees of an organization.
In conjunction with governments promulgating public health policies,25 other institutions are taking measures to help stop the virus’ spread.26 Employers, schools, and organizations open to the public have begun taking patrons’ and employees’ temperatures and asking them questions about symptoms associated with COVID-19.27 This data can help them bar physical entry to people who exhibit relevant symptoms and may help slow the virus’ spread.28
Disease symptom data is not a new type of information for HIPAA-covered entities or public health organizations,29 but many other organizations are gathering this information for the first time. Businesses are striving to balance the potential liabilities resulting from a COVID-19 infected workforce30 with the privacy implications of gathering (and potentially sharing)31 employees’ and patrons’ health information daily.32 But limiting the spread of the virus arguably qualifies as a condition that exempts employers from the Americans with Disabilities Act’s prohibition on collecting employee health information.33
Nevertheless, employers must ensure that the manner in which they collect, use, and share this health information does not discriminate against protected classes.34 Data management best practices can help strike the necessary balance between business needs and privacy. By focusing on data security measures like limiting who has access to employee symptom data and retaining such data for only a limited time,35 employers can potentially reduce the spread of COVID-19 in the workplace while protecting employee privacy interests. Careful handling of sensitive information will allow employers to slow the spread of the virus while at the same time complying with anti-discrimination laws and avoiding other privacy-related liability, such as defamation claims36 and violations of data breach laws.37
Existing technologies have been redeployed in attempts to deal with the COVID-19 pandemic. This brief discussion highlights some new data sources as well as new ways of using existing data sources.
Facial recognition technologies have been seen as an alternative to high-traffic points of physical contact, such as fingerprint readers.38 There are some indications that these technologies can be used to identify those who may violate pandemic-related public health regulations,39 but the widespread use of face coverings may impact facial recognition’s efficacy.40 The use of facial recognition technologies in the COVID-19 context, therefore, has the potential to impact privacy, particularly in states that protect biometric information.41
Researchers have developed algorithms utilizing artificial intelligence systems (“AIs”) to aid in the diagnosis42 and treatment43 of COVID-19. These algorithms process vast amounts of health information. While HIPAA privacy protections generally apply to the health information used in this research, similar protections do not extend to other instances of pandemic-related data mining. For example, researchers have run social media postings through location-identifying algorithms to find instances of people breaking quarantine rules.44 Other AIs have been able to predict outbreaks of the virus by analyzing social media posts.45 Both strategies ostensibly aim to aid public health officials by using what could be considered public information, and both are further examples of pandemic responses finding new avenues to existing data.
But because of social distancing orders, social media information has become more important in many people’s lives. To maintain their mental health under quarantine, people divulge on social media what they may have shared only in person before the pandemic.46 Accordingly, the balance between privacy and public health may need to take greater account of privacy in the wake of COVID-19.
At the time of this writing, the privacy impacts of the COVID-19 pandemic are just starting to be observed. It may take quite a while to fully grasp the extent of COVID-19’s impact on privacy in the United States. What can be said, though, is that these early responses to the pandemic have created both new sources of data about people and new avenues to existing data that may have been difficult to access before the pandemic. Future analyses of the privacy impacts of the COVID-19 pandemic could benefit by considering this distinction.