Artificial intelligence (“AI”) gives users of digital hand tools (e.g., cell phone, tablet, laptop computer) enhancements that bring with them novel and unresolved security vulnerabilities and risks.1
AI, as used here, refers to narrow or weak AI: the creation of digital systems to do things humans use their minds to do—but do them faster, more accurately, and more consistently—and to generate insights and predictions beyond what humans can do.2 AI systems may be thought of metaphorically as “power tools”3 that augment human work and productivity, particularly when such work can be performed as a “prediction.” One kind of AI system is machine learning:
Machine learning . . . approaches problems as a doctor progressing through residency might: by learning rules from data. Starting with patient-level observations, algorithms sift through vast numbers of variables, looking for combinations that reliably predict outcomes. . . . [W]here machine learning shines is in handling enormous numbers of predictors—sometimes, remarkably, more predictors than observations—and combining them in nonlinear and highly interactive ways. This capacity allows us to use new kinds of data, whose sheer volume or complexity would previously have made analyzing them unimaginable.4
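The quoted description, learning rules from data by sifting predictors for combinations that predict outcomes, can be illustrated with a deliberately tiny sketch. Everything here (the data, the single-predictor threshold rule, the names) is a hypothetical illustration, not any particular product’s method:

```python
# Minimal sketch of "learning a rule from data": scan candidate cutoffs
# on one predictor and keep whichever best separates labeled outcomes.

def learn_threshold(observations, outcomes):
    """Return the cutoff on a single predictor that best predicts outcomes."""
    best_cut, best_correct = None, -1
    for cut in sorted(set(observations)):
        correct = sum(
            (x >= cut) == bool(y) for x, y in zip(observations, outcomes)
        )
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

# Hypothetical patient-level observations: a lab value and a 0/1 outcome.
lab_values = [1.0, 1.2, 2.9, 3.1, 3.4, 0.8]
outcomes   = [0,   0,   1,   1,   1,   0]

cutoff = learn_threshold(lab_values, outcomes)   # learned rule: x >= 2.9
predict = lambda x: int(x >= cutoff)
```

Real machine-learning systems differ in scale, not kind: they search over enormous numbers of predictors and combine them nonlinearly, but the core move, extracting a predictive rule from labeled observations, is the same.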
This essay on AI and security developments proceeds as follows. Part II addresses the Federal Trade Commission’s 2019 settlement with Facebook covering allegations that included deceptive acquisition of data for the company’s AI tools. Part III discusses the problematic use of AI to set credit limits for Apple Card applicants. Part IV introduces Illinois’ Artificial Intelligence Video Interview Act. Part V addresses Executive Order No. 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services. Part VI contains concluding observations.
In 2012, the Federal Trade Commission (“FTC”) filed a complaint alleging that, since 2009, Facebook had engaged in unfair and deceptive practices. Facebook settled the FTC’s allegations. The Commission Order (“2012 Order”) prohibited Facebook from misrepresenting “the extent to which a consumer can control the privacy of any covered information . . . and the steps a consumer must take to implement such controls” and “the extent to which [Facebook] makes or has made covered information accessible to third parties.”5
In 2019, the FTC and the U.S. Department of Justice alleged that Facebook had failed repeatedly to comply with the 2012 Order. For instance, Facebook told third-party developers that, after April 2015, it would cease sharing user data with apps that a user’s Friends used; Facebook, however, “had private arrangements with dozens of . . . ‘Whitelisted Developers,’ that allowed those developers to continue to collect” user data from apps their Friends used.6
On July 24, 2019, Facebook settled the FTC’s charges (“2019 Settlement”).7 Facebook agreed to pay a $5 billion penalty and implement an array of privacy and security safeguards, including some specifically related to Facebook’s use of AI-augmented facial recognition. For example, Facebook “shall not create any new Facial Recognition Templates, and shall delete any existing Facial Recognition Templates,” unless Facebook discloses how it “will use, and . . . share, the Facial Recognition Template for such User, and obtains such User’s affirmative express consent.”8
In April 2020, the FTC’s Bureau of Consumer Protection posted on the FTC’s website a guidance entitled Using Artificial Intelligence and Algorithms (“Guidance”). The Guidance seeks to help companies “manage the consumer protection risks of AI and algorithms.”9 The Guidance references the 2019 Settlement to highlight the need to avoid deceptive practices when collecting sensitive data for AI:
Be transparent when collecting sensitive data. The bigger the data set, the better the algorithm, and the better the product for consumers, end of story . . . right? Not so fast. Be careful about how you get that data set. Secretly collecting audio or visual data—or any sensitive data—to feed an algorithm could also give rise to an FTC action. Just last year, the FTC alleged that Facebook misled consumers when it told them they could opt in to facial recognition—even though the setting was on by default. As the Facebook case shows, how you get the data may matter a great deal.10
AI tools may be defective, due to errors in design (so that they do not “learn” correctly from their data sets), errors contained in the data sets (embedding bias), or errors introduced into the data sets by bad actors. As a 2017 RAND study explained:
[A]n artificial agent is only as good as the data it learns from. Automated learning on inherently biased data leads to biased results. . . . Applying procedurally correct algorithms to biased data is a good way to teach artificial agents to imitate whatever bias the data contains.11
Learning algorithms tend to be vulnerable to characteristics of their training data. This is a feature of these algorithms: the ability to adapt in the face of changing input. But algorithmic adaptation in response to input data also presents an attack vector for malicious users. This data diet vulnerability in learning algorithms is a recurring theme.12
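A minimal sketch of the point, using entirely hypothetical data, shows how a procedurally correct learner trained on biased historical decisions simply reproduces the bias it was fed:

```python
# The "data diet" problem: a correct algorithm applied to biased data
# learns to imitate the bias. Each record is (group, historical_approval).
from collections import defaultdict

def train_approval_rate(records):
    """Learn per-group approval rates from historical decisions."""
    totals = defaultdict(lambda: [0, 0])  # group -> [approved, seen]
    for group, approved in records:
        totals[group][0] += approved
        totals[group][1] += 1
    return {g: a / n for g, (a, n) in totals.items()}

# Hypothetical history in which group "B" was approved far less often.
history = ([("A", 1)] * 80 + [("A", 0)] * 20 +
           [("B", 1)] * 30 + [("B", 0)] * 70)

model = train_approval_rate(history)
# The learned "rule" faithfully reflects the disparity in its training data.
```

Nothing in the algorithm is wrong; the defect enters entirely through the data, which is why a malicious change to the training set (the “attack vector” described above) can silently change what the system learns.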
An alleged example of an unlawfully biased AI algorithm surfaced in 2019 involving the Apple Card. In August 2019, Apple, in partnership with Goldman Sachs as the issuing bank, began inviting consumers to apply for its Apple Card credit card. An Apple press release touted Apple Card’s AI advantages, but did not disclose that AI augmentation would help identify “qualified” customers and set their credit limits.13 In November 2019, Danish entrepreneur David Hansson tweeted that his wife had been denied a “credit line increase for the Apple Card,” although her credit score exceeded his14: “‘My wife and I filed joint tax returns, live in a community-property state, and have been married for a long time. Yet Apple’s black box algorithm thinks I deserve 20x the credit limit she does . . . .’”15 Apple assured Hansson the credit determination did not reflect gender discrimination, “citing the [AI] algorithm that makes Apple Card’s credit assessments.”16 Hansson’s tweets “went viral.”17 Hansson’s wife eventually received “a ‘VIP bump’ to match his [Apple Card] credit limit.”18 The AI malfunction remained unexplained.
Goldman reportedly is “responsible for all credit decisions”19 for Apple Card applicants, and Goldman “implemented” the algorithm.20 Neither Apple nor Goldman, in denying discrimination and defending their product, explained the apparent gender-based discrepancy, the AI algorithm, its role in such decisions, or any affirmative precautions Goldman had taken to prevent the algorithm from generating gender-biased predictions. Instead, Goldman took the position that the algorithm did not use gender as a criterion and therefore could not produce gender-biased predictions. This explanation ignored the inferential power of AI algorithms and may propagate a serious misconception: that algorithmic bias will not exist if the data that trains the algorithm does not contain or reflect bias. As a WIRED report explains:
Goldman landed on what sounded like an ironclad defense: The algorithm, it said, has been vetted for potential bias by a third party; moreover, it doesn’t even use gender as an input. How could the bank discriminate if no one ever tells it which customers are women and which are men?
This explanation is doubly misleading. For one thing, it is entirely possible for algorithms to discriminate on gender, even when they are programmed to be “blind” to that variable. For another, imposing willful blindness to something as critical as gender only makes it harder for a company to detect, prevent, and reverse bias on exactly that variable. . . .
A gender-blind algorithm could end up biased against women as long as it’s drawing on any [data] input or inputs that happen to correlate with gender.21
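The proxy effect the quoted passage describes can be sketched in a few lines. All data, feature names, and numbers below are hypothetical:

```python
# A "gender-blind" model can still discriminate: the gender column is
# dropped, but a correlated proxy feature remains, and a rule learned
# from biased historical outcomes keys on that proxy.

# Each applicant: (gender, proxy_feature, historical_credit_limit).
# The proxy (e.g., a purchase-category code) happens to track gender.
applicants = [
    ("M", 0, 20000), ("M", 0, 18000), ("M", 0, 22000),
    ("F", 1, 1000),  ("F", 1, 1200),  ("F", 1, 900),
]

# "Blind" training set: the gender column is removed ...
blind_training = [(proxy, limit) for _, proxy, limit in applicants]

# ... yet a learner that averages historical limits per proxy value
# reconstructs the gendered disparity through the correlated feature.
def learned_limit(proxy_value, data):
    matches = [limit for proxy, limit in data if proxy == proxy_value]
    return sum(matches) / len(matches)

limit_proxy0 = learned_limit(0, blind_training)  # tracks the male group
limit_proxy1 = learned_limit(1, blind_training)  # tracks the female group
```

The sketch also illustrates the passage’s second point: because gender was deleted, an auditor looking only at the model’s inputs cannot directly measure the disparity it produces.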
With no financial discriminator identified or acknowledged as the cause, gender discrimination—by humans, or embedded in the design of the AI algorithm or trained into the algorithm by flawed or “poisoned” data—appeared a possible cause, unless the Hanssons’ experience was an outlier.
It proved not to be an outlier. On November 9, 2019, Apple co-founder Steve Wozniak tweeted: “The same thing happened to us. I got 10x the credit limit. We have no separate bank or credit card accounts or any separate assets. Hard to get to a human for a correction though. It’s big tech in 2019.”22
That month, New York’s Department of Financial Services (“NYDFS”) opened an investigation into Apple Card’s issuing bank and AI algorithms used to determine credit limits.23 The NYDFS Superintendent explained the investigation would seek “to determine whether New York law was violated and ensure all consumers are treated equally regardless of sex. . . . Any algorithm that intentionally or not results in discriminatory treatment of women or any other protected class of people violates New York law.”24
It may seem startling that a creditor could be liable for unintentional discriminatory treatment resulting from its use of an AI algorithm. But strict liability, or liability without specific intent to discriminate, is the applicable standard under the Equal Credit Opportunity Act (“ECOA”). The ECOA prohibits disparate treatment “against any applicant, with respect to any aspect of a credit transaction—(1) on the basis of race, color, religion, national origin, sex or marital status, or age.”25 ECOA’s implementing regulations provide: “A creditor shall not discriminate against an applicant on a prohibited basis regarding any aspect of a credit transaction.”26 And, the Consumer Financial Protection Bureau’s official interpretation of this rule emphasizes: “Disparate treatment on a prohibited basis is illegal whether or not it results from a conscious intent to discriminate.”27
Companies increasingly use AI to review and rank job applicant resumes (at a speed and scale that human reviewers could not match) and, in some instances, use AI to “analyze applicants’ facial expressions during video job interviews.”28 Such analysis focuses on an array of facial and eye expression cues that the AI model compares to a target profile that purports to be indicative of traits the employer seeks in applicants and traits the employer does not want in applicants.
Companies that use AI as an applicant-selecting tool include Dunkin’ Donuts, IBM, Carnival Cruise Lines, the Boston Red Sox, and Unilever USA.29 Unilever’s algorithm examines videos of applicants “answering questions for around 30 minutes, and through a mixture of natural language processing and body language analysis, determines who is likely to be a good fit.”30
Unilever’s target profile of a preferred candidate’s positive traits includes systemic thinking, resilience, and business acumen.31 It is unclear if Unilever’s target profile excludes disfavored negative traits, such as a lack of candor. Unilever’s AI tool reportedly identifies the applicants that best match the target profile and “returns those to a human recruiter, along with notes from the AI about what it observed in each candidate.”32
Unilever’s AI assesses the presence or absence of such traits in an applicant’s video. It predicts an applicant’s probable “success” by recognizing not the person’s identity, but the degree to which the applicant’s facial expressive traits match those of “previously successful employees.”33 It is unclear whether Unilever scrutinizes its target profile for bias that may be inherent in the traits of successful Unilever employees (which might include gender and racial bias). It is risky to rely on AI’s apparent objectivity and proficiency in selecting candidates or in setting credit limits, even if it applies its rules more consistently than humans ever could. In such cases, “if a company has traditionally skewed toward (or away from) certain categories of people, the AI will learn to do the same unless the training is handled very carefully to avoid this outcome.”34
Possibly concerned by such risks, Illinois’ legislature, on May 29, 2019, unanimously passed35 the Artificial Intelligence Video Interview Act (“Act”).36 The Act, which came into effect on January 1, 2020,37 is reportedly the first state statute aimed at regulating the use of AI in the employee hiring process.38
The Act applies to an employer in Illinois who wants to consider hiring applicants for “positions based in Illinois,” who wants to ask such applicants to “record video interviews,” and who wants to use AI to analyze the “applicant-submitted video.”39 To engage in such AI-augmented hiring practices, an employer must obtain the applicant’s prior consent: “An employer may not use artificial intelligence to evaluate applicants who have not consented to the use of artificial intelligence.”40
To obtain the requisite consent, an employer must meet three conditions: (1) notify the applicant that AI may be used to analyze the applicant’s submitted video interview; (2) provide the applicant with an explanation of how the AI works and what “general types of characteristics it uses to evaluate applicants”; and (3) obtain the applicant’s consent to be evaluated by AI.
The Act requires that all three conditions be met “before the interview,” but does not specify how long before the interview those conditions must be met. Thus it is unclear whether there is a minimum period before the interview when an employer must give an applicant an explanation of “how” the AI “works.” The Act is silent on whether employers may give a desired category of applicants a written explanation far in advance of an interview, and give a less preferred category of applicants an oral explanation minutes “before the interview.” Doing so would risk impermissible bias and might deny some applicants a fair opportunity to consider the significance of the notice before consenting or refusing to consent to AI analysis of their video interview.
The Act does not define “artificial intelligence,” nor set criteria for what constitutes a sufficient explanation of “how” the AI “works,” nor explain the term “applicant-submitted video.” It would appear the Act applies to video interviews that applicants initiate, on a digital device, and then upload or “submit” to the employer.
The uncertainty over the timing of the notice, the requisite level of explanation of “how” the AI “works,” and the lack of definitions of key terms such as “artificial intelligence” set the stage for what could be an employer/applicant impasse: the applicant might object to an opaque or uninformative explanation of “how” the AI “works” and condition consent on receipt of an improved explanation; the employer might refuse to give it, decline to interview the applicant, and thereby exclude the applicant from hiring consideration. Other applicants, on learning of such results, might consent rather than risk rejection.
Thus, in practice, the Act’s required consent to AI analysis of an interview video may dwindle to a consent ritual similar to the pre-surgical requirement of “informed consent”: a last-minute exercise, often conducted with haste and opacity, that provides little, if any, protection to the patient. However, surgeons and anesthesiologists seek to heal, not select, a patient. Physicians answer a patient’s questions to allay fears of surgery’s uncertain outcome, not to select which patients qualify for surgery. Employers, not bound by medical ethics, may be less patient or unwilling to answer questions about their use of AI, or may tag an applicant’s request for an improved explanation as a negative trait or a departure from the target profile.
It is noteworthy that the requisite explanation of “how” the AI “works,” which includes explaining “what general types of characteristics it uses to evaluate applicants,” would not appear to protect applicants against an employer’s deliberate or inadvertent use of biased or otherwise defective AI analysis of a video interview.
Perhaps most problematically, the Act omits any requirement that employers secure their interview-video AI tools against unauthorized access. Such security, at a minimum, might include audits to check whether the AI software, algorithm, or training data has been accessed and modified. The more successful the company and the more essential it is to U.S. critical infrastructure or national security, the greater the chances that its AI tools will be targeted by bad actors. Competitors might seek access to modify the AI training data in order to impair the company’s ability to select the most qualified candidates. Foreign adversaries might pursue access to doubly distort the data, causing AI to underrate qualified candidates and overrate candidates who might be sympathetic to, or plants of, the adversary. AI is remarkably susceptible to such hacks and corruption of its training data:
AI models can be hacked by inserting a few tactically inserted pixels (for a computer vision algorithm) or some innocuous looking typos (for a natural language processing model) into the training set. . . . Let’s say you have a model you’ve trained on data sets. It’s classifying pictures of cats and dogs. . . . People have figured out ways of changing a couple of pixels in the input image, so now the network is misled into classifying an image of a cat into the dog category. . . . The image still looks the same to our eyes. . . . But somehow it looks vastly different to the AI model itself.44
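A toy illustration of the misclassification described above, using an assumed five-“pixel” linear classifier rather than a real vision model, shows how small, targeted input changes can flip a prediction:

```python
# Toy adversarial example: a tiny linear "cat vs. dog" classifier flips
# its decision after two high-weight inputs are nudged in the direction
# of the model's weights. Real attacks apply the same idea to deep
# networks, where the per-pixel changes can be imperceptibly small.

weights = [0.9, -0.4, 0.1, -0.8, 0.3]   # assumed toy model weights
bias = 0.0

def classify(pixels):
    score = sum(w * p for w, p in zip(weights, pixels)) + bias
    return "dog" if score > 0 else "cat"

image = [0.1, 0.9, 0.2, 0.8, 0.1]       # a toy "cat" image

adversarial = list(image)
adversarial[0] += 0.6   # push a positive-weight input up ...
adversarial[3] -= 0.6   # ... and a negative-weight input down

label_clean = classify(image)          # "cat"
label_attacked = classify(adversarial) # "dog": same image to a human eye
                                       # (in the real, high-dimensional case)
```

In a real image model, the perturbation budget is spread across thousands of pixels, so each individual change is far too small to notice, which is what makes the attack described in the quotation so insidious.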
The Global Positioning System (“GPS”), a U.S.-government-owned utility, provides positioning, navigation, and timing (“PNT”) services and information to civilian and military users worldwide.45 Positioning data enable one to determine accurately one’s precise location and orientation; navigation data give one the “ability to determine current and desired position . . . and apply corrections to course . . . and speed to attain a desired position anywhere around the world”; and timing data enable one to “acquire and maintain accurate and precise time . . . anywhere in the world and within user-defined timeliness parameters.”46
GPS provides three-dimensional navigational data to ships, aircraft, trains, and mobile phones. GPS also provides a fourth dimension of data crucial to the reliable operation of critical infrastructure—precise time and frequency data for synchronizing devices and systems.47 As observed in a recent Scientific American article, “[a]lthough we think of GPS as a handy tool for finding our way to restaurants and meetups, the satellite constellation’s timing function is now a component of every one of the 16 infrastructure sectors deemed ‘critical’ by the Department of Homeland Security.”48
Critical infrastructure’s dependence on GPS timing signals means that a disruption of GPS signals, or corruption or modification of GPS timing data, could de-synchronize devices and systems that cannot operate properly or safely in such a destabilized state. Because GPS signals must travel over 12,000 miles from satellites to Earth-based receivers, they are attenuated, weak, and vulnerable to being “jammed” (depriving the user of signal) or “spoofed” (when a slightly stronger signal delivers false data of the recipient’s location and time at that location).49 Experts express concern that adversaries or terrorists could launch a coordinated jamming and spoofing attack against the GPS system. Such an attack could
severely degrade the functionality of the electric grid, cell-phone networks, stock markets, hospitals, airports . . . all at once, without detection. The real shocker is that U.S. rivals do not face this vulnerability. China, Russia and Iran have terrestrial backup systems that GPS users can switch to and that are much more difficult to override than the satellite-based GPS system. The U.S. has failed to achieve a 2004 presidential directive to build such a backup.50
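One countermeasure implied by this discussion is a timing cross-check: compare each GPS-reported time interval against a local holdover clock and flag jumps the local oscillator could not plausibly produce. The sketch below is a hypothetical illustration; the threshold and function names are assumptions, not any real receiver’s API:

```python
# Hypothetical spoofing sanity check: a spoofed signal that drags the
# reported time away from truth will disagree with a local holdover
# clock by more than the oscillator's plausible drift.

MAX_DRIFT_SECONDS = 0.001  # assumed holdover-clock error bound per interval

def spoof_suspected(local_elapsed, gps_elapsed):
    """Flag a fix whose GPS-reported elapsed time disagrees with the
    local clock by more than the assumed drift bound."""
    return abs(gps_elapsed - local_elapsed) > MAX_DRIFT_SECONDS

within_bound = spoof_suspected(1.000, 1.0004)  # 0.4 ms: plausible drift
suspicious = spoof_suspected(1.000, 1.250)     # 250 ms jump: flag it
```

A check of this kind is one crude form of the “terrestrial backup” logic: an independent time source lets a user notice when the satellite signal stops telling the truth.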
To reduce GPS vulnerabilities and improve its resilience, the President, on February 12, 2020, issued Executive Order No. 13905, Strengthening National Resilience Through Responsible Use of Positioning, Navigation, and Timing Services (“EO”).51 The EO emphasizes that disruption of GPS-dependent PNT services—or their “manipulation”—“has the potential to adversely affect the national and economic security of the United States.”52 The EO announces a U.S. policy to “ensure that disruption or manipulation of PNT services does not undermine the reliable and efficient functioning of its critical infrastructure.”53
To implement that policy of continuity of PNT services, the EO introduces a new standard—“responsible use of PNT services,” vaguely defined as “the deliberate, risk-informed use of PNT services, including their acquisition, integration, and deployment, such that disruption or manipulation of PNT services minimally affects national security, the economy, public health, and the critical functions of the Federal Government.”54
When applied, the standard would appear to require users of PNT services to avert “disruption or manipulation” or, failing that, to operate their PNT devices and services at a level of resilience that limits any bad actor’s “disruption or manipulation” to a minimal effect on “national security, the economy, public health, and the critical functions of the Federal Government.”
The EO does not define crucial terms in the standard—“disruption,” “manipulation,” and “minimally affect”—nor authorize federal agencies to issue regulations to clarify them. With such terms undefined, a “user” will have difficulty navigating or positioning its activities into compliance with the standard when it emerges in agency-generated “PNT profiles” (explained below).
The EO does not define the term “user,” but it is reasonable to infer that the EO aims at enterprise, not consumer, users. Enterprise users might include, without limitation, financial institutions, telecoms (4G and 5G), mobile phone and map app makers, airlines, trains, oil and gas companies, and bulk power system operators.
To coax PNT service users to improve resilience, the EO requires the Secretary of Commerce (“SECCOM”), by February 12, 2021, and in coordination with the heads of Sector-Specific Agencies (“SSAs”), to “develop and make available” to an undefined set of “appropriate agencies and private sector users” what it terms “PNT profiles.”55 The EO defines “PNT profiles” to mean: “a description of the responsible use of PNT services—aligned to standards, guidelines, and sector-specific requirements—selected for a particular system to address the potential disruption or manipulation of PNT services.”56
Deconstructed, the EO seems to require the SECCOM and SSAs to develop standards of resilience that will apply to specific categories of PNT service users and will be aimed at minimizing “potential disruption or manipulation of PNT services.” The set(s) of user-specific resilience standards will be referred to as “PNT profiles.” The EO expressly assumes, without explanation, that making PNT profiles available is something the government can do better than industry, and once available, the PNT profiles will “enable” public and private PNT service users to perform three tasks toward improving PNT service resilience:
The EO does not require that PNT users meet or try to meet their respective PNT profiles. Instead, it mandates that, within ninety days of the PNT profiles’ being made available, federal government agencies, working through the Secretary of Homeland Security, “develop contractual language for inclusion of the relevant information from the PNT profiles in the requirements for Federal contracts for products, systems, and services that integrate or utilize PNT services.”58
To inform development of PNT profiles, the National Institute of Standards and Technology (“NIST”) issued, on May 27, 2020, a Request for Information (“RFI”) to PNT vendors and service users. The RFI asks respondents to identify and describe processes, procedures, approaches, or technologies to “manage cybersecurity risks to PNT services,” “detect disruption or manipulation of PNT services,” and “recover or respond to PNT disruptions.”59 NIST has made publicly available all relevant responses.60
AI tools derive their augmentation capabilities or “learn” from exposure to dynamic data sets. The fact that AI “learns” means its learning process can be subverted: it may be hacked and tampered with so that the model that emerges from what AI “learns” may fail to make accurate forecasts, or generate biased predictions, or malfunction in other ways an adversary intends.
Algorithms encounter things they have not been trained on and cannot recognize, and they must be dynamically retrained to identify such things correctly.61 AI is as dynamic and protean as the data that trains it, but AI cannot accurately predict “outside the box” of the data that trains it. AI that works right today may not work right tomorrow. Bad actors may manipulate its data or algorithms. AI machines may be trained to perform “adversarial AI” that confuses and subverts the operations of other AI machines.62 AI thus presents continuous data quality challenges that necessitate checking and verifying data throughout the development process. Routine re-verifications may be viewed as “azimuth checks.” In land navigation, “azimuth” expresses direction, and each “azimuth check” re-verifies whether one’s route is on course to the destination. AI development needs its own “azimuth checks” to verify whether its developers are performing the task at hand correctly and will create a model that forecasts accurately.63 AI’s dynamic intersections with security make “azimuth checks” of data and algorithms a necessary safeguard no matter “how noble in reason! how infinite in faculty!”64 the AI may seem to be.
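An “azimuth check” for AI data might be sketched as a routine drift test that re-verifies incoming data against the data the model was trained on. The tolerance and data below are hypothetical:

```python
# Sketch of a data "azimuth check": periodically confirm that incoming
# data still resembles the training data, and flag drift before the
# model quietly goes off course.

def mean(xs):
    return sum(xs) / len(xs)

def azimuth_check(training_sample, incoming_sample, tolerance=0.25):
    """Return True if the incoming data's mean stays within a relative
    tolerance of the training data's mean (a crude drift test)."""
    baseline = mean(training_sample)
    drift = abs(mean(incoming_sample) - baseline) / abs(baseline)
    return drift <= tolerance

train = [10, 11, 9, 10, 10]    # data the model "learned" from
fresh_ok = [10, 12, 9]         # still on course
fresh_bad = [25, 30, 28]       # off course: investigate or retrain

on_course = azimuth_check(train, fresh_ok)
off_course = azimuth_check(train, fresh_bad)
```

Production systems use richer distributional tests than a single mean, but the discipline is the same: re-verify the data’s “direction” on a schedule, not just once at training time.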