Chapter 1: Emerging Threats in the AI Era

Following is a summary of Part I, Chapter 1 of the National Security Commission on Artificial Intelligence's final report.

The U.S. government is not prepared to defend the United States in the coming artificial intelligence (AI) era. AI applications are transforming existing threats, creating new classes of threats, and further emboldening state and non-state adversaries to exploit vulnerabilities in our open society.1

AI technologies exacerbate two existing national security challenges:

First, digital dependence in all walks of life increases vulnerabilities to cyber intrusion across every segment of our society: corporations, universities, government, private organizations, and the homes of individual citizens. In parallel, new sensors have flooded the modern world. The internet of things (IoT), cars, phones, homes, and social media platforms collect streams of data, which can then be fed into AI systems that can identify, target, and manipulate or coerce our citizens.2

Second, state and non-state adversaries are challenging the United States below the threshold of direct military confrontation by using cyber attacks, espionage, psychological and political warfare, and financial instruments. Adversaries do not need AI to conduct widespread cyber attacks, exfiltrate troves of sensitive data about American citizens, interfere in our elections, or bombard us with malign information on digital platforms. However, AI is starting to change these attacks in kind and in degree, creating new threats to the U.S. economy, critical infrastructure, and societal cohesion.3

The prospect of adversaries using machine learning (ML), planning, and optimization to create systems to manipulate citizens’ beliefs and behavior in undetectable ways is a gathering storm.4

Five AI-related threats already have been, or soon will be, developed and used against the United States.

AI-Enabled Information Operations

Mapping tensions and exploiting societal divisions with AI-generated propaganda.

Data Harvesting and Targeting of Individuals

Harvesting personal data on U.S. citizens and using AI to target them individually.

Accelerated Cyber Attacks

More precise, tailored, fast, automated, stealthy, persistent, effective, and large-scale.

Adversarial AI

Manipulating, evading, poisoning, stealing, and biasing AI systems.

AI-Enabled Biotechnology

Strategic threats to competitiveness; operational threats from breakthroughs.

1. AI-Enabled Information Operations.

AI and associated technologies will increase the magnitude, precision, and persistence of adversarial information operations. AI exacerbates the problem of malign information in three ways:

Message: AI can produce original text-based content and manipulate images, audio, and video, including through generative adversarial network (GAN)-enabled and reinforcement learning (RL)-enabled deep fakes that will be very difficult to distinguish from authentic messages.

Audience: AI can construct profiles of individuals’ preferences, behaviors, and beliefs to target specific audiences with specific messages.

Medium: AI can be embedded within platforms, such as through ranking algorithms, to proliferate malign information.

AI-enabled malign information campaigns will not just send one powerful message to 1 million people, like 20th-century propaganda. They will also send a million individualized messages—configured on the basis of a detailed understanding of the targets’ digital lives, emotional states, and social networks.5 Rival states are already using AI-powered malign information.

In the United States, the private sector has taken the leading role in combating foreign malign information. Social media companies in particular have extensive operations to track and manage information on their platforms. But coordination between the government and the social media firms remains ad hoc. We need a more integrated public-private response to the problem of foreign-generated disinformation.

The Government Should

Create a Joint Interagency Task Force and Operations Center.

Congress has authorized a Foreign Malign Influence Response Center to be established within the Office of the Director of National Intelligence (ODNI).6 The government should use this authority to create a technologically advanced, 24-hour task force and operations center to lead and integrate government efforts to counter foreign-sourced malign information.

Fund the Defense Advanced Research Projects Agency (DARPA) to coordinate multiple research programs to detect, attribute, and disrupt AI-enabled malign information campaigns and to authenticate the provenance of digital media.

Additional funding would amplify ongoing DARPA research programs to detect synthetic media and expand its efforts into attributing and disrupting malign information campaigns.7 However promising individual detection technologies may prove to be, funding complementary technologies that authenticate the provenance of digital media will provide a more technologically robust means to prevent the impersonation of trusted sources of information.8 DARPA should pursue these programs and help transition the resulting technologies and applications to government departments and agencies to assist with detecting, attributing, and disrupting malign information campaigns in real time.

Create a task force to study the use of AI and complementary technologies, including the development and deployment of standards and technologies, for certifying content authenticity and provenance.

The White House Office of Science and Technology Policy should take the lead in creating this task force. In response to the challenges of misinformation, efforts are underway to develop standards and pipelines aimed at certifying the authenticity and provenance of audiovisual content.9 These efforts make use of technologies, including encryption and fragile watermarking, to secure and track the expected transformations of content via production and transmission pipelines.
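The core idea behind such certification pipelines can be illustrated with a minimal, hypothetical Python sketch using only the standard library: the producer signs a cryptographic digest of the content at publication, and any later manipulation breaks verification. Real systems (such as the AMP work cited in note 8) use public-key signatures and standardized manifests; the shared-secret HMAC and the key here are stand-ins for illustration.

```python
import hashlib
import hmac

# Illustrative sketch only: real provenance pipelines use public-key
# signatures and standardized manifests. The HMAC key below is a
# hypothetical stand-in for a producer's signing credential.
def certify(content: bytes, signing_key: bytes) -> str:
    """Producer signs a SHA-256 digest of the content at publication time."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(signing_key, digest, hashlib.sha256).hexdigest()

def verify(content: bytes, certificate: str, signing_key: bytes) -> bool:
    """Consumer recomputes the digest; any edit to the content
    invalidates the certificate (the 'fragile' property)."""
    return hmac.compare_digest(certify(content, signing_key), certificate)

key = b"producer-credential"           # hypothetical credential
frames = b"original broadcast frames"  # stand-in for media bytes
cert = certify(frames, key)
assert verify(frames, cert, key)                     # authentic content passes
assert not verify(b"manipulated frames", cert, key)  # tampering is detected
```

Fragile watermarking follows the same logic: the mark is designed so that unauthorized edits invalidate it, flagging content whose chain of custody cannot be verified.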

2. Data Harvesting and Targeting of Individuals.

Data security is a national security problem. “Ad-tech” has become “natsec-tech.” Potential adversaries will recognize what every advertiser and social media company knows: AI is a powerful targeting tool. Just as AI-powered analytics transformed the relationship between companies and consumers, now it is transforming the relationship between governments and individuals. The broad circulation of personal data drives commercial innovation but also creates vulnerabilities.10

We fear that adversaries’ systematic efforts to harvest data on U.S. companies, individuals, and the government are about more than traditional espionage.11 Adversaries will combine widely available commercial data with data acquired illicitly—as in the 2015 Office of Personnel Management hack—to track, manipulate, and coerce individuals.12

For the government to treat the data of its citizens and businesses as a national security asset, substantial changes are required in the way we think about data security and in our policies and laws to strengthen it. We need to identify categories and combinations of personal and commercial data that are most sensitive.

The Government Should

Develop policies that treat data security as national security, including in these areas:

First, from a technical standpoint, the government must ensure that a security development lifecycle approach is in place for its own AI systems (including commercial systems it acquires), which should include a focus on potential privacy attacks.13 Red teaming must include privacy expertise. Government databases should be federated and anonymized whenever possible, and personal data retained no longer than is necessary, in order to make it more difficult for adversaries to utilize information for malicious purposes.

Second, the government should ensure that data privacy and security are priority considerations as part of larger efforts to strengthen foreign investment screening and supply chain intelligence and risk management.14

Third, national efforts to legislate and regulate data protection and privacy must integrate national security considerations, such as limiting the ability of hostile foreign actors to acquire sensitive data on Americans on the commercial market.15
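As an illustration of the first point above (anonymized databases and bounded retention), the following Python sketch pseudonymizes a direct identifier with a salted one-way hash and drops records held longer than a retention window. The field names, the salt, and the 90-day window are hypothetical choices for the example, not prescriptions from the report.

```python
import hashlib
from datetime import datetime, timedelta, timezone

# Hypothetical parameters for illustration only.
SALT = b"per-deployment-secret-salt"
RETENTION = timedelta(days=90)

def pseudonymize(record: dict) -> dict:
    """Replace the direct identifier with a salted one-way hash."""
    out = dict(record)
    digest = hashlib.sha256(SALT + record["citizen_id"].encode()).hexdigest()
    out["citizen_id"] = digest[:16]
    return out

def enforce_retention(records: list[dict], now: datetime) -> list[dict]:
    """Keep only records collected within the retention window."""
    return [r for r in records if now - r["collected"] <= RETENTION]

now = datetime(2021, 1, 10, tzinfo=timezone.utc)
records = [
    {"citizen_id": "A123", "collected": now - timedelta(days=10)},
    {"citizen_id": "B456", "collected": now - timedelta(days=200)},
]
# The stale record is dropped; the surviving identifier is unlinkable
# without the salt.
kept = [pseudonymize(r) for r in enforce_retention(records, now)]
assert len(kept) == 1 and kept[0]["citizen_id"] != "A123"
```

Salted hashing is only one piece of the approach the report describes; federation (keeping datasets separate so no single breach exposes the full picture) limits what an adversary can correlate even if one store is compromised.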

3. Accelerated Cyber Attacks.

Malware in the AI era will be able to mutate into thousands of different forms once it is lodged on a computer system. Such mutating polymorphic malware already accounts for more than 90% of malicious executable files.16 Deep RL tools can already find vulnerabilities, conceal malware, and attack selectively.17

While it is uncertain which methods will dominate, there is a clear path for U.S. adversaries to transform the effectiveness of cyber attack and espionage campaigns with an ensemble of new and old algorithmic means to automate, optimize, and inform attacks.18 This goes beyond AI-enhanced malware. Machine learning has current and potential applications across all the phases of cyber attack campaigns and will change the nature of cyber warfare and cyber crime.19

Vulnerabilities remain open in outdated infrastructure and medical devices, while new vulnerabilities are proliferating in 5G networks, billions of IoT devices, and in software supply chains.20

Though defensive applications of AI promise to improve our national cyber defenses, AI cannot defend inherently vulnerable digital infrastructure. To address the present threat, Congress must continue implementing the Cyberspace Solarium Commission’s recommendations.21 With this foundation for cyber defense in place, the U.S. can prepare for expanding threats by testing and building the instrumented infrastructure required for AI-enabled cyber defenses, establishing better incentives for security, properly organizing to meet the challenge, and keeping attackers off balance.

The Government Should

Develop and deploy AI-enabled defenses against cyber attacks.

National security agencies must acquire the sensors and instrumentation needed to train AI systems to detect and respond to threats on their networks. AI-enabled cyber defenses will also need large-scale, instrumented, and realistic testing, and they must be robust enough to withstand adversarial attacks. The defenses should be employed to expand machine-speed information sharing, behavior-based anomaly detection, and malware mitigation across government networks.

To capitalize on these capabilities, the government should accelerate the establishment of a Joint Cyber Planning and Operations Center, modeled after the National Counterterrorism Center.22 The Center would serve as a centralized cyber intelligence sharing and collaboration unit with multi-agency jurisdiction and authorities to investigate threats, proactively support defensive mitigations, and coordinate responses.

4. Adversarial AI.

AI systems represent a new target for attack. While we are on the front edge of this phenomenon, commercial firms and researchers have documented attacks that involve evasion, data poisoning, model replication, and exploiting traditional software flaws to deceive, manipulate, compromise, and render AI systems ineffective.23 This threat is related to, but distinct from, traditional cyber activities, because AI systems will be vulnerable to adversarial attacks from any domain where AI augments action—civilian or military.24

Given the reliance of AI systems on large data sets and algorithms, even small manipulations of these data sets or algorithms can lead to consequential changes in how AI systems operate. The threat is not hypothetical: adversarial attacks are already happening and affecting commercial ML systems.25
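To make the evasion category concrete, here is a toy Python sketch of an evasion attack on a linear classifier: the attacker nudges each input feature a small step against the model's weights (the gradient of the score with respect to the input), flipping the prediction while barely changing the input. Real evasion attacks apply the same gradient-direction principle to deep networks; the model and numbers here are hypothetical.

```python
# Toy evasion attack on a linear classifier (illustrative only).
def predict(w, x, b):
    """Linear score: positive means the input is flagged 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, x, b, eps=0.5):
    """Shift each feature a small step (eps) against the sign of its
    weight, i.e., against the gradient of the score, to flip the label."""
    return [xi - eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [1.0, -2.0], 0.0   # hypothetical trained weights and bias
x = [1.0, 0.2]            # original input: score 0.6, flagged malicious
x_adv = evade(w, x, b)    # perturbed input: [0.5, 0.7], score -0.9, evades
assert predict(w, x, b) > 0
assert predict(w, x_adv, b) < 0
```

Data poisoning works from the other direction: instead of perturbing inputs at inference time, the attacker corrupts a small fraction of the training data so the deployed model misbehaves on chosen inputs.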

With rare exceptions, the idea of protecting AI systems has been an afterthought in engineering and fielding AI systems, with inadequate investment in research and development.26 Only three of 28 organizations recently surveyed have “the right tools in place to secure their ML systems.”27 There has not yet been a uniform effort to integrate AI assurance across the entire U.S. national security enterprise.

The Government Should

Create a National AI Assurance Framework.

All government agencies will need to develop and apply an adversarial ML threat framework to address how key AI systems could be attacked and should be defended. An analytical framework can help to categorize threats to government AI systems and assist analysts with detecting, responding to, and remediating threats and vulnerabilities.28

Create dedicated red teams for adversarial testing.

Red teams should assume an offensive posture, trying to break systems and make them violate rules for appropriate behavior. Because of the scarcity of required expertise and experience for AI red teams, DoD and ODNI should consider establishing government-wide communities of AI red-teaming capabilities that could be applied to multiple AI developments.29

5. AI-Enabled Biotechnology.

Biology is now programmable. New technologies such as the gene-editing tool CRISPR have ushered in an era in which humans are able to edit DNA. Combined with massive computing power and AI, innovations in biotechnology may provide novel solutions for mankind’s most vexing challenges, including in health, food production, and environmental sustainability. Like other powerful technologies, however, applications of biotechnology can have a dark side.

The COVID-19 pandemic reminded the world of the dangers of a highly contagious pathogen. AI may enable a pathogen to be specifically engineered for lethality or to target a genetic profile—the ultimate range-and-reach weapon. AI applied to biology could also optimize for the physiological enhancement of human beings, including intelligence and physical attributes.

Individuals, societies, and states will have different moral and ethical views and accept different degrees of risk in the name of progress, and U.S. competitors are comparatively likely to take more risk-tolerant actions and conform less rigidly to bioethical norms and standards. China understands the tremendous upside associated with leading the bio revolution. Massive genomic data sets at places like BGI Group (formerly known as the Beijing Genomics Institute), coupled with China’s now-global genetic data collection platform and “all-of-nation” approach to AI, will make them a formidable competitor in the bio realm.30

The Government Should

Increase the profile of biosecurity and biotechnology issues within U.S. national security agencies.

Given how AI will substantially increase the rate of technical advancement in biotechnology, the government should update the National Biodefense Strategy to include a wider vision of biological threats, such as human enhancement, exploitation of genetic data for malicious ends, and ways U.S. competitors could utilize biotechnology or biodata advantages for novel purposes.

Additionally, U.S. officials should warn of the dangers associated with foreign actors obtaining personal genetic information, specifically highlighting concerns about the links between BGI and the Chinese government.31



1. A threat can be understood as an adversary capability paired with a vulnerability that can create a harmful consequence. See Terry L. Deibel, Foreign Affairs Strategy: Logic for American Statecraft, Cambridge University Press at 142-150 (2007). Threats can be graded further by their seriousness, likelihood, imminence, and tractability.

2. Stuart A. Thompson & Charlie Warzel, Twelve Million Phones, One Dataset, Zero Privacy, New York Times (Dec. 19, 2019). For example, the internet of things (IoT) and AI-powered applications can turn your new robotic vacuum into a listening device. See Sriram Sami, et al., Spying with Your Robot Vacuum Cleaner: Eavesdropping via Lidar Sensors, Proceedings of the 18th Conference on Embedded Networked Sensor Systems (Nov. 2020).

3. This is in some ways analogous to what Cold War strategists called “counter-value targeting.” See Lawrence Freedman, The Evolution of Nuclear Strategy, Palgrave Macmillan Vol. 20 at 119-122 (1989). In the realm of nuclear strategy, this was also known as counter-city or counter-economy targeting.

4. Some observers have used the concept of “sharp power” to describe such efforts to wield influence in open societies. These uses of power are sharp “in the sense that [authoritarian states aim to] pierce, penetrate, or perforate the information environments in the targeted countries.” Sharp Power: Rising Authoritarian Influence, National Endowment for Democracy at 13 (Dec. 5, 2017). See also Testimony of Dr. Eric Horvitz, Microsoft, before the U.S. Senate Committee on Commerce, Science, & Transportation, Subcommittee on Space, Science, & Competitiveness, Hearing on the Dawn of Artificial Intelligence at 13 (Nov. 30, 2016).

5. Some have characterized AI-driven information operations as “computational propaganda.” See Matt Chessen, The MADCOM Future: How Artificial Intelligence Will Enhance Computational Propaganda, Reprogram Human Culture, and Threaten Democracy… and What Can Be Done About It, Atlantic Council (Sept. 2017).

6. See Pub. L. 116-92, National Defense Authorization Act for Fiscal Year 2020, 133 Stat. 1198, 2129 (2019).

7. These include the Media Forensics (MediFor) and Semantic Forensics (SemaFor) programs. See Dr. Matt Turek, Media Forensics, DARPA (last accessed Jan. 10, 2021); Dr. Matt Turek, Semantic Forensics, DARPA (last accessed Jan. 10, 2021).

8. See, e.g., Paul England, et al., AMP: Authentication of Media via Provenance, arXiv (June 20, 2020).

9. See, e.g., Paul England, et al., AMP: Authentication of Media via Provenance, arXiv.

10. Robert Williams has described how policy makers face an “innovation-security conundrum,” one aspect of which is “the worry that data privacy and national security are increasingly interconnected. Data (and data networks) can be exploited in ways that threaten security, but they also form the lifeblood of technological innovation on which both economic growth and national security depend.” Robert D. Williams, Crafting a Multilateral Technology and Cybersecurity Policy, Brookings at 1 (Nov. 2020).

11. Ellen Nakashima, With a Series of Major Hacks, China Builds a Database on Americans, Washington Post (June 5, 2015).

12. Another example of an adversary acquiring significant data on U.S. individuals is the hack of the credit reporting agency Equifax. Press Release, Department of Justice, Chinese Military Personnel Charged with Computer Fraud, Economic Espionage and Wire Fraud for Hacking into Credit Reporting Agency Equifax (Feb. 10, 2020); Aruna Viswanatha, et al., Four Members of China’s Military Indicted Over Massive Equifax Breach, Wall Street Journal (Feb. 11, 2020).

13. On privacy attacks, see Maria Rigaki & Sebastian Garcia, A Survey of Privacy Attacks in Machine Learning, arXiv (July 15, 2020).

14. The Committee on Foreign Investment in the United States (CFIUS) has the authority to review transactions that include “sensitive personal data of United States citizens that may be exploited in a manner that threatens national security.” For background, see Laura Jehl, Spotlight on Sensitive Personal Data As Foreign Investment Rules Take Force, The National Law Review (Feb. 18, 2020). The National Counterintelligence and Security Center (NCSC) includes “sensitive government data, and personally-identifiable information” in its conception of key supply chain risks. See Supply Chain Risk Management: Reducing Threats to Key U.S. Supply Chains, NCSC at 3 (2020).

15. See, e.g., Graham Webster, App Bans Won’t Make U.S. Security Risks Disappear, MIT Technology Review (Sept. 21, 2020).

16. Nicholas Duran, et al., 2018 Webroot Threat Report, Webroot at 6 (2018).

17. Gary J. Saavedra, et al., A Review of Machine Learning Applications in Fuzzing, arXiv (Oct. 9, 2019); Isao Takaesu, Machine Learning Security: DeepExploit, GitHub (Aug. 29, 2019); Marc Ph. Stoecklin, et al., DeepLocker: How AI Can Power a Stealthy New Breed of Malware, Security Intelligence (Aug. 8, 2018).

18. Implications of Artificial Intelligence for Cybersecurity: Proceedings of a Workshop, National Academies of Sciences, Engineering, and Medicine (2019); Nektaria Kaloudi & Jingyue Li, The AI-Based Cyber Threat Landscape, ACM Computing Surveys at 1-34 (Feb. 2020); Ben Buchanan, et al., Automating Cyber Attacks, Center for Security and Emerging Technology (Nov. 2020); Dakota Cary & Daniel Cebul, Destructive Cyber Operations and Machine Learning, Center for Security and Emerging Technology at 5-23 (Nov. 2020).

19. Implications of Artificial Intelligence for Cybersecurity: Proceedings of a Workshop, National Academies of Sciences, Engineering, and Medicine (2019).

20. The recent SolarWinds attack demonstrates deep vulnerabilities in our software supply chains. See Joint Statement by the Federal Bureau of Investigation (FBI), the Cybersecurity and Infrastructure Security Agency (CISA), and the Office of the Director of National Intelligence (ODNI), Office of the Director of National Intelligence (Dec. 16, 2020).

21. Cyberspace Solarium Commission Report, U.S. Cyberspace Solarium Commission (March 2020).

22. See recommendation 5.4 in Cyberspace Solarium Commission Report, U.S. Cyberspace Solarium Commission at 87 (March 2020).

23. Adversarial AI Threat Matrix: Case Studies, GitHub (last accessed Jan. 10, 2021). For more on applications of adversarial AI, see Naveed Akhtar & Ajmal Mian, Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey, IEEE (March 28, 2018).

24. Adversarial AI is about what can be done to AI systems. The science of protecting and defending AI applications against attacks is called “AI Assurance.” The science of attacking each technological component of AI is called “Counter-AI.”

25. Ram Shankar Siva Kumar & Ann Johnson, Cyberattacks Against Machine Learning Systems Are More Common Than You Think, Microsoft Security (Oct. 22, 2020).

26. It has been estimated that less than 1% of AI R&D funding is directed toward the security of AI systems. See Nathan Strout, The Three Major Security Threats to AI, Center for Security and Emerging Technology (Sept. 10, 2019).

27. Ram Shankar Siva Kumar, et al., Adversarial Machine Learning—Industry Perspectives, arXiv at 2 (May 21, 2020).

28. There are various ongoing public and private efforts including, for instance, the MITRE-Microsoft adversarial ML framework. See Ram Shankar Siva Kumar & Ann Johnson, Cyberattacks Against Machine Learning Systems Are More Common Than You Think, Microsoft Security (Oct. 22, 2020); Adversarial AI Threat Matrix: Case Studies, MITRE (last accessed Jan. 10, 2021).

29. For a similar recommendation, see Michèle Flournoy, et al., Building Trust Through Testing, WestExec Advisors at 27 (Oct. 2020). (Flournoy, et al., argue for “a national AI and ML red team as a central hub to test against adversarial attacks, pulling together DoD operators and analysts, AI researchers, T&E [Central Intelligence Agency (CIA), Defense Intelligence Agency (DIA), National Security Agency (NSA)], and other IC components, as appropriate. This would be an independent red-teaming organization that would have both the technical and intelligence expertise to mimic realistic adversary attacks in a simulated operational environment.”)

30. BGI built and operates China National GeneBank, the Chinese government’s national genetic database. It also is a major global supplier of COVID-19 testing, which potentially provides access to large international genetic data sets; by June 30, 2020, it had supplied more than 35 million test kits to 180 countries, including the United States, and built 58 testing labs in 18 countries. See Kirsty Needham, Special Report: COVID Opens New Doors for China’s Gene Giant, Reuters (Aug. 5, 2020).

31. See Chapter 16 of this report for additional recommendations pertaining to the nexus of AI and biotechnology.