PART I: DEFENDING AMERICA IN THE AI ERA
Chapter 4: Autonomous Weapon Systems and Risks Associated with AI-Enabled Warfare
The following is a summary of Part I, Chapter 4 of the National Security Commission on Artificial Intelligence's final report.
Military powers large and small are pursuing artificial intelligence (AI)-enabled and autonomous weapon systems. Such systems have the potential to help commanders make faster, better, and more relevant decisions, and to achieve levels of performance, speed, and discrimination that exceed human capabilities.
The increasing use of AI technologies in weapon systems has generated important questions regarding whether such systems are lawful, safe, and ethical.
Since 2014, the United Nations Convention on Certain Conventional Weapons (CCW) has held meetings among states parties to discuss the technological, military, legal, and ethical dimensions of “emerging technologies in the area of lethal autonomous weapon systems (LAWS).”1 Specifically, states parties are examining whether autonomous technologies can comply with International Humanitarian Law (IHL) and whether additional measures are necessary to ensure that humans maintain an appropriate degree of control over the use of force.
The Commission has consulted with civil society, academic organizations, and government agencies in studying the legal, ethical, and strategic questions surrounding AI-enabled and autonomous weapon systems, including their potential military benefits and risks, the ethical issues they raise, international efforts to regulate them, and their compliance with IHL. The Commission offers the following judgments to reflect its conclusions from these discussions.
NSCAI Judgments Regarding AI-Enabled and Autonomous Weapon Systems
While the Commission believes that properly designed, tested, and utilized AI-enabled and autonomous weapon systems will bring substantial military and even humanitarian benefit, the unchecked global use of such systems risks unintended conflict escalation and crisis instability.
Countries must therefore act to reduce the risks associated with AI-enabled and autonomous weapon systems and to encourage safety and compliance with IHL in discussions of their development, deployment, and use. The United States must lead these efforts; it is uniquely situated to do so given its technical expertise, military prowess, and clear, transparent policies and ethical principles governing the deployment and use of such systems.
The Commission presents the following five recommendations regarding actions the United States should take to mitigate risks associated with AI-enabled and autonomous weapon systems.
The Government Should
1. Clearly and publicly affirm existing U.S. policy that only human beings can authorize employment of nuclear weapons, and seek similar commitments from Russia and China.
The United States should make a clear, public statement that decisions to authorize nuclear weapons employment must be made only by humans, not by an AI-enabled or autonomous system, and should include such an affirmation in the DoD’s next Nuclear Posture Review.2
This would cement and highlight existing U.S. policy, which states that “[t]he decision to employ nuclear weapons requires the explicit authorization of the President of the United States.”3 It would also demonstrate a practical U.S. commitment to employing AI and autonomous functions in a responsible manner, limiting irresponsible capabilities, and preventing AI systems from escalating conflicts in dangerous ways. It could also have a stabilizing effect, as it would reduce competitors’ fears of an AI-enabled, bolt-from-the-blue strike from the United States and could incentivize other countries to make equivalent pledges.
The United States should also actively press Russia and China, as well as other states that possess nuclear weapons, to issue similar statements. Although joint political commitments that only humans will authorize the employment of nuclear weapons would not be verifiable, they could still be stabilizing, helping to defuse a classic security dilemma: as long as countries are confident that others are not building risky command and control structures that could inadvertently trigger massive nuclear escalation, they have less incentive to develop such systems themselves.4 While this norm is widely accepted in the United States, it is unclear whether Russia and China share the same strategic concerns.
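The stabilizing logic of mutual commitments can be made concrete with a toy coordination game. The sketch below is purely illustrative and not drawn from the Commission's report: the payoff numbers are notional assumptions chosen so that each state prefers to keep a human in the launch loop only if it is confident the other will do the same.

```python
# Toy two-state game (illustrative only; payoffs are notional assumptions).
# Each state either keeps a human in the nuclear launch loop ("restrain")
# or delegates launch authority to an autonomous system ("delegate").
from itertools import product

# payoffs[(row_move, col_move)] = (row_payoff, col_payoff)
payoffs = {
    ("restrain", "restrain"): (3, 3),  # stable deterrence
    ("restrain", "delegate"): (0, 2),  # exposed to a fast automated strike
    ("delegate", "restrain"): (2, 0),
    ("delegate", "delegate"): (1, 1),  # mutual accident and escalation risk
}

def best_response(opponent_move, as_row):
    """Move that maximizes a player's payoff against a fixed opponent move."""
    def payoff(move):
        key = (move, opponent_move) if as_row else (opponent_move, move)
        return payoffs[key][0 if as_row else 1]
    return max(("restrain", "delegate"), key=payoff)

# A pair of moves is a Nash equilibrium when each is a best response to the other.
for row, col in product(("restrain", "delegate"), repeat=2):
    if best_response(col, True) == row and best_response(row, False) == col:
        print("equilibrium:", row, col)
```

With these payoffs the game has two equilibria, mutual restraint and mutual delegation. That is why even an unverifiable public commitment can matter: by anchoring each side's expectations about the other, it helps coordinate on the stable, human-in-the-loop outcome.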
2. Discuss AI’s impact on crisis stability in the existing U.S.-Russia Strategic Security Dialogue (SSD) and create an equivalent meaningful dialogue with China.
The SSD is an interagency bilateral dialogue focused on reducing misunderstandings and misperceptions on key strategic issues and threats, as well as reducing the likelihood of inadvertent escalation. Although the dialogue has traditionally focused on nuclear arms control and doctrine, it has recently also been used to discuss emerging technologies and space security.5
The United States has no equivalent dialogue with China, which has resisted U.S. attempts to establish one for nearly a decade. However, within the last year there has been increasing evidence that China is interested in formal talks with the United States concerning AI-enabled military systems.6 This interest should be cultivated and leveraged to establish a U.S.-China SSD that includes the relevant military, diplomatic, and security officials from both sides.
The United States should use this channel to highlight how deploying unsafe systems could risk inadvertent conflict escalation, to emphasize the need for rigorous test and evaluation, verification and validation (TEVV), and to discuss where each side sees risks of a conventional conflict rapidly escalating, in order to better anticipate responses in a crisis. These dialogues could also plant the seeds for a future, standing dialogue focused exclusively on establishing practical and concrete confidence-building measures surrounding AI-enabled and autonomous weapon systems.
3. Work with allies to develop international standards of practice for the development, testing, and use of AI-enabled and autonomous weapon systems.
This could build on existing work, including the 11 Guiding Principles agreed to by the LAWS Group of Governmental Experts (GGE) in 2019, DoDD 3000.09, the DoD Ethical Principles for AI, and the NSCAI Key Considerations for Responsible Development and Fielding of AI.7
As part of this effort, the DoD Law of War Working Group should meet regularly to review any future technical developments that pertain to autonomous weapon systems and IHL, and the tri-chaired Steering Committee on Emerging Technology (separately recommended by the Commission in Chapter 3 of this report) should advise on how such future technical developments impact policy and national defense.
The outputs of both groups should inform future DoD engagements with both allies and competitors on AI-enabled and autonomous weapon systems. Obtaining allied consensus regarding standards for the development, testing, and use of such systems will set important norms regarding these systems, help to ensure they are developed and used safely, and further highlight the commitment of the United States and its allies to ethical and responsible uses of AI.
The United States should also use these consultations to highlight the ways in which AI will become a crucial part of future military operations and to develop common frameworks guiding the appropriate and responsible use of AI-enabled and autonomous weapon systems on the battlefield. These efforts should incentivize allies to invest in the digital modernization of their own forces while also highlighting the risks to military interoperability should any ally agree to join a treaty prohibiting LAWS.
4. Pursue technical means to verify compliance with future arms control agreements pertaining to AI-enabled weapon systems.
Although arms control of AI-enabled weapon systems is not currently technically verifiable, effective verification will likely be necessary to achieve future legally binding restrictions on AI capabilities. DoD and the Department of Energy (DOE) should spearhead efforts to design and implement technologies that could give other countries confidence that an AI-enabled and autonomous weapon system is working as intended, without revealing sensitive operational details.
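The Commission does not specify particular techniques, but one standard building block for verifying a declaration without revealing its contents is a cryptographic commitment. The minimal sketch below, with a hypothetical software image standing in for a real system, shows how a state could publish a salted hash of a weapon system's software at declaration time and later allow inspectors to confirm that the fielded system still matches it.

```python
# Minimal hash-commitment sketch (illustrative assumption, not a scheme
# proposed in the report). The published digest reveals nothing about the
# software, yet any later modification is detectable at inspection time.
import hashlib
import secrets

def commit(artifact: bytes) -> tuple[str, bytes]:
    """Publish SHA-256(salt || artifact); keep the salt until inspection."""
    salt = secrets.token_bytes(32)  # blinds the digest against guessing
    return hashlib.sha256(salt + artifact).hexdigest(), salt

def verify(commitment: str, salt: bytes, artifact: bytes) -> bool:
    """Inspector recomputes the digest over the revealed salt and artifact."""
    return hashlib.sha256(salt + artifact).hexdigest() == commitment

# Hypothetical declared software image for a fielded system.
declared = b"control-software-v1.0"
digest, salt = commit(declared)          # published when the system is declared
print(verify(digest, salt, declared))    # True: fielded system matches
print(verify(digest, salt, b"modified")) # False: covert change detected
```

A real verification regime would require far more (trusted inspection of the running system, protections against hardware substitution, and so on), but the example conveys the core idea of providing confidence without disclosure.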
5. Fund research on technical means to prevent proliferation of AI-enabled and autonomous weapon systems.
Controlling the proliferation of AI-enabled and autonomous weapon systems poses significant challenges given the open-source, dual-use, and inherently transmissible nature of AI algorithms.8 The proliferation of makeshift autonomous weapon systems built primarily from commercial components will be particularly difficult to control through regulation and will require robust intelligence sharing and domestic law enforcement efforts to prevent their use by terrorists and other non-state actors.
For more sophisticated autonomous weapon systems, the United States should redouble efforts to design and incorporate proliferation-resistant features, such as standardized safeguards that prevent unauthorized users from operating such weapons or from reprogramming a system’s functionality by changing key system parameters.
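One way to read such standardized safeguards is cryptographic use control, similar in spirit to the permissive action links used for nuclear weapons: the system refuses to arm unless a command carries a valid authentication tag produced with a key held by authorized users. The sketch below is an assumption about what such a feature might look like, not a design from the report; the key and commands are hypothetical.

```python
# Illustrative use-control check (hypothetical design, not from the report).
# A captured or proliferated system is inert without the provisioned key.
import hashlib
import hmac

AUTHORIZED_KEY = b"owning-state-secret-key"  # hypothetical; provisioned at manufacture

def sign_command(key: bytes, command: bytes) -> bytes:
    """An authorized issuer tags the command with an HMAC."""
    return hmac.new(key, command, hashlib.sha256).digest()

def arm_if_authorized(command: bytes, tag: bytes) -> bool:
    """Embedded check: constant-time comparison against the provisioned key."""
    expected = hmac.new(AUTHORIZED_KEY, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

cmd = b"ARM"
print(arm_if_authorized(cmd, sign_command(AUTHORIZED_KEY, cmd)))   # True
print(arm_if_authorized(cmd, sign_command(b"stolen-guess", cmd)))  # False
```

Analogous parameter signing could address the reprogramming concern: a system that accepts only signed configuration changes cannot have its functionality altered through key parameter changes without the owner's key.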
Footnotes
1 Background on Lethal Autonomous Weapon Systems in the CCW, United Nations (last accessed Jan. 11, 2021), https://www.unog.ch/80256EE600585943/(httpPages)/8FA3C2562A60FF81C1257CE600393DF6?OpenDocument.
2 The Commission recognizes that AI should assist in some aspects of the nuclear command and control apparatus, such as early warning, early launch detection, and multi-sensor fusion to validate single sensor detections and potentially eliminate false detections.
3 Nuclear Matters Handbook 2020, Office of the Deputy Assistant Secretary of Defense for Nuclear Matters at 18 (2020), https://fas.org/man/eprint/nmhb2020.pdf.
4 There could be other reasons countries might delegate nuclear weapons launch authority to autonomous systems, particularly if leadership trusts machines to execute launch orders more than it trusts humans. A political agreement is unlikely to address these concerns, although offering one would highlight how other nations are engaging in irresponsible and dangerous behavior.
5 Press Release, U.S. Department of State, Deputy Secretary Sullivan’s Participation in Strategic Security Dialogue with Russian Deputy Foreign Minister Sergey Ryabkov (July 17, 2019), https://2017-2021.state.gov/deputy-secretary-sullivans-participation-in-strategic-security-dialogue-with-russian-deputy-foreign-minister-sergey-ryabkov/index.html; Press Release, U.S. Department of State, The United States and Russia Hold Space Security Exchange (July 28, 2020), https://2017-2021.state.gov/the-united-states-and-russia-hold-space-security-exchange/index.html.
6 Over the last year, Chinese experts have participated actively in several Track II dialogues with U.S. experts on the safety of military AI systems, potentially signaling a desire for formal government-to-government communication on these issues.
7 See the Appendix of this report containing the abridged version of NSCAI’s Key Considerations for Responsible Development & Fielding of AI. For additional details on the Commission’s recommendation for future action on international collaboration and cooperation, see the section on “System Performance” in Key Considerations for Responsible Development & Fielding of Artificial Intelligence: Extended Version, NSCAI (2021) (on file with the Commission).
8 See Chapter 14 of this report for additional information on the difficulty of using export controls to prevent the transfer of AI algorithms.