PART I: DEFENDING AMERICA IN THE AI ERA

Chapter 8: Upholding Democratic Values: Privacy, Civil Liberties, and Civil Rights in Uses of AI for National Security


Following is a summary of Part I, Chapter 8 of the National Security Commission on Artificial Intelligence's final report.

The basic purpose of the American government is to protect the security and liberty of the American people.

Americans have a long tradition of debating how best to achieve these twin goals when tensions arise between them.

The two decades following 9/11 saw intensive efforts to calibrate the government’s powers to stop another terrorist attack with its obligations to respect individual rights and liberties.


Artificial intelligence (AI) is ushering in the next era of this debate because new technologies offer government agencies more powerful ways to collect and process information, track individuals’ behavior and movements, and act on the basis of computer-generated analyses.

“For the United States, as for other democratic countries, use of AI by officials must comport with principles of limited government and individual liberty.”

These principles do not uphold themselves. In a democratic society, any empowerment of the state must be accompanied by wise restraints to make that power legitimate in the eyes of its citizens. As this report argues, the promise of emerging AI technologies to enhance national security is real and significant.

The ability of U.S. intelligence, homeland security, and law enforcement agencies to develop and use these technologies for national security purposes must be preserved. To do so, however, the government must ensure that their use is effective, legitimate, and lawful. Public trust will hinge on justified assurance of compliance with privacy, civil liberties, and civil rights.

Democratic AI Governance and Novel Challenges for Privacy, Civil Liberties, and Civil Rights


The democratic model must prove its resilience in the face of emerging technological changes that could challenge it. Fundamentally, we are confident that the American system—and the rules, norms, and institutions that uphold it—can adapt to uphold the dual imperatives of security and liberty in the AI era.

For the Intelligence Community (IC), core features of that system include laws, rules, and procedures to minimize the collection, retention, and dissemination of U.S. persons’ data, as well as oversight from all three branches of government.1 Homeland security and law enforcement agencies likewise operate within frameworks of policy, oversight, and judicial review that guide border protection and criminal investigations. Ultimately, the actions of all federal agencies are subject to the Constitution’s guarantees.

Within this context, the advent of modern AI—and the novel capabilities it can bring to intelligence, homeland security, and law enforcement missions—raises difficult questions and challenges with respect to the privacy, civil liberties, and civil rights of U.S. persons.

For example:

  • AI-powered analytics can help officials process and make sense of huge amounts of information, which can be aggregated to form a revealing “mosaic” picture of a person’s activities, whereabouts, and patterns of behavior.2 This could be highly useful to identify threats, but it has also raised questions about the proper scope and authorization for border or law enforcement searches.3
  • Much of this personal information is held by private companies. This fact of modern digital life has raised constitutional questions about whether and when individuals should have a “reasonable expectation of privacy” in the information they provide to third parties like technology firms—and questions about the circumstances in which that information may be accessed and utilized by intelligence, homeland security, or law enforcement agencies for a legitimate national security purpose.4
  • AI can help automate aspects of data collection and analysis. Such methods can augment the ability of analysts or investigators to sift through and triage masses of information to establish patterns or pinpoint threats. But they also raise questions about the proper roles of machine and human analysis in these processes, including for making predictive judgments. To the extent that an AI system’s functions are opaque, it may be difficult to trace and justify the computational process that led the system to make a recommendation.
  • AI models can evolve based on changing data and interaction with other models, leading to unexpected outcomes. As a result, AI systems require more continuous testing and evaluation than prior generations of technology.
  • Unintended bias can be introduced during many stages of the machine learning (ML) process, which can lead to disparate impacts in American society, a problem that has been documented in law enforcement contexts.5
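One common diagnostic for the disparate impacts described above is comparing a model's error rates across demographic groups. The sketch below is a minimal illustration with entirely synthetic records; the group names, labels, and predictions are invented for the example and do not reflect any real system or dataset.

```python
# Hypothetical illustration: measuring disparate false positive rates
# across groups. All records below are synthetic.

def false_positive_rate(labels, preds):
    """FPR = false positives / actual negatives."""
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    negatives = sum(1 for y in labels if y == 0)
    return fp / negatives if negatives else 0.0

# Synthetic records: (group, true_label, model_prediction)
records = [
    ("group_a", 0, 0), ("group_a", 0, 0), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 1, 1),
]

# Bucket labels and predictions by group, then compare error rates.
by_group = {}
for group, y, p in records:
    labels, preds = by_group.setdefault(group, ([], []))
    labels.append(y)
    preds.append(p)

for group, (labels, preds) in sorted(by_group.items()):
    print(group, round(false_positive_rate(labels, preds), 2))
```

A gap between the per-group rates (here 0.33 versus 0.67 on the toy data) is the kind of signal that auditing processes would investigate before and after deployment.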

Tenets for Managing AI Change


This Commission will not endeavor to draw all of the lines for what may be permissible or wise in particular circumstances. However, important principles to follow in different national security contexts include the following:

Foreign Intelligence Collection and Analysis: The Office of the Director of National Intelligence (ODNI) AI Ethics Guidance to the Intelligence Community is an encouraging step, as it places strong emphasis on utilizing AI for foreign intelligence missions in ways that uphold the privacy and civil liberties of Americans.6 As these guidelines are implemented, it will be important to pay close attention to ensuring that data minimization, retention, and querying procedures are adequate and rigorously enforced.

Border Security: AI surveillance and analysis capabilities can make the government’s operations more efficient and effective at the borders and ports of entry. But to sustain public support for these uses, the Department of Homeland Security (DHS) must take care to ensure that automated screening processes lead agents only to the information they need and are authorized to access, and do not impermissibly single out individuals based on characteristics such as race or religion.

Domestic Security and Public Safety: Rapid advances in AI-enabled technologies for law enforcement purposes, including biometric surveillance techniques such as facial recognition, may be outpacing rules for their proper use. The government must exercise special caution in managing risks to bedrock constitutional principles including equal protection, due process, freedom from unreasonable searches and seizures, and freedoms of speech and assembly.7

In carrying out these missions, it will be important to maintain clear distinctions between appropriate authorities in these different national security contexts. It is also important to gain greater public confidence by enhancing transparency, improving the performance and reliability of AI technologies, ensuring due process, and strengthening oversight.


With These Tenets in Mind, the Government Should Take the Following Steps

Invest in and adopt AI tools to enhance oversight and auditing in support of privacy and civil liberties.

Agencies should assess near-term opportunities and research gaps in applications of AI to address privacy and civil liberties challenges, such as ML techniques for classification, recommendation, anomaly detection, and other applications.8
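One of the techniques named above, anomaly detection, can itself support oversight, for example by flagging unusual query activity in audit logs for human review. The sketch below is a deliberately simple statistical version with invented data and an invented threshold; a deployed system would use vetted methods on real audit records.

```python
# Hypothetical sketch: flagging anomalous daily query volumes in an
# audit log using a z-score test. Data and threshold are illustrative.
import statistics

def flag_anomalies(counts, z_threshold=2.5):
    """Return indices of daily counts that deviate strongly from the mean."""
    mean = statistics.mean(counts)
    stdev = statistics.stdev(counts)
    if stdev == 0:
        return []
    return [i for i, c in enumerate(counts) if abs(c - mean) / stdev > z_threshold]

# Synthetic daily query counts for one analyst; day index 6 spikes.
daily_queries = [12, 15, 11, 14, 13, 12, 90, 14, 13, 12]
print(flag_anomalies(daily_queries))  # the spike at index 6 is flagged
```

The point of such tooling is not to replace oversight officials but to direct their limited attention to the records most worth examining.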

Improve public transparency about how the government uses AI.

Existing transparency mechanisms could be utilized more effectively, and, in some cases, revised. New agency reporting requirements would also be beneficial.

Develop and test systems with the goal of advancing privacy preservation and fairness.

Although an ML system may meet requirements at a static point in time, ongoing compliance is not a given once the system is operational.
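This point can be made concrete with a recurring post-deployment check: a system that met an accuracy requirement at approval time may quietly degrade as input data shifts. The sketch below is illustrative only; the metric, the accuracy floor, and the data are all invented for the example.

```python
# Hypothetical sketch: recurring check that an operational model still
# meets the accuracy floor it met at approval. All values are synthetic.

def accuracy(labels, preds):
    """Fraction of predictions matching the true labels."""
    return sum(y == p for y, p in zip(labels, preds)) / len(labels)

def compliance_check(labels, preds, approval_floor=0.90):
    """Return (ok, measured_accuracy); a real system would log and alert."""
    acc = accuracy(labels, preds)
    return acc >= approval_floor, acc

# Week 1: performance matches approval-time testing.
ok, acc = compliance_check([1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                           [1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
print(ok, acc)   # True 1.0

# Week N: input data has shifted and accuracy has quietly degraded.
ok, acc = compliance_check([1, 0, 1, 1, 0, 1, 0, 1, 1, 0],
                           [1, 1, 1, 0, 0, 1, 1, 1, 0, 0])
print(ok, acc)   # False 0.6
```

Scheduling such checks continuously, rather than only at acquisition, is what distinguishes AI oversight from oversight of static software.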

Strengthen the ability of those impacted by government actions involving AI to seek redress and have due process.

Agencies have to accept non-zero false positive and false negative rates in order to deploy any AI system.
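The trade-off behind that statement can be shown numerically: as the decision threshold on a risk score rises, false positives fall but false negatives rise, so neither error rate can be driven to zero. The scores and labels below are synthetic and model no real screening system.

```python
# Hypothetical sketch of the false-positive / false-negative trade-off.
# Scores and labels are synthetic; no real screening system is modeled.

def error_rates(scores, labels, threshold):
    """Return (false_positive_rate, false_negative_rate) at a threshold."""
    preds = [s >= threshold for s in scores]
    fp = sum(1 for p, y in zip(preds, labels) if p and not y)
    fn = sum(1 for p, y in zip(preds, labels) if not p and y)
    return fp / labels.count(0), fn / labels.count(1)

# Synthetic risk scores with overlapping benign (0) and threat (1) cases.
scores = [0.1, 0.2, 0.35, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9, 0.95]
labels = [0,   0,   0,    1,   0,    1,   1,   0,   1,   1]

for threshold in (0.3, 0.5, 0.7):
    fpr, fnr = error_rates(scores, labels, threshold)
    print(threshold, round(fpr, 2), round(fnr, 2))
```

On this toy data the threshold sweep yields FPR/FNR pairs of 0.6/0.0, 0.4/0.2, and 0.2/0.4: every gain on one error rate is paid for on the other, which is why redress and due process mechanisms matter for the people on the wrong side of either error.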

Strengthen oversight mechanisms to address current and evolving concerns.

The advancement of AI requires a forward-looking approach to oversight that anticipates the continued evolution and adoption of new technologies and better positions the government to responsibly manage their employment well into the future.

Footnotes

1 For a compilation of Attorney General guidelines from the IC components, see Status of Attorney General Approved U.S. Person Procedure under E.O. 12333, ODNI (July 14, 2016), https://www.dni.gov/files/documents/Table_of_EO12333_AG_Guidelines%20for%20PCLOB_%20Updated%20July_2016.pdf. Elements of the IC oversight system include counsels and privacy officials within intelligence agencies, the Department of Justice, independent bodies such as the Privacy and Civil Liberties Oversight Board, Federal courts including the Foreign Intelligence Surveillance Court, and the House and Senate intelligence committees.

2 On the mosaic concept, see, e.g., Steven M. Bellovin, et al., When Enough Is Enough: Location Tracking, Mosaic Theory, and Machine Learning, NYU Journal of Law & Liberty, Vol. 8 (Sept. 3, 2013), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2320019.

3 For an informative discussion of evolving debates over Fourth Amendment regulation of government searches in the context of AI, see James E. Baker, The Centaur’s Dilemma: National Security Law for the Coming AI Revolution, Ch. 6 (Brookings, 2020).

4 Congress and the Judiciary will need to assess the adequacy of current legal constraints over the federal government’s obtainment and use of third-party data, including data acquired from data brokers. Either through evolving case law or legislation, agencies would benefit from clarity surrounding the Fourth Amendment’s application with respect to third-party data. On third-party doctrine, see Richard M. Thompson II, The Fourth Amendment Third-Party Doctrine, Congressional Research Service (June 5, 2014), https://fas.org/sgp/crs/misc/R43586.pdf.

5 Concerns about algorithmic error rates and disparate performance across age, skin tones, and genders are especially pronounced for facial recognition. See Patrick Grother, et al., Face Recognition Vendor Test, Part 3: Demographic Effects, NIST (Dec. 2019), https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf. The Gender Shades Project found that various facial recognition systems were very accurate for white men, but they were significantly less accurate for women and people of color (and worst for women of color). See Gender Shades (last accessed Jan. 11, 2021), http://gendershades.org/.

6 See Principles of Artificial Intelligence Ethics for the Intelligence Community, ODNI (last accessed Jan. 11, 2021), https://www.dni.gov/index.php/features/2763-principles-of-artificial-intelligence-ethics-for-the-intelligence-community.

7 Some observers have found a “chilling effect” that impacts the degree to which individuals exercise freedoms of expression, association, and assembly. See, e.g., Rachel Levinson-Waldman, Hiding in Plain Sight: A Fourth Amendment Framework for Analyzing Government Surveillance in Public, Emory Law Journal Vol. 66 (2017), https://scholarlycommons.law.emory.edu/elj/vol66/iss3/4/.

8 Xuning (Mike) Tang & Yihua Astle, The Impact of Deep Learning on Anomaly Detection, Law.com (Aug. 10, 2020), https://www.law.com/legaltechnews/2020/08/10/the-impact-of-deep-learning-on-anomaly-detection/.