NATIONAL SECURITY COMMISSION ON ARTIFICIAL INTELLIGENCE FINAL REPORT
Artificial Intelligence in Context
Artificial Intelligence (AI) is not a single piece of hardware or software, but rather a constellation of technologies. To address such a broad topic, the Commission’s legislative mandate provided guidance on how to scope its work to include technologies that solve tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; and technologies that may learn and act autonomously, whether in the form of software agents or embodied robots.1
Successful development and fielding of AI technologies depends on a number of interrelated elements that can be envisioned as a stack.2
Talent
We regard talent as the most essential requirement because it drives the creation and management of all the other elements.
Data
Data is critical for most AI systems.3 Labeled and curated data enables much of current machine learning (ML) used to create new applications and improve the performance of existing AI applications.
Hardware
The underlying hardware provides the computing power to analyze ever-growing data pools and run applications. This hardware layer includes cloud-based compute and storage, supported by a networking and communications backbone, instrumental for connecting smart sensors and devices at the network edge.
Algorithms
Algorithms are the mathematical operations that tell the system how to navigate the data to provide answers in response to specific questions.
Applications
An application makes the answers useful for specific tasks.
Integration
Integration of these elements is critical to fielding a successful end-to-end AI system. This requires significant engineering talent and investment to integrate existing data flows, decision pipelines, legacy equipment, testing designs, etc. This task of integration can be daunting and historically has been underestimated.4
AI technologies and applications such as pattern recognition, ML, computer vision, natural language understanding, and speech recognition have evolved for many decades. In the early years of AI, the period the Defense Advanced Research Projects Agency (DARPA) describes as the “first wave,” researchers explored many approaches, including symbolic logic, expert systems, and planning. Some of the most effective results were based on “handcrafted knowledge” defined by humans and then used by the machine for reasoning and interacting.5
Within the past 10 years, we have witnessed a “second wave” of AI, propelled by large-scale statistical ML that enables engineers to create models that can be trained to specific problem domains if given exemplar data or simulated interactions. Learning from data, these systems are designed to solve specific tasks and achieve particular goals with competencies that, in some respects, parallel the cognitive processes of humans: perceiving, reasoning, learning, communicating, deciding, and acting. Today most fielded large-scale AI systems employ elements of both first- and second-wave AI approaches.
Age of Deployed AI.
Today, we have reached an inflection point. Global digital transformation has led to an overwhelming supply of data. Statistical ML algorithms, particularly deep neural networks, have matured as problem solvers—albeit with limitations.6 The powerful and networked computing that fuels ML capabilities has become widely available. The convergence of these factors now places this capable technology in the hands of the technical and non-technical alike. The fundamental “question is no longer how this technology works, but what it can do for you.”7
While the current technology still has significant limitations, it is well-suited for certain use cases. We have entered the age of deployed AI. AI is now ubiquitous, embedded in devices we use and interact with on a daily basis—for example, in our smartphones, wireless routers, and cars. We routinely rely on AI-enriched applications, whether searching for a new restaurant, navigating traffic, selecting a movie, or getting customer service over the phone or online.
Forecasting the future of AI is difficult. Five years ago, few would have predicted the recent breakthroughs in natural language understanding that have resulted in systems that can generate full text almost indistinguishable from human prose.8 With a remarkable increase in investment in the global AI industry over the past five years9 and an unprecedented amount of general R&D dollars being invested worldwide,10 there is no AI slowdown in sight—only new horizons for deployed AI.
Frontiers of AI Technology.
The next decade of AI research will likely be defined by efforts to incorporate existing knowledge, push forward novel ways of learning, and make systems more robust, generalizable, and trustworthy.11 Research on advancing human-machine teaming will be at the forefront, as will improvements in hybrid AI techniques, enhanced training methods, and explainable AI.
Looking to an AI-Enabled Future.
Following the trajectories of the research threads outlined above, we can sketch a future in which AI empowers humanity in unprecedented ways, unlocking capabilities across science, education, space technology, healthcare, infrastructure, manufacturing, agriculture, entertainment, and countless other sectors. For example, advances in natural language understanding could enable real-time, ubiquitous translation for low-resource languages for which written and spoken training data is limited.13 This would transform the way we communicate across geographic and cultural barriers, enabling business, diplomacy, and free exchange of ideas.
Breakthroughs in integration of multi-modal, multi-source data could enable real-time AI-driven modeling and simulation for federal responses to crises including pandemics and natural disasters.14 Drone feeds augmented with maps, building layouts, and other visual data layers could empower first responders with lifesaving emergency-scene understanding,15 and AI could help build response plans, expedite command and control, and optimize logistics for a range of disaster-response scenarios.16
Footnotes
1 The John S. McCain National Defense Authorization Act for Fiscal Year 2019 includes the following definition to guide the Commission’s work: (1) any artificial system that performs tasks under varying and unpredictable circumstances without significant human oversight, or that can learn from experience and improve performance when exposed to data sets; (2) an artificial system developed in computer software, physical hardware, or other context that solves tasks requiring human-like perception, cognition, planning, learning, communication, or physical action; (3) an artificial system designed to think or act like a human, including cognitive architectures and neural networks; (4) a set of techniques, including machine learning, that is designed to approximate a cognitive task; and (5) an artificial system designed to act rationally, including an intelligent software agent or embodied robot that achieves goals using perception, planning, reasoning, learning, communicating, decision-making, and acting. See Pub. L. 115-232, 132 Stat. 1636, 1965 (2018).
2 Andrew W. Moore, et al., The AI Stack: A Blueprint for Developing and Deploying Artificial Intelligence, Proc. SPIE 10635, Ground/Air Multisensor Interoperability, Integration, and Networking for Persistent ISR IX, 106350C (May 4, 2018), https://doi.org/10.1117/12.2309483; see also Dave Martinez, et al., Artificial Intelligence: Short History, Present Developments, and Future Outlook, MIT Lincoln Laboratory at 27 (Jan. 2019), https://www.ll.mit.edu/media/9526.
3 Note that model-based AI requires data for the manual construction of the model(s). Typically, this involves less data than statistical machine learning, but more human effort.
4 Saleema Amershi, et al., Software Engineering for Machine Learning: A Case Study, ICSE-SEIP ’19 Proceedings of the 41st International Conference on Software Engineering at 291-300 (2019), https://2019.icse-conferences.org/details/icse-2019-Software-Engineering-in-Practice/30/Software-Engineering-for-Machine-Learning-A-Case-Study; D. Sculley, et al., Machine Learning: The High Interest Credit Card of Technical Debt, SE4ML: Software Engineering for Machine Learning (NIPS 2014 Workshop), https://ai.google/research/pubs/pub43146.
5 John Launchbury, A DARPA Perspective on Artificial Intelligence, DARPA, 4-7 (Feb. 2017), https://www.darpa.mil/attachments/AIFull.pdf.
6 The limitations of today’s statistical machine learning include, for example: the vulnerability of unknowingly learning and amplifying biases in the training data; the fact that these systems are often complex models composed of a very large number of learned parameters, making them opaque and difficult to interpret; the fact that they are trained to solve narrow tasks and lack generalization to other related problems (such as when operationally encountered data differs fundamentally in character from the training data); and the fact that they require large amounts of labeled training data.
7 Andrew Moore, When AI Becomes an Everyday Technology, Harvard Business Review (June 7, 2019), https://hbr.org/2019/06/when-ai-becomes-an-everyday-technology.
8 Tom B. Brown, et al., Language Models are Few-Shot Learners, arXiv (July 22, 2020), https://arxiv.org/abs/2005.14165.
9 Zachary Arnold, et al., Tracking AI Investment: Initial Findings from the Private Markets, Center for Security and Emerging Technology (Sept. 2020), https://cset.georgetown.edu/wp-content/uploads/CSET-Tracking-AI-Investment.pdf.
10 According to UNESCO, global spending on R&D has reached a record high of almost US$1.7 trillion. See How Much Does Your Country Invest in R&D?, UNESCO Institute for Statistics (last accessed Jan. 7, 2021), http://uis.unesco.org/apps/visualisations/research-and-development-spending/.
11 For a recent debate among AI experts, see AI DEBATE 2: Moving AI Forward: An Interdisciplinary Approach, Montreal Artificial Intelligence (Dec. 23, 2020), https://montrealartificialintelligence.com/aidebate2.html.
12 A goal of these new methods is to eliminate the need for many complex calculations that make traditional training very slow. Vincent Dutordoir, Sparse Gaussian Processes with Spherical Harmonic Features, arXiv (June 30, 2020), https://arxiv.org/abs/2006.16649.
13 Envisioned translation systems will leverage user feedback, actively correcting translation and recognition errors the software makes and improving performance as the interaction between translating parties goes on. A major goal is rapid deployment to new languages that have not been seen before. For example, Carnegie Mellon’s DIPLOMAT Project makes interactive speech translation possible through an architecture called Multi-Engine Machine Translation (MEMT). DIPLOMAT gives users the ability to provide translation corrections to support rapid development for new languages. Robert Frederking, Interactive Speech Translation in the DIPLOMAT Project, Carnegie Mellon University Language Technologies Institute (last accessed Dec. 19, 2020), http://www.cs.cmu.edu/~air/papers/acl97-workshop.pdf.
14 Modeling and simulation can also help prepare for effective pandemic supply chain responses orchestrated by the government. Madhav Marathe, High Performance Simulations to Support Real-time COVID19 Response, SIGSIM-PADS ’20 (June 2020), https://dl.acm.org/doi/pdf/10.1145/3384441.3395993.
15 Edgybees (last accessed Dec. 19, 2020), https://edgybees.com/.
16 Department of Energy Announces the First Five Consortium, U.S. Department of Energy (Aug. 18, 2020), https://www.energy.gov/articles/department-energy-announces-first-five-consortium.