
AI Trust and Autonomy Labs Fill the Gap Between AI Breakthroughs and DoD Deployment

June 4, 2024—The Department of Defense (DoD) has recognized the potential benefits of using artificial intelligence (AI) and machine learning (ML) systems. At the same time, recent government guidance urges responsible use of this rapidly changing technology. To help agencies and the defense industrial base meet the government’s AI needs, the SEI has recently formalized two laboratories in key areas: AI trust and AI autonomy.

Last year, the White House issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, and the DoD released its Data, Analytics, and Artificial Intelligence Adoption Strategy. These guidelines are just two signals of the federal government’s desire to leverage AI systems, but with care. As the DoD’s only federally funded research and development center (FFRDC) focused on software, the SEI is working to satisfy the department’s cautious appetite.

A Holistic View of ML, AI, and Autonomy

Also in 2023, the DoD updated its Directive 3000.09, “Autonomy in Weapon Systems.” The directive requires, among other things, that systems demonstrate realistic performance, capability, reliability, effectiveness, and suitability and be consistent with the DoD’s AI ethical principles and Responsible Artificial Intelligence Strategy and Implementation Pathway.

Creating armed or unarmed autonomous systems that meet these standards will require specialized AI engineering. The SEI’s AI for Autonomy Lab researches and demonstrates the application of AI-related technologies for improving the performance of autonomy systems. The lab brings together experts in AI and ML, software engineering, test and evaluation, national security, cybersecurity, and systems in the air, sea, space, and land domains. The group also has facilities for dedicated computing and for testing of robots and other hardware. The lab’s initial focus will be on individual uncrewed vehicles (UxVs), also called unmanned vehicles, for the air, sea, and land domains, and on teams of collaborative UxVs.

The armed services were researching autonomous systems even before the Defense Advanced Research Projects Agency’s (DARPA’s) Grand Challenges in the early 2000s, which were long-distance navigation competitions for autonomous ground vehicles. But the last decade has seen advances in AI and ML as well as increasing investment in autonomous technology by rival nation states. The DoD has also increased the tempo of its autonomy R&D, with efforts such as transport and sustainment support in the Army, the Air Force’s collaborative combat aircraft (CCA) program, and the DoD-wide Replicator initiative.

SEI researchers have been working on autonomy-related projects since before the formation of the SEI’s AI Division in 2021. Recently, autonomy has become more strategically important to the DoD, said John Stogoski, the Autonomy Lab’s technical lead, so the SEI created the Autonomy Lab in fall 2023. “Having a team focusing on these unique problem areas and on growing relationships in the community is essential,” said Stogoski. The lab currently comprises a handful of SEI specialists and interns, and Stogoski said they are looking to hire more.

The lab takes a holistic view of how systems should use embedded ML models and AI to solve autonomy problems. Its three major focus areas are planning and control, simulation methods, and evaluation practices. “We’re not trying to solve all of the issues with autonomy, uncrewed vehicles, or robots,” said Stogoski. “We want to help customers use AI in a smart way that ensures it’s supportable, provides value, and advances capability.”

The AI for Autonomy Lab is currently partnering with two components of the Carnegie Mellon University Robotics Institute: AirLab, an autonomous robot research group, and the National Robotics Engineering Center. Stogoski said the lab is ready to focus on sponsored work and to expand relationships with DoD agencies, academic research groups, FFRDCs, university-affiliated research centers, and vendors of AI and autonomy solutions.

The Human-AI Trust Challenge

Whether on the battlefield or in the back office, military service members must understand the capabilities of AI systems to determine if they will perform correctly in a given context. The SEI’s new AI Trust Lab advances the development of trustworthy AI through accelerated research and collaboration. The lab develops frameworks, tools, and design guidelines driven by trustworthy, human-centered, and responsible AI engineering practices.

The SEI’s AI Division has conducted significant work on AI trustworthiness with the DoD’s Chief Digital and Artificial Intelligence Office (CDAO) and the Defense Innovation Unit (DIU), a frequent SEI collaborator. AI Trust Lab lead Carol Smith and her colleagues have for years studied aspects of AI trustworthiness and how to make systems that people are willing to be responsible for. The eight SEI experts in the newly formed Trust Lab have backgrounds as diverse as anthropology, visual design, and human-computer interaction. “We’re bringing all those skills together so we can focus on big problems and work to solve them,” said Smith.

These problems revolve around the human-centered pillar of AI engineering. “Our R&D supports the piece that is often missing in AI, what happens between the people and the glass,” said Smith. “We look at how developers and users of AI can have enough information to determine if the system is trustworthy.”

The Trust Lab develops techniques to provide evidence that an AI-enabled system will interact with humans in ways deemed acceptable and usable in DoD missions. Some of these techniques ensure that AI systems are developed responsibly and align with ethical principles. Much of the work revolves around user interfaces, user experiences, visualizations, and explainability techniques that have been empirically shown to enhance the trustworthiness of AI systems. The lab also develops rigorous engineering practices, procedures, and standards for clearly stating, documenting, and reducing the risks associated with deploying AI systems.

Trustworthiness is a challenge for AI systems because they are dynamic and do not always provide information reliably or repeatably, Smith explained. “The interactions are going to be complex, and the information is going to need measures of confidence.”

One of the Trust Lab’s top priorities is the development of measures for AI usability, transparency and explainability, likelihood of failure, and equity, justice, and fairness. In this effort, the lab collaborates with the DIU, DARPA, and the National Institute of Standards and Technology (NIST).

This work combines engineering and sociotechnical approaches. For example, the Trust Lab could collaborate with data scientists and ML experts to develop measures of confidence for an ML system. The team would then design ways to display the confidence measures and test how they affect operator trust in the system. Currently, the lab is developing a toolkit to help product teams triage the risks of adding an AI component to existing systems.
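
As a concrete illustration of that workflow, consider a minimal sketch, in Python, of how a per-prediction confidence measure might be surfaced to an operator. Every name and the threshold value here are hypothetical assumptions for illustration, not the Trust Lab’s actual tooling:

from dataclasses import dataclass

@dataclass
class Prediction:
    label: str         # the model's output, e.g., a detected object class
    confidence: float  # calibrated probability in [0, 1]

REVIEW_THRESHOLD = 0.80  # hypothetical cutoff, set empirically during test and evaluation

def operator_display(pred: Prediction) -> str:
    """Render a prediction with an explicit confidence cue so the operator
    can judge whether to rely on the output or send it for review."""
    cue = f"{pred.label} (confidence {pred.confidence:.0%})"
    if pred.confidence < REVIEW_THRESHOLD:
        cue += " -- LOW CONFIDENCE: operator review recommended"
    return cue

print(operator_display(Prediction("vehicle", 0.93)))  # relied on directly
print(operator_display(Prediction("vehicle", 0.54)))  # flagged for review

In the lab’s sociotechnical approach, both the wording of such cues and the review threshold would themselves be tested empirically for their effect on operator trust.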

“We have to understand what the human is trying to achieve with the system, what kinds of information will help them, and how to help the technology meet those needs,” said Smith.

A More Complete Approach to AI R&D for Defense

Both the AI Trust Lab and AI for Autonomy Lab support the Center for Calibrated Trust Measurement and Evaluation (CaTE). In 2023, the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) and the SEI launched CaTE to establish methods for assuring trustworthiness in AI systems, with an emphasis on the interaction between humans and autonomous systems.

The two new labs join the AI Division’s Adversarial ML Lab and Advanced Computing Lab. “With the addition of the Autonomy and Trust labs, the AI Division adds two key focus areas that enhance our work in promoting and advancing the art and practice of AI engineering,” said Eric Heim, the SEI AI Division’s chief scientist. “We strive towards making AI more robust, secure, scalable, and human-centered, all key attributes in ensuring that DoD missions can use AI successfully and responsibly. Each of our four labs represents important focus areas of fundamental and applied research to fill the gap between the latest breakthroughs in AI and their use in the DoD.”

Learn more about the SEI’s research in AI trust and autonomy through our blog, podcasts, and digital library. To find out about opportunities to collaborate with the Trust Lab or Autonomy Lab, contact us through our website.

Photo: Patrick Hunter