AI Trust Lab: Trustworthy AI for a Safer Nation

Created October 2019 • Updated February 2025

To support Department of Defense (DoD) mission success, the Software Engineering Institute’s Trust Lab advances trustworthy and human-centered artificial intelligence (AI) engineering practices. We work with experts and stakeholder organizations to streamline their practices and to create tools to improve AI-enabled and autonomous system interfaces.

AI Must Work with—and for—People

As DoD organizations increasingly seek the strategic advantage that AI systems can provide, AI trustworthiness is essential: users must know that an AI system will operate as expected. AI holds great promise to empower users with knowledge and augment their effectiveness; however, that promise comes with challenges and risks. For example, imbalanced datasets can reduce system effectiveness, shortcomings in transparency and explainability can lead to misuse or disuse of AI, and systems can pose security threats if they reveal protected information without authorization. We can and must keep users safe and in control, particularly in government and public sector applications that affect broad populations and operate in high-stakes contexts. How can AI development teams harness the power of AI systems and design them to be valuable to humans?

Trustworthy systems fit mission and user needs, use appropriate data, and are reliable, robust, and secure. Designing and developing trustworthy, human-centered AI systems is a key focus of the AI engineering discipline, and the SEI established the Trust Lab to advance engineering practices for trustworthy AI.

Advancing Trustworthiness Through Collaboration and Research

The Trust Lab collaborates with organizations including the U.S. Defense Innovation Unit (DIU), the DoD Chief Digital and Artificial Intelligence Office (CDAO), DARPA, and NIST to develop techniques that provide evidence that an AI-enabled system will interact with humans in ways that are useful to DoD missions. Much of this work centers on user interfaces, user experiences, visualizations, and explainability techniques that have been empirically shown to enhance the trustworthiness of AI systems. The lab also develops streamlined engineering practices, procedures, and standards for clearly stating, documenting, and reducing the risks of deploying AI systems.

Additionally, because trustworthiness often depends on the data powering the AI system, the Trust Lab team is conducting research to streamline data inspection and to help machine learning (ML) engineers and development teams fully assess their datasets.

Learn More

Trust and AI Systems

Podcast

Carol Smith, a senior research scientist in human-machine interaction, and Dustin Updyke, a senior cybersecurity engineer in the SEI’s CERT Division, discuss the construction of trustworthy AI systems and the factors that influence human trust in AI systems.

Listen

What is Explainable AI?

Blog Post

Read

Implementing the DoD's Ethical AI Principles

Podcast

In this SEI podcast, Alex Van Deusen and Carol Smith, both of the SEI’s AI Division, discuss a recent project in which they helped the Defense Innovation Unit (DIU) develop guidelines for the responsible use of AI.

Listen