
Designing Trustworthy AI for Human-Machine Teaming

Carol Smith

Artificially intelligent (AI) systems hold great promise to empower us with knowledge and enhance human effectiveness. As Department of Defense (DoD) warfighters partner with AI systems more frequently, we will identify more opportunities to clarify the limits of AI and to set realistic expectations for these systems. As a senior research scientist in human-machine interaction at the SEI's Emerging Technology Center, I am working to further our understanding of how humans and machines can better collaborate (i.e., team) to solve important problems, as well as of our responsibilities and of how that work continues once AI systems are operational. This blog post presents a framework I have developed, in the form of a checklist, that builds on the importance of diverse teams and ethical standards to ensure that AI systems are trustworthy and can effectively augment warfighters.

The Importance of Diverse Teams and Ethical Standards in AI Systems

The experience of working with people who are significantly different from us can be challenging. Studies have shown, however, that diverse teams are worth the effort: working with people unlike ourselves increases our capacity for innovation and creative thinking. Diversity for teams creating AI systems is not just about demographics such as gender, race, and disability status, though those attributes are important. Diversity for these teams means bringing together talented, experienced people who have a wide range of life experiences, skill sets, educational backgrounds, problem-framing approaches and thinking processes, disability statuses, social statuses, and experiences of being the "other." Talented individuals with a variety of life experiences will be better prepared to create systems for our diverse warfighters and the range of challenges they face. These individuals need to be truly different and feel accepted for those differences in inclusive environments, not simply tolerated.

For AI systems to be trustworthy, individuals within AI development teams need to draw on their commonalities. AI software teams should adopt a set of technology ethics, such as the ACM's Code of Ethics and Professional Conduct or the Montreal Declaration for a Responsible Development of Artificial Intelligence, to help bridge differences between individuals. The DoD adopted its Ethical Principles for AI in February 2020, based on the set of AI ethical guidelines that the Defense Innovation Board (DIB) proposed in October 2019. In detailing these recommendations, the DoD stated that AI is a rapidly evolving field and that "no organization that currently develops or fields AI systems or espouses AI ethics principles can claim to have solved all the challenges embedded" in the principles.

Having a shared set of technology ethics to coalesce around, and from which commonalities can be drawn, strengthens the team and its work. When designing AI systems that will ultimately affect human well-being (decisions that affect a person's life, quality of life, health, or reputation), people should always be involved (i.e., teaming) in developing and operating the AI systems and should have ultimate authority over their intended behavior. When humans and machines work together, it is important to clearly differentiate which decisions are within the purview of an AI system and appropriate for it to make, and which decisions must be made by a human so that context and humanity are preserved.

The Human-Machine Teaming Framework will help AI software teams address these challenges, whether they are working on defense problems or in other domains contending with the rapid advancement of AI, such as driverless vehicles, finance, and healthcare. The framework will help drive these conversations toward a clear understanding of the expectations in specific situations. Human-machine teams are strongest when humans can trust AI systems to behave as expected: safely, securely, and understandably.

4 Principles for Designing an Ethical AI System

The framework is built around four themes that are essential to AI systems:

  • Accountable to humans. AI systems must be built to ensure that humans are always in ultimate control and responsible for everything the AI system does. As Michael McQuade, member of the DIB and vice president for research at Carnegie Mellon University, said, "Just because it has reasoning capability, it does not remove the responsibility from people... What is new about AI does not change human responsibility." This concept is particularly significant for decisions that affect a person's life, quality of life, health, or reputation. All decisions and outcomes must remain the designated responsibility of humans, both to ensure that each decision is made carefully and to maintain the role of AI systems as support for humans.

    Depending on the system the diverse team is creating, this may not be an obvious piece of guidance, so this is precisely where the discussions must begin. For example, if the AI system can react much more quickly than a human--and that is a desirable feature for safety or other reasons--it becomes important to determine the operating limits of the system and who is responsible for it. If the system will prioritize potential outcomes, the team must discuss how to show outcomes that were not prioritized but are common, as well as how to show outcomes that were not prioritized because they are rare but might be appropriate for this situation. If the conditions change, how will the effect on potential outcomes be shown? (A minimal sketch of one way to present such outcomes appears after this list.)

  • Cognizant of speculative risks and benefits. Risks to humans' personal information, and to decisions that affect their life, quality of life, health, or reputation, must be anticipated as much as possible and evaluated long before humans use or are affected by the AI system. While we cannot imagine all potential outcomes, making the effort to speculate is the only ethical option. As previously mentioned, a cross-functional, diverse team will uncover a broader set of issues than one whose members have largely shared experiences. Including the people who will use the system in this work will help ensure that a broad set of scenarios is considered. The diverse team must make time to identify the full range of harmful and malicious uses of the AI system. This identification can be done in various ways, ideally in a workshop or series of workshops where information about the system's use is shared and a diverse group raises potential issues.

  • Respectful and secure. To gain the trust of humans, AI systems must be respectful and secure. This work starts with a team that values humanity, ethics, equity, fairness, accessibility, diversity, and inclusion, and carries those values into its work. These are challenging concepts, especially in situations where we need to maintain advantages over adversaries who may not share the values we aspire to as a civil society. The diverse teams we bring together should include diversity among the people who make and curate the content for initial training (to reduce unintended and unwanted bias), the people who create the algorithms and train the AI system, and the people who will monitor and manage the production system.

    The meaning of diversity will vary among the different groups responsible for developing an AI system. In some cases, for example, the focus will be on educational differences, work experience, or other differentiators beyond the obvious ones of gender, race, culture, and disability status. This emphasis on diversity does not mean lowering the bar of experience and talent, but rather extending that bar so it is more inclusive of individuals who are not typically considered.

  • Honest and usable. An honest and usable system values transparency, with the goal of engendering the trust of everyone interacting with it. Achieving this goal includes explaining the AI system and its limitations in language that the audience understands. For example, a new user of the system should be able to ascertain, at least at a high level, what the AI system does and how it works, and should have access to more detailed explanations. The limitations should be stated in plain language that is easily understood. Achieving this goal can be hard in secure and classified environments, so in those situations it is essential to carefully balance the risks of information leakage against the benefits of trust.

    For example, a facial recognition system trained primarily on white faces may be biased and may fail to recognize darker skin tones. Similarly, a voice-to-text system trained on American English may recognize neither accents from outside the U.S. nor other forms of English. By being forthright about known bias and weakness in the system, we allow people to ascertain for themselves whether it will be useful to them. (A minimal sketch of such a plain-language disclosure appears after this list.)

    A diverse team reduces the chances of creating solutions that reflect the team's own biases, such as computer vision systems that recognize only white faces as "people." Just like the humans creating it, no AI system is perfect. It will have limitations, biases, and other imperfections, and these should be communicated clearly to the analysts and warfighters interacting with the AI system.
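The two sketches below illustrate these ideas in code. Neither comes from the framework itself; all names, fields, and thresholds are hypothetical and exist only to make the discussion concrete.

First, a minimal sketch of presenting prioritized outcomes without hiding the alternatives, as discussed under accountable to humans. The Outcome structure, the base_rate field, and the COMMON_THRESHOLD cutoff are assumptions for illustration:

    from dataclasses import dataclass

    COMMON_THRESHOLD = 0.2  # hypothetical cutoff separating "common" from "rare"

    @dataclass
    class Outcome:
        label: str
        score: float      # the system's priority score under current conditions
        base_rate: float  # how often this outcome has occurred historically

    def present_outcomes(outcomes: list, top_n: int = 3) -> None:
        """Show the prioritized outcomes, but keep common and rare
        alternatives visible so the human retains the full picture."""
        ranked = sorted(outcomes, key=lambda o: o.score, reverse=True)
        prioritized, rest = ranked[:top_n], ranked[top_n:]

        print("Prioritized outcomes:")
        for o in prioritized:
            print(f"  {o.label} (score {o.score:.2f})")

        print("Not prioritized, but common:")
        for o in rest:
            if o.base_rate >= COMMON_THRESHOLD:
                print(f"  {o.label} (seen in {o.base_rate:.0%} of past cases)")

        print("Not prioritized and rare (may still apply here):")
        for o in rest:
            if o.base_rate < COMMON_THRESHOLD:
                print(f"  {o.label}")

If conditions change, present_outcomes would simply be re-run with re-scored outcomes, making the effect of the change visible rather than silent.

Second, a minimal sketch of an honest and usable disclosure: a plain-language summary of what the system does, how it works, and its known limitations and biases, shown to every new user. The SystemDisclosure structure and its contents are assumptions, loosely modeled on the voice-to-text example above:

    from dataclasses import dataclass, field

    @dataclass
    class SystemDisclosure:
        """Plain-language disclosure presented to every new user."""
        purpose: str       # what the system does, at a high level
        how_it_works: str  # brief, non-technical explanation
        known_limitations: list = field(default_factory=list)
        known_biases: list = field(default_factory=list)

        def summary(self) -> str:
            lines = [
                f"What this system does: {self.purpose}",
                f"How it works: {self.how_it_works}",
                "Known limitations:",
                *[f"  - {item}" for item in self.known_limitations],
                "Known biases:",
                *[f"  - {item}" for item in self.known_biases],
            ]
            return "\n".join(lines)

    disclosure = SystemDisclosure(
        purpose="Transcribes spoken English to text.",
        how_it_works="A model trained on recordings of American English speech.",
        known_limitations=["Accuracy drops sharply in noisy environments."],
        known_biases=["Trained primarily on American English; may not recognize "
                      "accents from outside the U.S. or other forms of English."],
    )
    print(disclosure.summary())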

Challenges to Designing Ethical AI Systems

A common challenge throughout this work is the danger of humanizing an AI system--my use of words such as respectful and honest can contribute to the very humanization I want to avoid. For example, machine learning can be used to teach an AI system concepts, such as All humans must be treated equally and AI systems must respect people's privacy. Teams working to design and develop AI systems must understand that an AI system can be taught certain qualities that mirror respect, but it cannot be taught to actually respect a human. Even in cybersecurity situations, where tactics that target and exploit individuals' privacy are used, this work should still be deliberated.

For example, consider the concept of justice. It is important for teams to understand that the AI system cannot truly understand the concept of justice and that its "understanding" of justice is limited to a very methodical, computer-sensing approach. It is therefore important to devise an appropriate way for the analysts and warfighters interacting with the AI system to see and understand these limitations--that the AI system does not possess the ability to fully exhibit respect or justice in its decision making. Despite these limitations, the AI system should still demonstrate behaviors indicative of respect and use language and attributes that communicate respect.

The danger of attributing humanistic traits to an AI system is that people might then approach it with an inaccurate frame of mind. This danger applies both to engineers as they design and build the AI system and to analysts as they interact with it and make assumptions about it.

It is also important that AI systems be designed so that users can discern between an AI system and another human. Preserve that difference so that users can understand, Oh, I'm talking to the AI versus Oh, I'm talking to Erica. Knowing that difference matters because it carries different expectations, different inferences, and perhaps different information shared. In some contexts, other needs may take priority over this discernment; in those situations, too, careful deliberation is important. A minimal sketch of one way to preserve the distinction follows.
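As an illustration, a conversational interface can label every message with its source so the user never has to guess. The Sender enumeration and the names below are hypothetical, not part of the framework:

    from enum import Enum

    class Sender(Enum):
        HUMAN = "human"
        AI = "AI system"

    def format_message(sender: Sender, name: str, text: str) -> str:
        """Prefix each message with who (or what) produced it, so users
        can always tell whether they are talking to a person."""
        return f"[{name} ({sender.value})] {text}"

    print(format_message(Sender.AI, "Assistant", "Here are the prioritized routes."))
    print(format_message(Sender.HUMAN, "Erica", "Thanks, reviewing them now."))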

Looking Ahead

AI continues to evolve, and this framework is a first step toward helping teams deal with the complexity inherent in these systems. My work on the Human-Machine Teaming Framework will continue to enable organizations to bring together diverse teams with clear expectations and with mitigation plans for responding in constructive ways that protect people. The framework is available for download from the SEI Digital Library at https://resources.sei.cmu.edu/library/asset-view.cfm?assetid=636474.

Additional Resources

I presented the paper on which this blog post is based, Designing Trustworthy AI: A Human-Machine Teaming Framework to Guide Development, at the AAAI (Association for the Advancement of Artificial Intelligence) Fall Symposium in November 2019.

An initial version of the checklist was developed with that paper, and an updated version of the checklist and agreement is available on the SEI website. The checklist and agreement are meant to be paired with a set of technology ethics to guide the development of ethical AI systems.

Read a commentary piece that I wrote for War on the Rocks, Creating a Curious, Ethical, and Diverse AI Workforce.
