Measuring the Trustworthiness of AI Systems
Software Engineering Institute
DOI (Digital Object Identifier): 10.58012/pb35-9p06
The ability of artificial intelligence (AI) to partner with the software engineer, doctor, or warfighter depends on whether those end users trust the AI system to work effectively with them and deliver the outcomes promised. To build appropriate levels of trust, expectations must be managed for what AI can realistically deliver. In this podcast from the SEI's AI Division, Carol Smith, a senior research scientist specializing in human-machine interaction, joins design researchers Katie Robinson and Alex Steiner to discuss how to measure the trustworthiness of an AI system, as well as the questions organizations should ask before deciding whether to adopt a new AI technology.
About the Speakers
Katherine-Marie Robinson is an assistant design researcher in the SEI's AI Division. Since joining the SEI in September 2022, Robinson has worked on a wide variety of projects in which she aims to bring a responsible AI (RAI) lens to the work at hand, including researching and developing tools, curricula, and …
Carol Smith is a senior research scientist in human-machine interaction in the SEI's AI Division. In this role, Smith contributes to research and development focused on improving user experiences (UX) and interactions with the nation's AI systems, robotics, and other complex and emerging technologies. Smith's research includes human-computer interaction (HCI), …
Alexandrea Steiner is an assistant design researcher in the Artificial Intelligence (AI) Division at the SEI. She began her SEI career as a multimedia designer on the Communication Design team. Her role as a design researcher focuses on collaborating with others to understand challenges and design solutions that meet research …