
Evaluating Trustworthiness of AI Systems

Webcast
In this webcast, SEI researchers discuss how to evaluate the trustworthiness of AI systems given their dynamic nature and the challenges of managing ongoing responsibility for maintaining trustworthiness.
Publisher

Software Engineering Institute


Abstract

AI system trustworthiness depends on end users’ confidence in the system’s ability to meet their needs. This confidence is gained through evidence of the system’s capabilities. Trustworthy systems are designed with an understanding of the context of use and careful attention to end-user needs. In this webcast, SEI researchers discuss how to evaluate the trustworthiness of AI systems given their dynamic nature and the challenges of managing ongoing responsibility for maintaining trustworthiness.

What attendees will learn:

  • Basic understanding of what makes AI systems trustworthy
  • How to evaluate system outputs and confidence
  • How to evaluate trustworthiness for end users (and affected people and communities)

About the Speakers

Headshot of Carol Smith.

Carol J. Smith

Carol Smith is a senior research scientist in human-machine interaction in the SEI AI Division. In this role, Smith contributes to research and development focused on improving user experiences (UX) and interactions with the nation’s AI systems, robotics, and other complex and emerging technologies. Smith’s research includes human-computer interaction (HCI), …

Headshot of Carrie Gardner.

Carrie Gardner

Carrie Gardner is a former SEI employee.

Carrie Gardner is the Technical Manager for the AI for Mission Center in the AI Division at the Carnegie Mellon University Software Engineering Institute. Carrie leads the portfolio responsible for translating the state of the possible in AI engineering into the state of practice. …
