
Evaluating Trustworthiness of AI Systems

Webcast
In this webcast, SEI researchers discuss how to evaluate the trustworthiness of AI systems given their dynamic nature and the challenges of managing ongoing responsibility for maintaining trustworthiness.
Publisher

Software Engineering Institute


Abstract

AI system trustworthiness depends on end users’ confidence in the system’s ability to meet their needs. That confidence is built through evidence of the system’s capabilities. Trustworthy systems are designed with an understanding of the context of use and careful attention to end-user needs. In this webcast, SEI researchers discuss how to evaluate the trustworthiness of AI systems given their dynamic nature and the challenges of managing ongoing responsibility for maintaining trustworthiness.

What attendees will learn:

  • A basic understanding of what makes AI systems trustworthy
  • How to evaluate system outputs and confidence
  • How to communicate trustworthiness to end users (and affected people and communities)

About the Speaker


Carol J. Smith

Carol Smith is a senior research scientist in human-machine interaction in the SEI AI Division. In this role, Smith contributes to research and development focused on improving user experiences (UX) and interactions with the nation’s AI systems, robotics, and other complex and emerging technologies. Smith’s research includes human-computer interaction (HCI), …


Carrie Gardner

Carrie Gardner is a cybersecurity engineer at the Carnegie Mellon University Software Engineering Institute. She works primarily with the National Insider Threat Center, where she focuses on best practices in insider threat risk management. Gardner is also a part-time adjunct faculty member at the University of Pittsburgh School of …
