
Using Role-Playing Scenarios to Identify Bias in LLMs

Podcast
Publisher
Software Engineering Institute

DOI
10.58012/y68p-cr82


Abstract

Harmful biases in large language models (LLMs) make AI less trustworthy and secure. Auditing for biases can help identify potential solutions and develop better guardrails to make AI safer. In this podcast, Katie Robinson and Violet Turri, researchers in the SEI’s AI Division, discuss their recent work using role-playing game scenarios to identify biases in LLMs.

About the Speakers

Katherine-Marie Robinson

Katherine-Marie Robinson is an assistant design researcher in the SEI’s AI Division. Since joining the SEI in September 2022, Robinson has worked on a wide variety of projects, bringing a responsible AI (RAI) lens to the work at hand, including researching and developing tools, curricula, and …


Violet Turri

Violet Turri is an assistant software developer in the SEI’s AI Division, where she works on multiple machine learning engineering projects with an emphasis on explainability, test and evaluation strategies, and computer vision. Turri holds a bachelor’s degree in computer science from Cornell University and has a research background in human-computer …
