
Center for Calibrated Trust Measurement and Evaluation (CaTE)—Guidebook for the Development and TEVV of LAWS to Promote Trustworthiness

White Paper
This guidebook supports personnel in the development and testing of autonomous weapon systems that employ machine learning (ML), focusing on system reliability and operator trust.
Publisher

Software Engineering Institute

DOI (Digital Object Identifier)
10.1184/R1/28701104

Abstract

This guidebook provides operational and developmental test and evaluation personnel with recommendations for effectively developing, testing, evaluating, verifying, and validating lethal autonomous weapon systems (LAWS) that incorporate machine learning models. Focusing specifically on system trustworthiness and operator trust, this resource was developed by the Center for Calibrated Trust Measurement and Evaluation (CaTE). The guidebook supports key objectives, including establishing a trust assurance case, evaluating operator trust, providing evidence of system trustworthiness, establishing methods to create systems that merit operator confidence, and promoting responsible artificial intelligence (RAI) principles throughout the development lifecycle.