
MLTE: System-Centric Test and Evaluation of Machine Learning Models

Software
The SEI’s Machine Learning Test and Evaluation (MLTE) is an open source process and tool for evaluating machine learning models from inception to deployment.
Publisher

Software Engineering Institute

Abstract

Our one-of-a-kind tool and process, Machine Learning Test and Evaluation (MLTE, pronounced “melt”), can be used to evaluate machine learning (ML) models from inception to deployment.

Designed for interdisciplinary, cross-team coordination, the MLTE framework defines a testing process and offers a supporting tool that helps teams negotiate model requirements, evaluate model functionality, and share test results with all stakeholders. To support the development, testing, and evaluation of ML models, the MLTE tool provides a range of artifacts, including negotiation cards, quality attribute scenarios, a test catalog, and a reporting function.

Overall, the MLTE process and tool increase the production readiness of ML models.
