An Introduction to the MLOps Tool Evaluation Rubric

Webcast
In this webcast, Violet Turri and Emily Newman discuss the challenges of finding the right tools to support Machine Learning Operations (MLOps) pipelines and introduce the MLOps Tool Evaluation Rubric.
Publisher

Software Engineering Institute

Abstract

Organizations looking to build and adopt artificial intelligence (AI)–enabled systems face the challenge of identifying the right capabilities and tools to support Machine Learning Operations (MLOps) pipelines. Navigating the wide range of available tools can be especially difficult for organizations new to AI or those that have not yet deployed systems at scale. This webcast introduces the MLOps Tool Evaluation Rubric, designed to help acquisition teams pinpoint organizational priorities for MLOps tooling, customize rubrics to evaluate those key capabilities, and ultimately select tools that will effectively support ML developers and systems throughout the entire lifecycle, from exploratory data analysis to model deployment and monitoring. The presenters walk viewers through the rubric’s design and content, share lessons learned from applying the rubric in practice, and conclude with a brief demo.

What Attendees Will Learn:

  • How to identify and prioritize key capabilities for MLOps tooling within their organizations
  • How to customize and apply the MLOps Tool Evaluation Rubric to evaluate potential tools effectively
  • Best practices and lessons learned from real-world use of the rubric in AI projects

About the Speaker

Violet Turri

Violet Turri is an assistant software developer in the SEI AI Division, where she works on multiple machine learning engineering projects with an emphasis on explainability, test and evaluation strategies, and computer vision. Turri holds a bachelor’s degree in computer science from Cornell University and has a research background in human-computer …