
Characterizing and Detecting Mismatch in Machine-Learning-Enabled Systems

Conference Paper
This paper reports findings from a study of mismatches that occur in end-to-end development of machine-learning-enabled systems and discusses their implications for improving development practices.
Publisher

IEEE

Abstract

The increasing availability of machine learning (ML) frameworks and tools, along with their promise to improve solutions to data-driven decision problems, has made ML techniques increasingly popular in software systems. However, end-to-end development of ML-enabled systems, as well as their seamless deployment and operations, remains a challenge. One reason is that development and deployment of ML-enabled systems involve three distinct workflows, perspectives, and roles: data science, software engineering, and operations. When these perspectives are misaligned due to incorrect assumptions, ML mismatches arise that can result in failed systems. We conducted an interview and survey study in which we collected and validated common types of mismatches that occur in end-to-end development of ML-enabled systems. Our analysis shows that the importance each role assigns to relevant mismatches varies, which may contribute to these mismatched assumptions. In addition, the mismatch categories we identified can be specified as machine-readable descriptors that contribute to improved ML-enabled system development. In this paper, we report our findings and their implications for improving end-to-end ML-enabled system development.

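To illustrate the idea of a machine-readable mismatch descriptor, the sketch below shows one hypothetical way such a descriptor could be represented and serialized. The field names, the example category ("trained model"), and the attribute values are illustrative assumptions for this sketch, not the schema defined in the paper.

```python
# Hypothetical sketch of a machine-readable mismatch descriptor.
# Field names and example values are assumptions, not the paper's schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class MismatchDescriptor:
    """An assumption one role documents so that other roles can check it."""
    category: str                                   # system element the assumption concerns
    attribute: str                                  # specific property that must be shared
    owning_role: str                                # role that produces the information
    consuming_roles: list = field(default_factory=list)  # roles that rely on it
    value: str = ""                                 # documented value of the attribute


# Example: data scientists document the expected input of a trained model so
# that software engineers integrating it do not have to guess.
descriptor = MismatchDescriptor(
    category="trained model",
    attribute="expected input format",
    owning_role="data science",
    consuming_roles=["software engineering"],
    value="JSON array of normalized numeric features",
)

# Serializing to JSON keeps the descriptor both human- and machine-readable.
print(json.dumps(asdict(descriptor), indent=2))
```

In a setup like this, descriptors could be validated automatically at integration or deployment time, surfacing mismatched assumptions between roles before they cause system failures.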
The introductory presentation and interview guide are both available in the replication package for this study.