Poster - Characterizing and Detecting Mismatch in ML-Enabled Systems
Software Engineering Institute
Development, deployment, and operation of machine learning (ML) systems involve three perspectives that often operate separately and speak different languages. As a result, mismatches can arise between the assumptions each perspective makes about the elements of an ML-enabled system and the guarantees those elements actually provide. Our solution is to develop descriptors for the elements of ML-enabled systems by eliciting examples of mismatch from practitioners, formalizing the definition of each mismatch in terms of the data needed to detect it, and identifying how that data could be used to automate mismatch detection.
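To make the idea of descriptor-driven detection concrete, the sketch below shows one hypothetical shape it could take: machine-readable descriptors for two system elements, and a check that flags one kind of mismatch (model input schema vs. operational data schema). All names, fields, and the specific mismatch are illustrative assumptions, not the poster's actual descriptors.

```python
# Hypothetical sketch of descriptor-based mismatch detection.
# All class/field names are illustrative, not from the poster.
from dataclasses import dataclass

@dataclass
class TrainedModelDescriptor:
    # Assumptions the model makes about its inputs.
    name: str
    input_features: dict  # feature name -> expected type

@dataclass
class OperationalDataDescriptor:
    # Guarantees the operational environment actually provides.
    source: str
    provided_features: dict  # feature name -> provided type

def detect_schema_mismatch(model, data):
    """Return human-readable mismatches between what the model
    assumes and what the operational data actually provides."""
    mismatches = []
    for feat, expected in model.input_features.items():
        if feat not in data.provided_features:
            mismatches.append(f"missing feature: {feat}")
        elif data.provided_features[feat] != expected:
            mismatches.append(
                f"type mismatch for {feat}: expected {expected}, "
                f"got {data.provided_features[feat]}")
    return mismatches

model = TrainedModelDescriptor("churn-model", {"age": "int", "plan": "str"})
data = OperationalDataDescriptor("prod-stream", {"age": "float"})
print(detect_schema_mismatch(model, data))
```

Because both descriptors are structured data rather than tacit knowledge held by one team, a check like this could run automatically whenever either element changes.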