Uncertainty Quantification in Machine Learning: Measuring Confidence in Predictions
• Podcast
Publisher: Software Engineering Institute
Abstract
Eric Heim, a senior machine learning research scientist at the Software Engineering Institute (SEI) at Carnegie Mellon University, discusses the quantification of uncertainty in machine-learning (ML) systems. ML systems can make wrong predictions and give inaccurate estimates of the uncertainty of those predictions, and it can be difficult to anticipate when they will be wrong. Heim also discusses new techniques to quantify uncertainty, identify its causes, and efficiently update ML models to reduce uncertainty in their predictions. The work of Heim and his colleagues at the SEI Emerging Technology Center closes the gap between the scientific and mathematical advances coming out of the ML research community and the practitioners who use ML systems in real-world contexts, such as software engineers, software developers, data scientists, and system developers.
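The episode itself does not walk through code, but as a minimal sketch of what "quantifying uncertainty in a prediction" can look like in practice, the example below (illustrative only; the ensemble outputs and function names are assumptions, not material from the podcast) averages the softmax outputs of a small ensemble and reports the predictive entropy of the result as a simple uncertainty score.

```python
import numpy as np

def predictive_entropy(probs: np.ndarray) -> float:
    """Entropy (in nats) of a categorical predictive distribution.

    Higher values indicate the model is less certain about its prediction.
    """
    probs = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return float(-np.sum(probs * np.log(probs)))

# Hypothetical softmax outputs from three ensemble members for one input.
member_probs = np.array([
    [0.70, 0.20, 0.10],
    [0.65, 0.25, 0.10],
    [0.10, 0.30, 0.60],  # one member disagrees sharply
])

# Average the members into a single predictive distribution,
# then use its entropy as a rough measure of uncertainty.
mean_probs = member_probs.mean(axis=0)
print("predicted class:", int(np.argmax(mean_probs)))
print("uncertainty (entropy):", round(predictive_entropy(mean_probs), 3))
```

Ensemble disagreement and predictive entropy are only two of many ways to estimate uncertainty; calibration methods, Bayesian approaches, and conformal prediction are other common options in the research literature that this line of work draws on.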
About the Speaker
Eric Heim
Dr. Eric Heim is a senior machine learning research scientist at the SEI’s Emerging Technology Center. Before arriving at SEI, Heim led a basic and applied machine learning research group at the Air Force Research Laboratory, Information Directorate. Heim earned a doctoral degree in computer science in 2015 from the …