Knowing When You Don't Know: Quantifying and Reasoning about Uncertainty in Machine Learning Models
This project focuses on detecting model uncertainty and mitigating its effects on the quality of model inference.
Publisher: Software Engineering Institute
Abstract
This project aims to accomplish the following objectives:
- Develop new techniques, and apply existing ones, that give ML models the ability to express when they are likely to be wrong, without drastically increasing the computational burden, requiring significantly more training data, or sacrificing accuracy (one common approach of this kind is sketched after this list).
- Develop techniques that detect the cause of uncertainty, learning algorithms that allow ML models to be improved once that cause has been determined, and methods for reasoning in the presence of uncertainty without explicit retraining.
- Incorporate uncertainty modeling, and methods to increase certainty, into the ML models of government organizations.
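
The abstract does not name the specific techniques under development. As a rough illustration of what "expressing when a model is likely to be wrong" can look like in practice, the sketch below uses Monte Carlo dropout, one widely used uncertainty-quantification method; it is not this project's method. It assumes PyTorch, and the model architecture, sample count, and entropy threshold are illustrative choices.

```python
# A minimal sketch (not the project's method): Monte Carlo dropout as one common
# way to let a classifier report how uncertain it is about each prediction.
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    """Small classifier with a dropout layer that stays active at inference time."""
    def __init__(self, n_features: int, n_classes: int, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p),
            nn.Linear(64, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def predict_with_uncertainty(model: nn.Module, x: torch.Tensor, n_samples: int = 30):
    """Run several stochastic forward passes with dropout left on; return the mean
    class probabilities and the predictive entropy as an uncertainty score."""
    model.train()  # keep dropout active during prediction
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

# Usage: flag inputs whose predictive entropy exceeds a chosen threshold so a
# downstream system can defer to a human or request more training data.
model = DropoutClassifier(n_features=20, n_classes=3)
x = torch.randn(5, 20)
mean_probs, entropy = predict_with_uncertainty(model, x)
uncertain = entropy > 0.9  # illustrative threshold
```

Monte Carlo dropout is only one point in the design space; alternatives such as deep ensembles or explicit calibration trade computational cost for better-quality uncertainty estimates, which is exactly the kind of trade-off the first objective above targets.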
Our work seeks to realize three overarching benefits. First, ML models in DoD AI systems will be made more transparent, resulting in safer, more reliable use of AI in mission-critical applications. Second, ML models will be updated more quickly and efficiently to adapt to dynamic changes in operational deployment environments. Third, we will make adoption of AI possible for missions where AI is currently deemed too unreliable or opaque to be used.