
Explainable Verification: Survey, Situations, and New Ideas

White Paper
This report focuses on potential changes in software development practice and research that would help tools used for formal methods explain their output, making software practitioners more likely to trust them.
Publisher

Software Engineering Institute

Abstract

Many software-intensive systems in current and future application domains require (or will require) approval from a certification authority before they can be deployed. Formal methods (FMs) verify properties of software by proving them mathematically rather than by running tests, and they have the potential to support certification. Unfortunately, FMs are rarely used today because software practitioners understand neither how FM tools work internally nor the underlying theories on which these tools depend. Since it is hard to trust something that you do not understand and have never used, software practitioners tend not to trust FM tools. If FM tools could explain their output, however, practitioners would be more likely to trust them, which could bring two benefits: (i) faster fielding, because less testing would be needed, and (ii) improved safety, because of better coverage.

Therefore, this report focuses on the potential changes in software development practice, and the research, needed to realize these benefits. Specifically, it discusses explanations in general, the ways explanations can be used in different types of verification, and ideas for creating explanations for FMs.