
Using LLMs to Automate Static-Analysis Adjudication and Rationales

Article
This article discusses a model for using large language models (LLMs) to handle static analysis output.
Publisher

CrossTalk

Abstract

This is a pre-publication version of the article that has been accepted for publication in the August 2024 edition of “CrossTalk: The Journal of Defense Software Engineering.”

Software vulnerabilities are a serious concern for the Department of Defense (DoD). Software analysts use static analysis as a standard method to evaluate source code, but the volume of findings is often too large to review in its entirety, forcing the DoD to accept unknown risk. Large language models (LLMs) are a new technology that shows promising initial results for automating alert adjudication and rationales. This has the potential to enable more secure code, better risk measurement, improved mission effectiveness, and reduced DoD costs. This article discusses our model for using LLMs to handle static-analysis output, the initial tooling we developed and our experimental results, related work by others, and additional work needed. Beyond static-analysis alert adjudication, similar techniques can be used to create LLM-based tools for other code analysis tasks.
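
To make the general idea concrete, the sketch below shows one way a single adjudication step might look: an alert and its surrounding code are packaged into a prompt, and the LLM is asked for a verdict plus a rationale. This is an illustration only, not the tooling described in the article; the OpenAI client, the Alert fields, the prompt wording, and the adjudicate_alert function are all assumptions.

```python
# A minimal sketch of the general idea (not the authors' actual tooling):
# send one static-analysis alert plus its code context to an LLM and ask
# for an adjudication and a rationale. The Alert fields, prompt wording,
# and function name are illustrative; the model choice is a placeholder.
import json
from dataclasses import dataclass

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@dataclass
class Alert:
    tool: str     # e.g., "cppcheck"
    rule: str     # e.g., "CWE-476: NULL pointer dereference"
    path: str
    line: int
    message: str


def adjudicate_alert(alert: Alert, code_context: str) -> dict:
    """Ask the LLM for a true/false-positive verdict and a rationale."""
    prompt = (
        "You are adjudicating a static-analysis alert.\n"
        f"Tool: {alert.tool}\nRule: {alert.rule}\n"
        f"Location: {alert.path}:{alert.line}\n"
        f"Message: {alert.message}\n\n"
        "Relevant source code:\n"
        f"{code_context}\n\n"
        "Respond with a JSON object with two keys: "
        '"verdict" ("true_positive" or "false_positive") and '
        '"rationale" (a one-paragraph justification).'
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; the article does not name a model
        response_format={"type": "json_object"},  # request parseable JSON
        messages=[{"role": "user", "content": prompt}],
    )
    return json.loads(response.choices[0].message.content)
```

In practice, a pipeline along these lines would loop over the analyzer's full output, attach each verdict and rationale to the original finding, and surface the results for analyst review rather than acting on them automatically.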