SCALe Collection
• Collection
Publisher
Software Engineering Institute
Abstract
Experience shows that most software contains coding flaws that lead to vulnerabilities. Static analysis tools produce many alerts, some of which are false positives, and engineers must painstakingly examine them to find legitimate flaws. Researchers in the Software Engineering Institute’s (SEI’s) CERT Division have developed the Source Code Analysis Laboratory (SCALe) to help analysts be more efficient and effective at auditing source code for security flaws.
What Is SCALe?
SCALe is a set of tools and processes developed by the SEI to help organizations address common problems in auditing code.
While every static analyzer can be used to look for exploitable source code flaws, analyzers have different strengths. One critical issue is that no analyzer finds everything. Using multiple tools can increase coverage and find more security flaws, but many static analyzers provide their own interface for managing their own alerts, which complicates using multiple analyzers on the same codebase. A second critical issue is that all static analysis tools produce some false positives (and some checkers have particularly high false-positive rates), unless the tool incorporates a formal prover, which is rare and, even when present, is only as good as the formally stated claims. These false positives make it necessary to adjudicate alerts as true or false. The traditional approach is manual adjudication, a process that requires expertise in the coding language, an understanding of the code flaw taxonomy, and tracing of the code (data flow, control flow, variable types, etc.); this is expensive and time consuming.
SCALe—which has been used to analyze software for the Department of Defense (DoD), energy delivery systems, medical devices, and more—provides smart methods and tools to automate alert adjudication and related work. For example, we have developed tools that incorporate novel algorithms, DevSecOps integrations, and ways to use artificial intelligence (AI) (e.g., large language models [LLMs] and machine learning [ML]). To increase our tools’ impact, some are released free of charge and open source. SCALe tools and processes, some of which are detailed below, help analysts and developers efficiently find and fix code flaws, thereby reducing exploitable bugs and malfunctions in code.
SCALe Auditing Framework
The SCALe auditing framework aggregates output from commercial, open source, and experimental analysis tools. It maps alerts about possible code flaws from code analysis tools to code flaw taxonomies of interest (e.g., CERT Secure Coding Rules and CWE). It provides a graphical interface that an analyst can use to filter, fuse, and prioritize alerts as well as examine code associated with an alert. The analyst can also mark alert adjudications (e.g., true or false) and store or export data for the audit project. Some static analysis tool output formats (including the SARIF standard format) are already integrated with the SCALe tools; the SCALe user manual explains the simple API enabling users to integrate new tools.
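As a rough illustration (not the SCALe implementation itself), the following Python sketch shows the kind of normalization described above: reading alerts from a SARIF file and mapping each tool-specific checker ID to taxonomy entries. The checker-to-taxonomy table, field names, and Alert record are hypothetical simplifications.

```python
# A minimal sketch (not the actual SCALe implementation) of the normalization
# step described above: read alerts from a SARIF file and map each tool-specific
# checker ID to taxonomy entries. The checker-to-taxonomy table is hypothetical.
import json
from dataclasses import dataclass
from typing import Optional

# Hypothetical mapping from checker IDs to CERT rule / CWE identifiers; a real
# deployment would load curated mappings for each supported analysis tool.
CHECKER_TO_TAXONOMY = {
    "null-dereference": {"cert_rule": "EXP34-C", "cwe": "CWE-476"},
    "buffer-overflow": {"cert_rule": "ARR30-C", "cwe": "CWE-119"},
}

@dataclass
class Alert:
    tool: str
    checker: str
    file: str
    line: int
    message: str
    cert_rule: Optional[str]
    cwe: Optional[str]

def load_sarif_alerts(path: str) -> list[Alert]:
    """Flatten SARIF results into normalized, taxonomy-mapped alert records."""
    with open(path) as f:
        sarif = json.load(f)
    alerts = []
    for run in sarif.get("runs", []):
        tool = run["tool"]["driver"]["name"]
        for result in run.get("results", []):
            locations = result.get("locations", [])
            phys = locations[0].get("physicalLocation", {}) if locations else {}
            mapping = CHECKER_TO_TAXONOMY.get(result.get("ruleId", ""), {})
            alerts.append(Alert(
                tool=tool,
                checker=result.get("ruleId", ""),
                file=phys.get("artifactLocation", {}).get("uri", ""),
                line=phys.get("region", {}).get("startLine", 0),
                message=result.get("message", {}).get("text", ""),
                cert_rule=mapping.get("cert_rule"),
                cwe=mapping.get("cwe"),
            ))
    return alerts
```

Normalizing alerts into a common record like this is what allows output from many different analyzers to be filtered, fused, and prioritized in a single interface.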
We provide the SCALe auditing framework tools to many DoD organizations and some non-DoD organizations for their use in evaluating their source code for adherence to secure coding standards. We provide services to help organizations adopt the SCALe auditing framework to improve their secure development lifecycle practices.
SCALe Research Prototype
We create SCALe research prototypes by adding new, experimental functionality to the SCALe auditing framework and processes. For example, a research project may use different rules for determining which alerts to audit or a different alert determination lexicon. These prototypes may be distributed to collaborators during a project, and we often integrate innovative technologies and processes from the prototypes into SCALe. For example, the SEI’s SCAIFE research focused on novel uses of AI for static analysis, enhancing SCALe tools and integrating them into a modular, API-defined framework for continuous integration (CI) systems.
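To make the idea of an alert determination lexicon and auditing rules more concrete, the following Python sketch paraphrases the determination values from the SEI’s published lexicon and applies a made-up alert selection rule; the field names and the rule itself are assumptions for illustration, not SCALe behavior.

```python
# An illustrative sketch of how a research prototype might encode an alert
# determination lexicon and a simple alert selection rule. The determination
# values paraphrase the SEI's published lexicon; the selection rule and field
# names are invented for illustration.
from enum import Enum

class Determination(Enum):
    TRUE = "true"            # the alert correctly reports a rule violation
    FALSE = "false"          # the alert does not indicate a violation
    COMPLEX = "complex"      # too costly to adjudicate within the audit's limits
    DEPENDENT = "dependent"  # depends on the determination of another alert
    UNKNOWN = "unknown"      # not yet audited

def next_alerts_to_audit(alerts, limit=10):
    """Example rule: audit unaudited alerts first, highest-risk checkers first."""
    pending = [a for a in alerts if a["determination"] is Determination.UNKNOWN]
    return sorted(pending, key=lambda a: a.get("risk", 0), reverse=True)[:limit]
```

A shared lexicon like this lets multiple auditors, and automated classifiers, record determinations consistently so that audit data from different projects can be compared and reused.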
SCALe Conformance Testing
SCALe conformance testing provides organizations with an evaluation of their source code for adherence to secure coding standards. We use the SCALe auditing framework and commercial, open source, and experimental analysis tools to provide this service. For each CERT secure coding standard, the source code for the software is certified at a level of conformance.
The SCALe Conformance Process
Conformance testing motivates organizations to invest in developing conforming systems by testing code against CERT secure coding standards, verifying that the code conforms with those standards, using the CERT seal, and maintaining a certificate registry of conforming systems. When you request SCALe conformance testing, the following process is initiated:
- You submit your source code for analysis.
- CERT staff examines the code using analyzer tools.
- CERT staff validates and summarizes the results.
- You receive a detailed report of findings to guide your repair of the source code.
- You address the identified violations and resubmit the repaired code.
- CERT staff reassesses the code to ensure that you have mitigated all violations properly.
- Your certification for that version of the product is published in a registry of certified systems.
The CERT SCALe Seal
If CERT SCALe conformance testing determines that your software conforms to a secure coding standard, you may use the CERT SCALe seal. The seal must be specifically tied to the software that passed conformance testing and must not be applied to untested products or to the organization. Use of the CERT SCALe seal is contingent upon (1) the organization entering into a service agreement with Carnegie Mellon University and (2) the software being designated by the CERT Division as conforming. With some exceptions, modifications made to software after it is designated as conforming void the conformance designation.
Related Research
At the SEI, we are conducting research on related topics, including adjudication assisted by AI (LLMs, ML, etc.) and automated program repair. Our research in AI, specifically alert classification and prioritization, is intended to help organizations secure their code more efficiently by using statistical methods to triage and prioritize static analysis alerts. Our research has demonstrated that LLM-assisted adjudication can provide step-by-step reasoning that a human can quickly follow to validate an automated adjudication. We have also used LLMs to provide examples that demonstrate a flaw’s existence. Our automated program repair research develops automated patches that can be used to eliminate flaws during development and/or as part of security reviews, resulting in fewer static analysis alerts.
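As a rough illustration of statistical alert triage (not the SEI’s actual classifier), the following Python sketch trains a simple model on hypothetical, human-audited determinations and ranks new alerts by their predicted probability of being true positives; all feature names and data are invented.

```python
# A minimal sketch of statistical alert triage (not the SEI's classifier):
# train on hypothetical, human-audited determinations and rank new alerts by
# their predicted probability of being true positives. All feature names and
# data below are invented for illustration.
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression

# Features of previously audited alerts, labeled 1 (true) or 0 (false).
audited = [
    ({"checker": "null-dereference", "func_len": 120, "depth": 4}, 1),
    ({"checker": "null-dereference", "func_len": 15, "depth": 1}, 0),
    ({"checker": "buffer-overflow", "func_len": 300, "depth": 6}, 1),
    ({"checker": "buffer-overflow", "func_len": 40, "depth": 2}, 0),
]

vec = DictVectorizer()
X = vec.fit_transform([features for features, _ in audited])
y = [label for _, label in audited]
model = LogisticRegression().fit(X, y)

# Score new, unaudited alerts and audit the most likely true positives first.
new_alerts = [
    {"checker": "buffer-overflow", "func_len": 250, "depth": 5},
    {"checker": "null-dereference", "func_len": 20, "depth": 1},
]
scores = model.predict_proba(vec.transform(new_alerts))[:, 1]
ranked = sorted(zip(scores, new_alerts), key=lambda pair: pair[0], reverse=True)
for score, alert in ranked:
    print(f"{score:.2f}  {alert}")
```

Even a simple ranking like this can concentrate auditor effort on the alerts most likely to be real flaws, which is the goal of our classification and prioritization work.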
Collection Items

Improve Your Static Analysis Audits Using CERT SCALe’s New Features
• Webcast
By Lori Flynn
In this webcast, Lori Flynn, a CERT senior software security researcher, describes the new features in SCALe v3, a research prototype tool.
Watch
Improve Your Static Analysis Audits Using CERT SCALe’s New Features
• Presentation
By Lori Flynn
Learn how to become a research project collaborator for SCALe v3.
Learn More
Hands-On Tutorial: Auditing Static Analysis Alerts Using a Lexicon and Rules
• Presentation
By Lori Flynn, David Svoboda, William Snavely
In this tutorial, SEI researchers describe auditing rules and a lexicon that SEI developed.
Learn More
SCALe: Evaluating Source Code for Adherence to Secure Coding Standards
• Brochure
By Software Engineering Institute
SCALe helps analysts be more efficient and effective at auditing source code for security flaws.
Learn More
Static Analysis Alert Audits: Lexicon & Rules
• Conference Paper
By David Svoboda, Lori Flynn, William Snavely
In this paper, the authors provide a suggested set of auditing rules and a lexicon for auditing static analysis alerts.
Read
Release of SCAIFE System Version 2.0.0 Provides Support for Continuous-Integration (CI) Systems
• Blog Post
By Lori Flynn
The Source Code Analysis Integrated Framework Environment (SCAIFE) system is a research prototype for a modular architecture. The architecture is designed to enable a wide variety of tools, systems, and …
Read
SCAIFE and ACR: Static Analysis Classification and Automated Code Repair
• Presentation
By Lori Flynn, William Klieber
Flynn and Klieber describe their research and concept for a combined system for static analysis classification and automated code repair.
Learn More
Managing Static Analysis Alerts with Efficient Instantiation of the SCAIFE API into Code and an Automatically Classifying System
• Blog Post
By Lori Flynn
Static analysis tools analyze code without executing it to identify potential flaws in source code. Since alerts may be false positives, engineers must painstakingly examine them to adjudicate if they …
Read
An Application Programming Interface for Classifying and Prioritizing Static Analysis Alerts
• Blog Post
By Lori Flynn, Ebonie McNeil
In this post, we describe the Source Code Analysis Integrated Framework Environment (SCAIFE) application programming interface (API). SCAIFE is an architecture for classifying and prioritizing static analysis alerts. It is …
Read
Release of SCAIFE System Version 1.0.0 Provides Full GUI-Based Static-Analysis Adjudication System with Meta-Alert Classification
• Blog Post
By Lori Flynn
The SEI Source Code Analysis Integrated Framework Environment (SCAIFE) is a modular architecture designed to enable a wide variety of tools, systems, and users to use artificial intelligence (AI) classifiers …
Read
Integration of Automated Static Analysis Alert Classification and Prioritization with Auditing Tools: Special Focus on SCALe
• Technical Report
By Lori Flynn, Ebonie McNeil, David Svoboda, Derek Leung, Zachary Kurtz, Jiyeon Lee (Carnegie Mellon University)
This report summarizes progress and plans for developing a system to perform automated classification and advanced prioritization of static analysis alerts.
Read
SCALe v. 3: Automated Classification and Advanced Prioritization of Static Analysis Alerts
• Blog Post
By Lori Flynn, Ebonie McNeil
Static analysis tools analyze code without executing it, to identify potential flaws in source code. These tools produce a large number of alerts with high false-positive rates that an engineer …
Read
SCALe Analysis of JasPer Codebase
• White Paper
By David Svoboda
In this paper, David Svoboda provides the findings of a SCALe audit on a codebase.
Read
Improving the Automated Detection and Analysis of Secure Coding Violations
• Technical Note
By Daniel Plakosh, Robert C. Seacord, Robert W. Stoddard, David Svoboda, David Zubrow
This technical note describes the accuracy analysis of the Source Code Analysis Laboratory (SCALe) tools and the characteristics of flagged coding violations.
Read
Source Code Analysis Laboratory (SCALe)
• Webcast
By Robert C. Seacord
In this webinar, Robert Seacord discusses SCALe, a demonstration that software systems can be tested for conformance to secure coding standards.
Watch
Supporting the Use of CERT Secure Coding Standards in DoD Acquisitions
• Technical Note
By Timothy Morrow, Robert C. Seacord, John K. Bergey, Philip Miller
In this report, the authors provide guidance for helping DoD acquisition programs address software security in acquisitions.
Read
Source Code Analysis Laboratory (SCALe)
• Technical Note
By Robert C. Seacord, Will Dormann, James McCurley, Philip Miller, Robert W. Stoddard, David Svoboda, Jefferson Welch
In this report, the authors describe the CERT Program's Source Code Analysis Laboratory (SCALe), a conformance test against secure coding standards.
Read
SEI CERT Coding Standards Wiki
• Handbook
By Software Engineering Institute
This wiki supports the development of coding standards for commonly used programming languages such as C, C++, Java, and Perl, and the Android™ platform.
Read
Source Code Analysis Laboratory (SCALe) for Energy Delivery Systems
• Technical Report
By Robert C. Seacord, Will Dormann, James McCurley, Philip Miller, Robert W. Stoddard, David Svoboda, Jefferson Welch
In this report, the authors describe the Source Code Analysis Laboratory (SCALe), which tests software for conformance to CERT secure coding standards.
Read