Archive: 2017-01

As part of an ongoing effort to keep you informed about our latest work, this blog post summarizes recently published books, SEI technical reports, podcasts, and webinars on topics including insider threat, the use of malware analysis to identify overlooked security requirements, software architecture, scaling Agile methods, best practices for preventing and responding to DDoS attacks, and a special report documenting the technical history of the SEI.

These publications highlight the latest work of SEI technologists in these areas. This post lists each publication and its author(s), along with a link to where it can be accessed on the SEI website.

Federal agencies and other organizations face an overwhelming security landscape. The arsenal available to these organizations for securing software includes static analysis tools, which search code for flaws, including those that could lead to software vulnerabilities. The sheer effort required by auditors and coders to triage the large number of potential code flaws typically identified by static analysis can hijack a software project's budget and schedule. Auditors need a tool to classify alerts and to prioritize some of them for manual analysis. As described in my first post in this series, I am leading a team on a research project in the SEI's CERT Division to use classification models to help analysts and coders prioritize which vulnerabilities to address. In this second post, I will detail our collaboration with three U.S. Department of Defense (DoD) organizations to field test our approach. Two of these organizations each conduct static analysis of approximately 100 million lines of code (MLOC) annually.

By Will Klieber
CERT Secure Coding Team

This blog post is co-authored by Will Snavely.

Finding violations of secure coding guidelines in source code is daunting, but fixing them is an even greater challenge. We are creating automated tools for source code transformation. Experience in examining software bugs reveals that many security-relevant bugs follow common patterns (which can be automatically detected) and that there are corresponding patterns for repair (which can be performed by automatic program transformation). For example, integer overflow in calculations related to array bounds or indices is almost always a bug. While static analysis tools can help, they typically produce an enormous number of warnings, and even after issues have been identified, teams are able to eliminate only a small percentage of them. As a result, code bases often contain an unknown number of security vulnerabilities. This blog post describes our research in automated code repair, which can eliminate security vulnerabilities much faster than the existing manual process and at a much lower cost. While this research focuses on the C programming language, it applies to other languages as well.
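To make the integer-overflow pattern concrete, the sketch below is a hypothetical before/after illustration of the kind of transformation such a repair tool might perform on an overflow-prone allocation size. It is an assumed example, not output from our tool.

    #include <stdint.h>
    #include <stdlib.h>

    /* Before repair: if count is very large, count * sizeof(int)
     * wraps around, and malloc returns a buffer smaller than the
     * caller expects -- a classic setup for out-of-bounds writes. */
    int *make_table(size_t count) {
        return malloc(count * sizeof(int));
    }

    /* After repair: the transformed code guards the multiplication,
     * refusing to allocate when the size computation would overflow. */
    int *make_table_repaired(size_t count) {
        if (count > SIZE_MAX / sizeof(int)) {
            return NULL;  /* size would wrap; fail safely instead */
        }
        return malloc(count * sizeof(int));
    }

Because the unsafe form follows a recognizable syntactic pattern, both the detection and the guarded rewrite lend themselves to automation.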

Many system and software developers and testers, especially those who have worked primarily in business information systems, assume that systems--even buggy systems--behave in a deterministic manner. In other words, they assume that a system or software application will always behave in exactly the same way when given identical inputs under identical conditions. This assumption, however, is not always true. While it fails most often in cyber-physical systems, both new and older technologies have introduced various sources of non-determinism, and this has significant ramifications for testing. This blog post, the first in a series, explores the challenges of testing in a non-deterministic world.
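As a minimal illustration (not taken from the original post), the following C program exhibits one common source of non-determinism: a data race. Given identical inputs and conditions, its output can differ from run to run because the threads' unsynchronized updates interleave differently each time.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared counter updated by two threads without synchronization. */
    static long counter = 0;

    static void *bump(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            counter++;  /* not atomic: load, add, and store can interleave */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, bump, NULL);
        pthread_create(&t2, NULL, bump, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        /* Often prints something other than 2000000, and a different
         * value on each run -- identical input, varying output. */
        printf("counter = %ld\n", counter);
        return 0;
    }

Compiled with, for example, gcc -pthread, repeated runs of this program typically print different totals, which is exactly the behavior that deterministic testing strategies fail to account for.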