Category: Autonomy and Counter-Autonomy

Federal agencies and other organizations face an overwhelming security landscape. The arsenal available to these organizations for securing software includes static analysis tools, which search code for flaws, including those that could lead to software vulnerabilities. The sheer effort required by auditors and coders to triage the large number of potential code flaws typically identified by static analysis can hijack a software project's budget and schedule. Auditors need a tool to classify alerts and to prioritize some of them for manual analysis. As described in my first post in this series, I am leading a team on a research project in the SEI's CERT Division to use classification models to help analysts and coders prioritize which vulnerabilities to address. In this second post, I will detail our collaboration with three U.S. Department of Defense (DoD) organizations to field test our approach. Two of these organizations each conduct static analysis of approximately 100 million lines of code (MLOC) annually.
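As a rough, hypothetical sketch of classification-based prioritization (the features, weights, and checker IDs below are illustrative assumptions, not our project's actual model), the following C program scores each alert with a small logistic model and sorts the alerts so that those most likely to be true positives are audited first:

```c
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical per-alert features; a real model would use many more. */
struct alert {
    const char *checker;   /* rule that fired                          */
    double historical_tp;  /* past true-positive rate of this checker  */
    double code_churn;     /* recent change activity in the file       */
    double complexity;     /* complexity of the enclosing function     */
    double score;          /* predicted probability of a true positive */
};

/* Illustrative learned weights: bias, then one weight per feature. */
static const double W[4] = { -1.5, 2.0, 0.8, 0.4 };

static double predict(const struct alert *a)
{
    double z = W[0] + W[1] * a->historical_tp
                    + W[2] * a->code_churn
                    + W[3] * a->complexity;
    return 1.0 / (1.0 + exp(-z));               /* logistic model */
}

static int by_score_desc(const void *x, const void *y)
{
    const struct alert *a = x, *b = y;
    return (a->score < b->score) - (a->score > b->score);
}

int main(void)
{
    struct alert alerts[] = {
        { "INT31-C", 0.70, 0.9, 0.6, 0.0 },
        { "EXP34-C", 0.20, 0.1, 0.3, 0.0 },
        { "ARR30-C", 0.55, 0.4, 0.8, 0.0 },
    };
    size_t n = sizeof alerts / sizeof alerts[0];

    for (size_t i = 0; i < n; i++)
        alerts[i].score = predict(&alerts[i]);

    qsort(alerts, n, sizeof alerts[0], by_score_desc);  /* audit highest first */
    for (size_t i = 0; i < n; i++)
        printf("%-8s  p(true positive) = %.2f\n",
               alerts[i].checker, alerts[i].score);
    return 0;
}
```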

By Will Klieber
CERT Secure Coding Team

This blog post is co-authored by Will Snavely.

Finding violations of secure coding guidelines in source code is daunting, but fixing them is an even greater challenge. We are creating automated tools for source code transformation. Experience in examining software bugs shows that many security-relevant bugs follow common patterns, which can be automatically detected, and that there are corresponding repair patterns, which can be applied by automatic program transformation. For example, integer overflow in calculations related to array bounds or indices is almost always a bug. Static analysis tools can help find such flaws, but they typically produce an enormous number of warnings, and once issues have been identified, teams are able to eliminate only a small percentage of them. As a result, code bases often contain an unknown number of security vulnerabilities. This blog post describes our research in automated code repair, which can eliminate security vulnerabilities much faster and at a much lower cost than the existing manual process. While this research focuses on the C programming language, it applies to other languages as well.
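As a hand-written illustration of the kind of before-and-after transformation described above (not actual output of our repair tool, and with hypothetical function names), the sketch below shows an allocation-size calculation that can overflow and the guard that an automated repair could insert before the multiplication:

```c
#include <stdint.h>
#include <stdlib.h>

/* Original, vulnerable pattern: on overflow, count * sizeof(int) wraps to a
 * small value, so the loop below writes past the end of the allocation. */
int *make_table_unsafe(size_t count)
{
    int *table = malloc(count * sizeof(int));   /* multiplication may wrap */
    if (table == NULL)
        return NULL;
    for (size_t i = 0; i < count; i++)          /* out-of-bounds writes when */
        table[i] = 0;                           /* the size computation wrapped */
    return table;
}

/* Repaired pattern: a guard inserted before the multiplication refuses
 * requests that would overflow, eliminating the out-of-bounds writes. */
int *make_table_safe(size_t count)
{
    if (count > SIZE_MAX / sizeof(int))
        return NULL;
    int *table = malloc(count * sizeof(int));
    if (table == NULL)
        return NULL;
    for (size_t i = 0; i < count; i++)
        table[i] = 0;
    return table;
}
```

The repaired version behaves identically whenever the original computation would not have overflowed, which is the kind of behavior preservation an automated repair aims for.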

The field of robotics has grown and changed tremendously in the last 15 years, due in large part to improvements in sensors and computational power. These sensors give robots an awareness of their environment, including conditions such as light, touch, navigation, location, distance, proximity, sound, temperature, and humidity. The increasing ability of robots to sense their environments makes them an invaluable resource in a growing number of situations, from underwater exploration to hospital and airport assistance to space walks. One challenge, however, is that users remain uncertain about what the robot senses, what it predicts about its own state and the states of other objects and people in the environment, and what outcomes it expects from the actions it takes. In this blog post, I describe research that aims to help robots explain their behaviors in plain English and offer greater insight into their decision making.

This post was co-authored by Sagar Chaki.

In 2011, the U.S. Government maintained a fleet of approximately 8,000 unmanned aerial systems (UAS), commonly referred to as "drones," a number that continues to grow. "No weapon system has had a more profound impact on the United States' ability to provide persistence on the battlefield than the UAVs," according to a 2012 report from the Defense Science Board. Ensuring that government and privately owned drones share international airspace safely and effectively is a top priority for government officials. Distributed Adaptive Real-Time (DART) systems are key to many areas of Department of Defense (DoD) capability, including the safe execution of autonomous, multi-UAS missions with civilian benefits. DART systems promise to revolutionize several areas of mutual civilian-DoD interest, such as robotics, transportation, energy, and health care. To fully realize the potential of DART systems, however, the software controlling them must be engineered for high assurance and certified to operate safely and effectively. In short, these systems must satisfy guaranteed, highly critical safety requirements (e.g., collision avoidance) while adapting intelligently to application requirements, such as protection coverage, in dynamic and uncertain environments. This blog post describes our architecture and approach to engineering high-assurance software for DART systems.
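As a minimal, hypothetical sketch of that idea (this is not the DART architecture itself; the state representation, thresholds, and function names are all illustrative), the code below gates a proposed adaptive maneuver behind a simple collision-avoidance predicate and falls back to a conservative command when the predicate fails:

```c
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

/* Hypothetical 2-D state of a UAS: position in meters, velocity in m/s. */
struct uas_state { double x, y, vx, vy; };

#define MIN_SEPARATION_M 50.0   /* illustrative safety threshold  */
#define HORIZON_S         5.0   /* illustrative look-ahead window */

/* Safety predicate: do two vehicles stay at least MIN_SEPARATION_M apart
 * over a short look-ahead, assuming straight-line motion? In a
 * high-assurance system, a check like this would be formally verified. */
static bool separation_maintained(struct uas_state a, struct uas_state b)
{
    for (double t = 0.0; t <= HORIZON_S; t += 0.5) {
        double dx = (a.x + a.vx * t) - (b.x + b.vx * t);
        double dy = (a.y + a.vy * t) - (b.y + b.vy * t);
        if (sqrt(dx * dx + dy * dy) < MIN_SEPARATION_M)
            return false;
    }
    return true;
}

/* Accept the adaptive command only if the safety requirement still holds;
 * otherwise fall back to a conservative default (e.g., loiter). */
static struct uas_state choose_command(struct uas_state proposed,
                                       struct uas_state fallback,
                                       struct uas_state other_vehicle)
{
    return separation_maintained(proposed, other_vehicle) ? proposed : fallback;
}

int main(void)
{
    struct uas_state other    = { 200.0, 0.0, -20.0, 0.0 };  /* oncoming      */
    struct uas_state proposed = {   0.0, 0.0,  25.0, 0.0 };  /* closes fast   */
    struct uas_state fallback = {   0.0, 0.0,   0.0, 0.0 };  /* hold position */

    struct uas_state cmd = choose_command(proposed, fallback, other);
    printf("commanded velocity: (%.1f, %.1f) m/s\n", cmd.vx, cmd.vy);
    return 0;
}
```

The adaptive layer is free to propose whatever maneuver best meets the mission requirements; the gate ensures that only commands satisfying the safety predicate are ever executed.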