In our work with the Department of Defense (DoD) and other government agencies, such as the U.S. Department of Veterans Affairs and the U.S. Department of the Treasury, we often encounter organizations that have been asked by their government program office to adopt agile methods. These organizations have traditionally used a "waterfall" life-cycle model (as epitomized by the engineering "V" charts) and are accustomed to being managed through a series of document-centric technical reviews. Those reviews focus on the evolution of the artifacts that describe the system's requirements and design, rather than on its evolving implementation, as is more common with agile methods.
Software is the principal enabling means for delivering system and warfighter performance across a spectrum of Department of Defense (DoD) capabilities. These capabilities range from mission-essential business systems to mission-critical command, control, communications, computers, intelligence, surveillance, and reconnaissance (C4ISR) systems to complex weapon systems. Many of these systems now operate interdependently in a complex net-centric and cyber environment. The pace of technological change continues to accelerate, as does the near-total reliance of these systems on software. This blog posting examines the challenges that the DoD faces in implementing software assurance and suggests strategies for an enterprise-wide approach.
From the braking system in your automobile to the software that controls the aircraft that you fly in, safety-critical systems are ubiquitous. Showing that such systems meet their safety requirements has become a critical area of work for software and systems engineers. "We live in a world in which our safety depends on software-intensive systems," editors of IEEE Software wrote in the magazine's May/June issue. "Organizations everywhere are struggling to find cost-effective methods to deal with the enormous increase in size and complexity of these systems, while simultaneously respecting the need to ensure their safety." The Carnegie Mellon Software Engineering Institute (SEI) is addressing this issue with a significant research program into assurance cases. Our sponsors regularly face the challenge of assuring that complex software-based systems meet requirements for qualities such as safety, security, and reliability. In this post, the first in a series on assurance cases and confidence, I will introduce the concept of assurance cases and show how they can be used to argue that a safety requirement (or other requirement, such as security) has been met.
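At its core, an assurance case is a structured argument: a top-level claim, decomposed into subclaims, each ultimately grounded in evidence. The following sketch models that structure in Python; the claims, evidence items, and the `supported` check are hypothetical illustrations of the idea, not the SEI's notation or method.

```python
# Minimal sketch of an assurance case as a claim/subclaim/evidence tree.
# All claim text and evidence names below are hypothetical examples.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Claim:
    statement: str                                        # the assertion being argued
    evidence: List[str] = field(default_factory=list)     # direct evidence items
    subclaims: List["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim is supported if it cites direct evidence,
        or if it has subclaims and every one of them is supported."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

# Hypothetical safety claim for an automotive braking system
top = Claim(
    "The braking system always responds within 100 ms of pedal input",
    subclaims=[
        Claim("The controller meets its timing deadline",
              evidence=["worst-case execution-time analysis report"]),
        Claim("Sensor faults are detected and handled",
              evidence=["fault-injection test results"]),
    ],
)
print(top.supported())  # True: every leaf subclaim carries evidence
```

Real assurance case notations (such as structured safety cases) add explicit argument nodes, context, and assumptions between claims and evidence; the tree above only conveys the decomposition idea.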
Researchers on the CERT Division's insider threat team have presented several of the 26 patterns identified by analyzing our insider threat database, which is based on examinations of more than 700 insider threat cases and interviews with the United States Secret Service, victims' organizations, and convicted felons. Through our analysis, we identified more than 100 categories of weaknesses in systems, processes, people, or technologies that allowed insider threats to occur. One aspect of our research focuses on identifying enterprise architecture patterns that organizations can use to protect their systems from malicious insider activity. Now that we've developed 26 patterns, our next priority is to assemble these patterns into a pattern language that organizations can use to bolster their resources and make them more resilient against insider threats. This blog post is the third installment in a series that describes our research to create and validate an insider threat mitigation pattern language to help organizations balance the cost of security controls with the risk of insider compromise.
In 2012, Symantec blocked more than 5.5 billion malware attacks (an 81 percent increase over 2010) and reported a 41 percent increase in new variants of malware, according to a January 2013 Computerworld article. To prevent detection and delay analysis, malware authors often obfuscate their malicious programs with anti-analysis measures. Obfuscated binary code prevents analysts from developing timely, actionable insights by increasing code complexity and reducing the effectiveness of existing tools. This blog post describes research we are conducting at the SEI to improve manual and automated analysis of common code obfuscation techniques used in malware.
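To make the idea of obfuscation concrete, here is a minimal illustration of one widely seen anti-analysis technique: XOR-based string obfuscation. The key and strings below are invented for illustration; the point is only that encoded strings do not show up in a static string scan of a binary and are recovered solely at run time.

```python
# Illustrative XOR string obfuscation (hypothetical key and strings).
# Malware commonly stores strings encoded so static tools that scan for
# readable text find nothing; the code decodes them only when needed.

KEY = 0x5A  # hypothetical single-byte key

def obfuscate(s: str) -> bytes:
    """Encode a string by XORing each byte with the key."""
    return bytes(b ^ KEY for b in s.encode())

def deobfuscate(blob: bytes) -> str:
    """XOR is its own inverse, so applying the key again recovers the text."""
    return bytes(b ^ KEY for b in blob).decode()

encoded = obfuscate("connect-to-c2.example")
assert b"example" not in encoded   # plaintext is invisible to a string scan
print(deobfuscate(encoded))        # prints "connect-to-c2.example" at run time
```

Real malware layers far stronger measures on top of this (packing, control-flow flattening, anti-debugging), which is what makes the analysis research described above necessary.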
When life- and safety-critical systems fail, the results can be dire, including loss of property and life. These types of systems are increasingly prevalent and can be found in the attitude and control systems of a satellite, the software-reliant systems of a car (such as its cruise control and GPS), or a medical device. When developing such systems, software and systems architects must balance the need for stability and safety with stakeholder demands and time-to-market constraints. The Architecture Analysis & Design Language (AADL) helps software and system architects address the challenges of designing life- and safety-critical systems by providing a modeling notation with both textual and graphical representations. This blog posting, part of an ongoing series on AADL, describes how AADL is being used in medical devices and highlights the experiences of a practitioner whose research aims to address problems with medical infusion pumps.
Soldiers and emergency workers who carry smartphones on the battlefield or into disaster recovery sites (such as Boston following the marathon bombing earlier this year) often encounter environments characterized by high mobility, rapidly changing mission requirements, limited computing resources, high levels of stress, and limited network connectivity. At the SEI, we refer to these situations as "edge environments." Along with my colleagues in the SEI's Advanced Mobile Systems Initiative, my research aims to increase the computing power of mobile devices in edge environments where resources are scarce. One area of my work has focused on leveraging cloud computing so users can extend the capabilities of their mobile devices by offloading expensive computations to more powerful computing resources in a cloud. Offloading computation to the cloud in resource-constrained environments has drawbacks, however, including latency (which can be exacerbated by the distance between mobile devices and clouds) and limited internet access (which makes traditional cloud computing infeasible). This blog post is the latest in a series that describes research aimed at exploring the applicability of application virtualization as a strategy for cyber-foraging in resource-constrained environments.
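The trade-off above can be sketched as a simple offload decision: ship a computation to a nearby surrogate only when the link is up and the round trip plus remote execution beats running it on the device. The cost model, function names, and thresholds below are illustrative assumptions, not the actual algorithm used in this research.

```python
# Hedged sketch of a cyber-foraging offload decision (illustrative model only).
def should_offload(local_ms: float, remote_ms: float,
                   network_rtt_ms: float, connected: bool) -> bool:
    """Offload only if the network is available and the round-trip time
    plus remote compute time beats on-device execution."""
    if not connected:
        # No connectivity: traditional cloud offloading is infeasible,
        # so the device must compute locally.
        return False
    return network_rtt_ms + remote_ms < local_ms

# Disconnected edge environment: must compute locally
print(should_offload(local_ms=500, remote_ms=50, network_rtt_ms=40,
                     connected=False))   # False
# Connected to a nearby surrogate: offloading wins (40 + 50 < 500)
print(should_offload(local_ms=500, remote_ms=50, network_rtt_ms=40,
                     connected=True))    # True
```

In practice, such a decision would also weigh energy consumption, data-transfer size, and the risk of losing the link mid-computation, which is part of what makes edge environments hard.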