Archive: 2012-01

As part of our mission to advance the practice of software engineering and cybersecurity through research and technology transition, our work focuses on ensuring that software-reliant Department of Defense (DoD) systems are developed and operated with predictable and improved quality, schedule, and cost. To achieve this mission, the SEI conducts research and development (R&D) activities involving the DoD, federal agencies, industry, and academia. This blog posting looks back on 2012 and highlights many of our R&D accomplishments.

Occasionally this blog will highlight different posts from the SEI blogosphere. Today's post by Paulo Merson, a senior member of the technical staff in the SEI's Research, Technology, and System Solutions Program, is from the SATURN Network blog. This post explores Merson's experience using Checkstyle and pre-commit hooks on Subversion to verify the conformance between code and architecture.
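
To make the mechanics concrete, here is a minimal sketch (in Python) of the kind of Subversion pre-commit hook the post describes: it runs Checkstyle against the Java files in the in-flight commit and rejects the commit on a violation. The jar location, rule-file path, and output parsing are assumptions for illustration, not details from Merson's post.

```python
#!/usr/bin/env python
"""Subversion pre-commit hook (illustrative sketch): reject commits whose
Java files violate the Checkstyle rules that encode architectural
constraints. Paths below are assumptions; adjust for your server."""
import subprocess
import sys
import tempfile

CHECKSTYLE_JAR = "/opt/checkstyle/checkstyle-all.jar"    # assumed location
RULES = "/opt/checkstyle/architecture-rules.xml"         # assumed rule file

def main(repos, txn):
    # List the files touched by the in-flight transaction.
    changed = subprocess.run(
        ["svnlook", "changed", "-t", txn, repos],
        capture_output=True, text=True, check=True).stdout
    for line in changed.splitlines():
        status, _, path = line.partition(" ")
        path = path.strip()
        if status.startswith(("A", "U")) and path.endswith(".java"):
            # Materialize the new file content and run Checkstyle on it.
            content = subprocess.run(
                ["svnlook", "cat", "-t", txn, repos, path],
                capture_output=True, text=True, check=True).stdout
            with tempfile.NamedTemporaryFile("w", suffix=".java") as tmp:
                tmp.write(content)
                tmp.flush()
                check = subprocess.run(
                    ["java", "-jar", CHECKSTYLE_JAR, "-c", RULES, tmp.name])
                if check.returncode != 0:
                    # stderr is echoed back to the committing developer.
                    sys.stderr.write("Checkstyle violation in %s\n" % path)
                    return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1], sys.argv[2]))
```

In practice, the architectural constraints themselves would live in the Checkstyle configuration, for example as import-control rules that forbid dependencies violating the intended layering.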

The majority of research in cyber security focuses on incident response or network defense, either trying to keep the bad guys out or facilitating the isolation and clean-up when a computer is compromised. It's hard to find a technology website that's not touting articles on fielding better firewalls, patching operating systems, updating anti-virus signatures, and a slew of other technologies to help detect or block malicious actors from getting on your network. What's missing from this picture is a proactive understanding of who the threats are and how they intend to use the cyber domain to get what they want. Our team of researchers--which included Andrew Mellinger, Melissa Ludwick, Jay McAllister, and Kate Ambrose Sereno--sought to help organizations bolster their cyber security posture by leveraging best practices in methodologies and technologies that provide a greater understanding of potential risks and threats in the cyber domain. This blog posting describes how we are approaching this challenge and what we have discovered thus far.

Sabotage of IT systems by employees (the so-called "insider threat") is a serious problem facing many companies today. Not only can data or computing systems be damaged, but outward-facing systems can be compromised to such an extent that customers cannot access an organization's resources or products. Previous blog postings on the topic of insider threat have discussed mitigation patterns, controls that help identify insiders at risk of committing cyber crime, and the protection of next-generation DoD enterprise systems against insider threats through the capture, validation, and application of enterprise architectural patterns. This blog post describes our latest research in determining the indicators that insiders might demonstrate prior to attacks.

It is widely recognized today that software architecture serves as the blueprint for both the system and the project developing it, defining the work assignments that must be performed by design and implementation teams. Architecture is the primary purveyor of system quality attributes that are hard to achieve without a unifying architecture; it's also the conceptual glue that holds every phase of a project together for its many stakeholders. Last month, we presented two postings in a series from a panel at SATURN 2012 titled "Reflections on 20 Years of Software Architecture" that discussed the increased awareness of architecture as a primary means for achieving desired quality attributes and advances in software architecture practice for distributed real-time embedded systems during the past two decades.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in information assurance and agile, the Team Software Process (TSP), CERT secure coding standards, resource allocation, fuzzing, cloud computing interoperability, and cloud computing at the tactical edge. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Organizational improvement efforts should be driven by business needs, not by the content of improvement models. While improvement models, such as the Capability Maturity Model Integration (CMMI) or the Baldrige Criteria for Performance Excellence, provide excellent guidance and best practice standards, the way in which those models are implemented must be guided by the same drivers that influence any other business decision. Business drivers are the collection of people, information, and conditions that initiate and support activities that help an organization accomplish its mission.

In previous blog posts, I have written about applying similarity measures to malicious code to identify related files and reduce analysis expense. Another way to observe similarity in malicious code is to leverage analyst insights by identifying files that possess some property in common with a particular file of interest. One way to do this is by using YARA, an open-source project that helps researchers identify and classify malware. YARA has gained enormous popularity in recent years as a way for malware researchers and network defenders to communicate their knowledge about malicious files, from identifiers for specific families to signatures capturing common tools, techniques, and procedures (TTPs). This blog post provides guidelines for using YARA effectively, focusing on selection of objective criteria derived from malware, the type of criteria most useful in identifying related malware (including strings, resources, and functions), and guidelines for creating YARA signatures using these criteria.
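
As a concrete illustration of such criteria, the sketch below defines a small YARA rule from objective features (two distinctive strings and a byte sequence) and applies it through the open-source yara-python bindings. The rule name, strings, and sample path are invented for illustration; they do not describe any real malware family.

```python
"""Illustrative sketch: build a YARA rule from objective criteria and
apply it with the yara-python bindings. All indicators below are
invented; they do not describe any real malware family."""
import yara

RULE_SOURCE = r"""
rule example_family_indicator
{
    meta:
        description = "Hypothetical indicators for a family of interest"
    strings:
        $s1 = "mutex_example_xyz"              // hypothetical mutex name
        $s2 = { 6A 40 68 00 30 00 00 }         // hypothetical code bytes
        $s3 = "http://example.invalid/beacon"  // hypothetical C2 URL
    condition:
        2 of ($s*)     // require two criteria to reduce false positives
}
"""

rules = yara.compile(source=RULE_SOURCE)
for match in rules.match("/samples/suspect.bin"):   # assumed sample path
    print(match.rule, match.strings)
```

Requiring two of the three criteria, rather than any one, reflects the kind of judgment the post discusses: each added objective criterion narrows the match toward genuinely related files.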

By analyzing vulnerability reports for the C, C++, Perl, and Java programming languages, the CERT Secure Coding Team observed that a relatively small number of programming errors leads to most vulnerabilities. Our research focuses on identifying insecure coding practices and developing secure alternatives that software programmers can use to reduce or eliminate vulnerabilities before software is deployed. In a previous post, I described our work to identify vulnerabilities that informed the revision of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standard for the C programming language. The CERT Secure Coding Team has also been working on the CERT C Secure Coding Standard, which contains a set of rules and guidelines to help developers code securely. This posting describes our latest set of rules and recommendations, which aims to help developers avoid undefined and/or unexpected behavior in deployed code.

Last week, we presented the first posting in a series from a panel at SATURN 2012 titled "Reflections on 20 Years of Software Architecture." In her remarks on the panel summarizing the evolution of software architecture work at the SEI, Linda Northrop, director of the SEI's Research, Technology, and System Solutions (RTSS) Program, referred to the steady growth in system scale and complexity over the past two decades and the increased awareness of architecture as a primary means for achieving desired quality attributes, such as performance, reliability, evolvability, and security.

A search on the term "software architecture" on the web as it existed in 1992 yielded 88,700 results. In May, during a panel providing a 20-year retrospective on software architecture hosted at the SEI Architecture Technology User Network (SATURN) conference, moderator Rick Kazman noted that on the day of the panel discussion--May 9, 2012--that same search yielded 2,380,000 results. This nearly 27-fold increase stems from various factors, including the steady growth in system complexity, the increased awareness of the impact of software architecture on system quality attributes, and the quality and impact of efforts by the SEI and other groups conducting research and transition activities on software architecture. This blog posting--the first in a series--provides a lightly edited transcription of the presentation of the first panelist, Linda Northrop, director of the SEI's Research, Technology, & System Solutions (RTSS) Program, who provided an overview of the evolution of software architecture work at the SEI during the past twenty years.

For more than 10 years, scientists, researchers, and engineers used the TeraGrid supercomputer network funded by the National Science Foundation (NSF) to conduct advanced computational science. The SEI has joined a partnership of 17 organizations and helped develop the successor to the TeraGrid called the Extreme Science and Engineering Discovery Environment (XSEDE). This posting, which is the first in a multi-part series, describes our work on XSEDE that allows researchers open access--directly from their desktops--to the suite of advanced computational tools and digital resources and services provided via XSEDE. This series is concerned not so much with supercomputers and supercomputing middleware as with the nature of software engineering practice at the scale of the socio-technical ecosystem.

All software engineering and management practices are based on cultural and social assumptions. When adopting new practices, leaders often find mismatches between those assumptions and the realities within their organizations. The SEI has an analysis method called Readiness and Fit Analysis (RFA) that profiles a set of practices to surface their cultural assumptions and then uses that profile to help an organization understand how well it fits with those assumptions. RFA has been used for multiple technologies and sets of practices, most notably for adoption of CMMI practices.

Since 2001, researchers at the CERT Insider Threat Center have documented malicious insider activity by examining media reports and court transcripts and conducting interviews with the United States Secret Service, victims' organizations, and convicted felons. Among the more than 700 insider threat cases that we've documented, our analysis has identified more than 100 categories of weaknesses in systems, processes, people, or technologies that allowed insider threats to occur. One aspect of our research has focused on identifying enterprise architecture patterns that protect organizational systems from malicious insider threat.

Engineering the architecture for a large and complex system is a hard, lengthy, and complex undertaking. System architects must perform many tasks and use many techniques if they are to create a sufficient set of architectural models and related documents that are complete, consistent, correct, unambiguous, verifiable, usable, and useful to the architecture's many stakeholders. This blog posting, the second in a two-part series, takes a deeper dive into the Method Framework for Engineering System Architectures (MFESA), which is a situational process engineering framework for developing system-specific methods to engineer system architectures.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in workforce competency and readiness, cyber forensics, exploratory research, acquisition, and software-reliant systems. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

This post is the third and final installment in a three-part series that explains how Nedbank, one of the largest banks in South Africa, is rolling out the SEI's Team Software Process (TSP) throughout its IT organization. In the first post of this series, I examined how Nedbank addressed issues of quality and productivity among its software engineering teams using TSP at the individual and team level. In the second post, I discussed how the SEI worked with Nedbank to address challenges with expanding and scaling the use of TSP at an organizational level. In this post, I first explore challenges common to many organizations seeking to improve performance and become more agile and conclude by demonstrating how SEI researchers addressed these challenges in the TSP rollout at Nedbank.

This post is the second installment in a three-part series that explains how Nedbank, one of the largest banks in South Africa, is rolling out the SEI's Team Software Process (TSP)--a disciplined and agile software process improvement method--throughout its IT organization. In the first post of this series, I examined how Nedbank addressed issues of quality and productivity among its software engineering teams using TSP at the individual and team level. In this post, I will discuss how the SEI worked with Nedbank to address challenges with expanding and scaling the use of TSP at an organizational level.

Major acquisition programs increasingly rely on software to provide substantial portions of system capabilities. All too often, however, software is not considered when the early, most constraining program decisions are made. SEI researchers have identified misalignments between software architecture and system acquisition strategies that lead to program restarts, cancellations, and failures to meet important missions or business goals. This blog posting--the second installment in a two-part series--builds on the discussions in part one by introducing several patterns of misalignment--known as anti-patterns--that we've identified in our research and discussing how these anti-patterns are helping us create a new method for aligning software architecture and system acquisition strategies to reduce project failure.

Engineering the architecture for a large and complex system is a hard, lengthy, and complex undertaking. System architects must perform many tasks and use many techniques if they are to create a sufficient set of architectural models and related documents that are complete, consistent, correct, unambiguous, verifiable, and both usable by and useful to the architecture's many stakeholders. This blog posting, the first in a two-part series, presents the Method Framework for Engineering System Architectures (MFESA), which is a situational process engineering framework for developing system-specific methods to engineer system architectures. This posting provides a brief historical description of situational method engineering, explains why no single system architectural engineering method is adequate, and introduces MFESA by providing a top-level overview of its components, describing its applicability, and explaining how it simultaneously provides the benefits of standardization and flexibility.

Major acquisition programs increasingly rely on software to provide substantial portions of system capabilities. Not surprisingly, therefore, software issues are driving system cost and schedule overruns. All too often, however, software is not even a consideration when the early, most constraining program decisions are made. Through analysis of troubled programs, SEI researchers have identified misalignments between software architecture and system acquisition strategies that lead to program restarts, cancellations, and failures to meet important missions or business goals. To address these misalignments, the SEI is conducting new research on enabling organizations to reduce program failures by harmonizing their acquisition strategy with their software architecture.

As part of our research related to early acquisition lifecycle cost estimation for the Department of Defense (DoD), my colleagues in the SEI's Software Engineering Measurement & Analysis initiative and I began envisioning a potential solution that would rely heavily on expert judgment of possible future program execution scenarios. Prior to our work on cost estimation, many parametric cost models required domain expert input, but, in our opinion, they did not address alternative scenarios of execution that might occur from Milestone A onward.

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical, software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum. The event brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in mission-critical environments found in government and many industries. This blog posting, the fifth and final installment in a multi-part series highlighting research presented during the forum, summarizes a presentation I gave on the importance of applying agile methods to common operating platform environments (COPEs) that have become increasingly important for the Department of Defense (DoD).

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical, software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum. The event brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in mission-critical environments found in government and many industries.

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical, software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum. The event brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in mission-critical environments found in government and many industries. This blog posting, the third installment in a multi-part series highlighting research presented during the forum, summarizes a presentation made during the forum by Ipek Ozkaya, a senior researcher in the SEI's Research, Technology & System Solutions program, who discussed the use of agile architecture practices to manage strategic, intentional technical debt.

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical, software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum, which brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in mission-critical environments found in government and many industries. This blog posting, the second installment in a multi-part series, summarizes a presentation made during the forum by Mary Ann Lapham, a senior researcher in the SEI's Acquisition Support Program, who highlighted the importance of collaboration with end users, as well as among cross-functional teams, to facilitate the adoption of agile approaches into DoD acquisition programs.

While agile methods have become popular in commercial software development organizations, the engineering disciplines needed to apply agility to mission-critical software-reliant systems are not as well defined or practiced. To help bridge this gap, the SEI recently hosted the Agile Research Forum, which brought together researchers and practitioners from around the world to discuss when and how to best apply agile methods in the mission-critical environments found in government and many industries. This blog posting, the first in a multi-part series, highlights key ideas and issues associated with applying agile methods to address the challenges of complexity, exacting regulations, and schedule pressures that were presented during the forum.

As security specialists, we are often asked to audit software and provide expertise on secure coding practices. Our research and efforts have produced several coding standards specifically dealing with security in popular programming languages, such as C, Java, and C++. This posting describes our work on the CERT Perl Secure Coding Standard, which provides a core of well-documented and enforceable coding rules and recommendations for Perl, which is a popular scripting language.

Buffer overflows--an all too common problem that occurs when a program tries to store more data in a buffer, or temporary storage area, than it was intended to hold--can cause security vulnerabilities. In fact, buffer overflows led to the creation of the CERT program, starting with the infamous 1988 "Morris Worm" incident in which a buffer overflow allowed a worm entry into a large number of UNIX systems. For the past several years, the CERT Secure Coding team has contributed to a major revision of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standard for the C programming language. Our efforts have focused on introducing much-needed enhancements to C and its standard library to address security issues, such as buffer overflows.

By law, major defense acquisition programs are now required to prepare cost estimates earlier in the acquisition lifecycle, including pre-Milestone A, well before concrete technical information is available on the program being developed. Estimates are therefore often based on a desired capability--or even on an abstract concept--rather than on a concrete technical solution plan to achieve the desired capability. The role and modeling of assumptions thus become more challenging. This blog posting outlines a multi-year project on Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE) conducted by the SEI Software Engineering Measurement and Analysis (SEMA) team. QUELCE is a method for improving pre-Milestone A software cost estimates through research designed to improve judgment regarding uncertainty in key assumptions (which we term program change drivers), the relationships among the program change drivers, and their impact on cost.
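
The toy Monte Carlo sketch below (in Python) illustrates the underlying idea of treating cost as a distribution driven by uncertain change drivers rather than as a point estimate. The driver names, probabilities, and cost multipliers are invented, and the sketch deliberately omits QUELCE's modeling of dependencies among drivers.

```python
"""Toy illustration of driver-based cost uncertainty: simulate many
program execution scenarios and report a cost distribution. All
numbers below are invented; this is not the QUELCE method itself,
which also models dependencies among change drivers."""
import random

BASELINE_COST = 100.0  # nominal program cost, in $M (assumed)

# driver name: (probability the driver fires, cost multiplier if it does)
CHANGE_DRIVERS = {
    "requirements_growth":   (0.40, 1.30),
    "vendor_slippage":       (0.25, 1.15),
    "technology_immaturity": (0.15, 1.50),
}

def one_scenario(rng):
    cost = BASELINE_COST
    for prob, multiplier in CHANGE_DRIVERS.values():
        if rng.random() < prob:   # the driver fires in this scenario
            cost *= multiplier
    return cost

rng = random.Random(42)
samples = sorted(one_scenario(rng) for _ in range(10000))
print("median: %.1f  80th percentile: %.1f" % (samples[5000], samples[8000]))
```

Reporting the spread between the median and an upper percentile, rather than a single number, is what makes the uncertainty in the estimate visible to decision makers.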

The extent of software in Department of Defense (DoD) systems has increased by more than an order of magnitude every decade. This is not just because there are more systems with more software; a similar growth pattern has been exhibited within individual, long-lived military systems. In recognition of this growing software role, the Director of Defense Research and Engineering (DDR&E, now ASD(R&E)) requested the National Research Council (NRC) to undertake a study of defense software producibility, with the purpose of identifying the principal challenges and developing recommendations regarding both improvement to practice and priorities for research.

Happy Memorial Day. As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in architecture analysis, patterns for insider threat monitoring, source code analysis and insider threat security reference architecture. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Our modern data infrastructure has become remarkably effective at getting you the information you need, when you need it--so effective that we rely on having instant access to information in many aspects of our lives. Unfortunately, there are still situations in which the data infrastructure cannot meet our needs due to various limitations at the tactical edge, a term used to describe hostile environments with limited resources, from war zones in Afghanistan to disaster relief in countries like Haiti and Japan. This blog post describes our ongoing research in the Advanced Mobile Systems initiative at the SEI on edge-enabled tactical systems to address problems at the tactical edge.

According to the 2011 CyberSecurity Watch Survey, approximately 21 percent of cyber crimes against organizations are committed by insiders. Of the 607 organizations participating in the survey, 46 percent stated that the damage caused by insiders was more significant than the damage caused by outsiders. Over the past 11 years, researchers at the CERT Insider Threat Center have documented incidents related to malicious insider activity. Their sources include media reports, the courts, the United States Secret Service, victim organizations, and interviews with convicted felons.

Common operating platform environments (COPEs) are reusable software infrastructures that incorporate open standards; define portable interfaces, interoperable protocols, and data models; offer complete design disclosure; and have a modular, loosely coupled, and well-articulated software architecture that provides applications and end users with many shared capabilities. COPEs can help reduce recurring engineering costs, as well as enable developers to build better and more powerful applications atop a COPE, rather than wrestling repeatedly with tedious and error-prone infrastructure concerns.

Mission-critical operations in the Department of Defense (DoD) increasingly depend on complex software-reliant systems-of-systems (abbreviated as "systems" below). These systems are characterized by a rapidly growing number of connected platforms, sensors, decision nodes, and people. While facing constrained budget, expanded threat, and engineering workforce challenges, the DoD is trying to obtain greater efficiency and productivity in defense spending needed to acquire and sustain these systems. This blog posting--the first in a three-part series--motivates the need for DoD common operating platform environments that can help collapse today's stove-piped solutions to decrease costs, spur innovation, and increase acquisition and operational performance.

According to the 2011 CyberSecurity Watch Survey, approximately 21 percent of cyber crimes against organizations are committed by insiders. Of the 607 organizations participating in the survey, 46 percent stated that the damage caused by insiders was more significant than the damage caused by outsiders. Over the past 11 years, CERT Insider Threat researchers have collected incidents related to malicious activity by insiders obtained from a number of sources, including media reports, the courts, the United States Secret Service, victim organizations, and interviews with convicted felons.

Many modern software systems employ shared-memory multithreading and are built using software components, such as libraries and frameworks. Software developers must carefully control the interactions between multiple threads as they execute within those components. To manage this complexity, developers use information hiding to treat components as "black boxes" with known interfaces that explicitly specify all necessary preconditions and postconditions of the design contract, while using an appropriate level of abstraction to hide unnecessary detail.
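
A minimal sketch of this idea in Python: a component that hides its locking discipline behind an interface whose documentation states the thread-safety contract and the preconditions and postconditions clients may rely on. The class and its names are illustrative, not drawn from any particular library.

```python
"""Illustrative sketch: information hiding applied to a concurrent
component. Clients see only the documented contract; the lock is an
internal detail they never touch."""
import threading

class BoundedCounter:
    """Thread-safe bounded counter.

    Contract: increment() may be called concurrently from any thread
    with no external locking. Precondition: the count is below the
    bound. Postcondition: the count grows by exactly one and the new
    value is returned; otherwise ValueError is raised and the count
    is unchanged.
    """

    def __init__(self, bound):
        self._bound = bound
        self._count = 0
        self._lock = threading.Lock()   # hidden synchronization detail

    def increment(self):
        with self._lock:                # internal, invisible to callers
            if self._count >= self._bound:
                raise ValueError("bound exceeded")
            self._count += 1
            return self._count
```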

New acquisition guidelines from the Department of Defense (DoD) aimed at reducing system lifecycle time and effort are encouraging the adoption of Agile methods. There is a general lack, however, of practical guidance on how to employ Agile methods effectively for DoD acquisition programs. This blog posting describes our research on providing software and systems architects with a decision making framework for reducing integration risk with Agile methods, thereby reducing the time and resources needed for related work.

In his book Drive, Daniel Pink writes that knowledge workers want autonomy, purpose, and mastery in their work. A big problem with any change in processes is getting the people who do the work to change how they work. Too often, people are told what to do instead of being given the information, autonomy, and authority to analyze and adopt the new methods for themselves. This posting--the first in a two-part series--describes a case study that shows how Team Software Process (TSP) principles allowed developers at a large bank to address challenges, improve their productivity, and thrive in an agile environment.

In my preceding blog post, I promised to provide more examples highlighting the importance of software sustainment in the US Department of Defense (DoD). My focus is on certain configurations of weapons systems that are no longer in production for the United States Air Force but are expected to remain a key component of our defense capability for decades to come; their software upgrade cycles must therefore refresh capabilities every 18 to 24 months. Throughout this series on efficient and effective software sustainment, I will highlight examples from each branch of the military. This second blog post describes effective sustainment engineering efforts in the Air Force, using examples from across the service's Air Logistics Centers (ALCs).

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in embedded systems, risk management, risk-based measurement and analysis, early lifecycle cost estimation, and techniques for detecting data anomalies. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Many DoD computing systems--particularly cyber-physical systems--are subject to stringent size, weight, and power requirements. The number of sensor readings and functions these systems must support is also increasing, and the associated processing must satisfy real-time requirements. This situation motivates the need for computers with greater processing capacity. For example, to fulfill the requirements of nano-sized unmanned aerial vehicles (UAVs), developers must choose a computer platform that offers significant processing capacity and then manage its processing resources to meet the needs of autonomous surveillance missions. This blog post discusses these issues and highlights our research that addresses them.

Our SEI blog has included thoughtful discussions about sustaining software, such as the two-part post "The Growing Importance of Sustaining Software for the DoD." Software sustainment is growing in importance because the lifetimes of hardware systems greatly exceed the normal lifetimes of the software systems they are paired with, and because system functionality increasingly depends on software elements. This blog post--the first in a multi-part series--provides specific examples of the importance of software sustainment in the Department of Defense (DoD), where software upgrade cycles need to refresh capabilities every 18 to 24 months on weapon systems that have been out of production for many years but are expected to maintain defense capability for decades.

The SEI has been actively engaged in defining and studying high maturity software engineering practices for several years. Levels 4 and 5 of the CMMI (Capability Maturity Model Integration) are considered high maturity and are predominantly characterized by quantitative improvement. This blog posting briefly discusses high maturity and highlights several recent works in the area of high maturity measurement and analysis, motivated in part by a recent comment on a Jan. 30 post asking about the latest research in this area. I've also included links where the published research can be accessed on the SEI website.

We use the SEI Blog to inform you about the latest work at the SEI, so this week I'm summarizing some video presentations recently posted to the SEI website from the SEI Technologies Forum. This virtual event held in late 2011 brought together participants from more than 50 countries to engage with SEI researchers on a sample of our latest work, including cloud computing, insider threat, Agile development, software architecture, security, measurement, process improvement, and acquisition dynamics. This post includes a description of all the video presentations from the first event, along with links where you can view the full presentations on the SEI website.

Over the past several years, the SEI has explored the use of Agile methods in DoD environments, focusing both on whether and when they are suitable and on how to use them most effectively when they are. Our research has approached the topic of Agile methods from both an acquisition and a technical perspective. Stephany Bellomo described some of our experiences in the previous blog posts What is Agile? and Building a Foundation for Agile. This post summarizes a project the SEI has undertaken to review and study Agile approaches, with the goal of developing guidance for their effective application in DoD environments.

As part of an ongoing effort to keep you informed about our latest work, I'd like to let you know about some recently published SEI technical reports and notes. These reports highlight the latest work of SEI technologists in insider threat, interoperability, service-oriented architecture, operational resilience, and automated remediation. This post includes a listing of each report, author(s), and links where the published reports can be accessed on the SEI website.

Managing technical debt, which refers to the rework and degraded quality resulting from overly hasty delivery of software capabilities to users, is an increasingly critical aspect of producing cost-effective, timely, and high-quality software products. A delicate balance is needed between the desire to release new software capabilities rapidly to satisfy users and the desire to practice sound software engineering that reduces rework.

In our work with acquisition programs, we've often observed a major problem: requirements specifications that are incomplete, with many functional requirements missing. Requirements specifications typically describe normal system behavior, but they are often woefully incomplete when it comes to off-nominal behavior: the abnormal events and situations a system must detect and how it must react when it detects them. In other words, although requirements typically specify how the system must behave under normal conditions, they often do not adequately specify how it must behave if it cannot or should not behave as normally expected. This blog post examines requirements engineering for off-nominal behavior.
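
A minimal sketch (in Python) of the distinction: the first function implements only the nominal requirement, while the second makes the off-nominal requirements explicit--what the system must detect and how it must react. The sensor interface, names, and thresholds are hypothetical.

```python
"""Illustrative sketch: nominal-only behavior versus explicitly
specified off-nominal behavior. The sensor object and all thresholds
are hypothetical."""

SAFE_MODE_ALTITUDE = 0.0   # illustrative fallback reported in safe mode

def read_altitude_nominal(sensor):
    # Covers only the nominal requirement: "the system shall report altitude."
    return sensor.read()

def read_altitude(sensor, timeout_s=0.5, valid_range=(-100.0, 20000.0)):
    """Also covers off-nominal requirements: detect a missing or
    implausible reading, and react by falling back to a safe mode."""
    try:
        value = sensor.read(timeout=timeout_s)
    except TimeoutError:
        return SAFE_MODE_ALTITUDE     # reaction to a "no data" event
    low, high = valid_range
    if not (low <= value <= high):
        return SAFE_MODE_ALTITUDE     # reaction to implausible data
    return value
```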