Category: Measurement & Analysis

This post was co-authored by Bill Nichols.


MITRE's Top 25 Most Dangerous Software Errors is a list that details quality problems as well as security problems. The list aims to help software developers "prevent the kinds of vulnerabilities that plague the software industry, by identifying and avoiding all-too-common mistakes that occur before software is even shipped." These vulnerabilities often result in software that does not function as intended, presenting an opportunity for attackers to compromise a system.

As part of our research related to early acquisition lifecycle cost estimation for the Department of Defense (DoD), my colleagues in the SEI's Software Engineering Measurement & Analysis initiative and I began envisioning a potential solution that would rely heavily on expert judgment about possible future program execution scenarios. Prior to our work on cost estimation, many parametric cost models required domain expert input, but, in our opinion, they did not address alternative execution scenarios that might occur from Milestone A onward.

By law, major defense acquisition programs are now required to prepare cost estimates earlier in the acquisition lifecycle, including pre-Milestone A, well before concrete technical information is available on the program being developed. Estimates are therefore often based on a desired capability--or even on an abstract concept--rather than a concrete technical solution plan to achieve the desired capability. Hence, the role and modeling of assumptions become more challenging. This blog posting outlines a multi-year project on Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE) conducted by the SEI Software Engineering Measurement and Analysis (SEMA) team. QUELCE is a method for improving pre-Milestone A software cost estimates through research designed to improve judgment regarding uncertainty in key assumptions (which we term program change drivers), the relationships among the program change drivers, and their impact on cost.
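To make the idea of propagating uncertainty concrete, the sketch below shows a generic Monte Carlo simulation in which a few hypothetical program change drivers (requirements growth, technology maturity, staffing shortfall) each contribute an uncertain multiplier to a baseline cost. The driver names, distributions, and multiplicative cost model are illustrative assumptions for this post only; they are not the QUELCE model itself.

```python
# Hypothetical sketch: propagating uncertainty in a few "program change
# drivers" into a cost distribution via Monte Carlo simulation. Driver
# names, distributions, and the cost model are illustrative assumptions,
# not the actual QUELCE method.
import random

N = 10_000
BASE_COST_M = 120.0  # baseline estimate in $M (assumed)

costs = []
for _ in range(N):
    # Each driver contributes a multiplicative adjustment to the baseline.
    requirements_growth = random.triangular(1.0, 1.6, 1.2)   # low, high, mode
    technology_maturity = random.triangular(0.95, 1.5, 1.1)
    staffing_shortfall  = random.triangular(1.0, 1.3, 1.05)
    costs.append(BASE_COST_M * requirements_growth
                             * technology_maturity
                             * staffing_shortfall)

costs.sort()
print(f"median cost estimate : ${costs[N // 2]:.1f}M")
print(f"80th percentile cost : ${costs[int(N * 0.8)]:.1f}M")
```

The point of the sketch is that the output is a distribution of cost outcomes rather than a single point estimate, which is the essence of quantifying uncertainty in key assumptions.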

The SEI has been actively engaged in defining and studying high maturity software engineering practices for several years. Levels 4 and 5 of the CMMI (Capability Maturity Model Integration) are considered high maturity and are predominantly characterized by quantitative improvement. This blog posting briefly discusses high maturity and highlights several recent works in the area of high maturity measurement and analysis, motivated in part by a recent comment on a Jan. 30 post asking about the latest research in this area. I've also included links where the published research can be accessed on the SEI website.

As with any new initiative or tool requiring significant investment, the business value of statistically based predictive models must be demonstrated before they will see widespread adoption. The SEI Software Engineering Measurement and Analysis (SEMA) initiative has been leading research to better understand how existing analytical and statistical methods can be used successfully and how to determine the value of these methods once they have been applied to the engineering of large-scale software-reliant systems.

Organizations run on data. They use it to manage programs, select products to fund or develop, make decisions, and guide improvement. Data comes in many forms, both structured (tables of numbers and text) and unstructured (emails, images, sound, etc.). Data are generally considered high quality if they are fit for their intended uses in operations, decision making, and planning. This definition implies that data quality is both a subjective perception of the individuals involved with the data and an objective quality of the measurements derived from the data set in question. This post describes the work we're doing with the Office of Acquisition, Technology and Logistics (AT&L)--a division of the Department of Defense (DoD) that oversees acquisition programs and is charged with, among other things, ensuring that the data reported to Congress is reliable.
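As a simple illustration of the objective side of data quality, the sketch below computes two common measures, completeness and validity, over a small set of hypothetical program records. The field names and the validity rule are assumptions made for the example, not AT&L's actual reporting schema.

```python
# Minimal sketch of objective data-quality checks (completeness and
# validity) on a small table of program records. Field names and rules
# are hypothetical, for illustration only.
records = [
    {"program": "A", "planned_cost_m": 450.0, "actual_cost_m": 512.3},
    {"program": "B", "planned_cost_m": None,  "actual_cost_m": 98.1},
    {"program": "C", "planned_cost_m": 75.0,  "actual_cost_m": -5.0},
]

def completeness(rows, field):
    """Fraction of rows where the field is present and non-null."""
    return sum(r.get(field) is not None for r in rows) / len(rows)

def validity(rows, field, rule):
    """Fraction of non-null values that satisfy a validity rule."""
    values = [r[field] for r in rows if r.get(field) is not None]
    return sum(rule(v) for v in values) / len(values) if values else 0.0

print("completeness(planned_cost_m):", completeness(records, "planned_cost_m"))
print("validity(actual_cost_m >= 0):",
      validity(records, "actual_cost_m", lambda v: v >= 0))
```

Measures like these capture the objective component of data quality; whether the data are fit for a particular decision remains a judgment made by the people who use them.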

The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found a 34 percent median increase of software size over the estimate. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often cascades and leads to the depletion of funds from others. The challenges encountered in estimating software cost were described in the first post of this two-part series on improving the accuracy of early cost estimates. This post describes new tools and methods we are developing at the SEI to help cost estimation experts get the information they need in a familiar and usable form for producing high-quality cost estimates early in the life cycle.

The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found a 34 percent median increase of software size over the estimate. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often leads to the depletion of funds from another. This post, the first in a series on improving the accuracy of early cost estimates, describes challenges we have observed trying to accurately estimate software effort and cost in Department of Defense (DoD) acquisition programs, as well as other product development organizations.
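To see why size growth matters so much, the back-of-the-envelope sketch below feeds a 34 percent size increase through a generic COCOMO-style parametric model of the form effort = a * size^b. The coefficients and the 300 KSLOC starting size are illustrative assumptions, not values from the Naval Postgraduate School study or from any SEI or DoD model.

```python
# Back-of-the-envelope sketch of how size growth inflates effort in a
# generic parametric model of the form effort = a * size**b. The
# coefficients and starting size are illustrative assumptions; only the
# 34 percent growth figure comes from the study cited above.
A, B = 2.94, 1.10           # assumed parametric coefficients
estimated_ksloc = 300.0     # size estimate at Milestone A (assumed)
actual_ksloc = estimated_ksloc * 1.34   # 34% median size growth

effort_est = A * estimated_ksloc ** B
effort_act = A * actual_ksloc ** B
print(f"effort at estimated size: {effort_est:,.0f} person-months")
print(f"effort at actual size   : {effort_act:,.0f} person-months")
print(f"effort growth           : {(effort_act / effort_est - 1) * 100:.0f}%")
```

Because the exponent is greater than one, a 34 percent size growth produces an even larger effort growth, which is one reason early size misestimates cascade into cost overruns.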