
A New Approach for Developing Cost Estimates in Software Reliant Systems, Second in a Two-Part Series

Robert Ferguson

The Government Accountability Office (GAO) has frequently cited poor cost estimation as one of the reasons for cost overrun problems in acquisition programs. Software is often a major culprit. One study on cost estimation by the Naval Postgraduate School found a 34 percent median increase in software size over the initial estimate. Cost overruns lead to painful Congressional scrutiny, and an overrun in one program often cascades, depleting funds from others. The challenges of estimating software cost were described in the first post of this two-part series on improving the accuracy of early cost estimates. This post describes new tools and methods we are developing at the SEI to help cost estimation experts gather the information they need, in a familiar and usable form, to produce high-quality cost estimates early in the life cycle.

To help overcome the fact that the data available early in a program's life cycle does not correspond to the input data required for most cost estimation models, our method performs the following steps:

  1. Identify program execution change drivers (referred to simply as "drivers") that are specific to the program
  2. Identify an efficient set of scenarios representing combinations of the driver states
  3. Develop a probability model (e.g., a Bayesian Belief Network (BBN)) depicting the cascading nature of the drivers
  4. Supplement traditional use of analogy with the BBN to predict the uncertainty of inputs to traditional cost models for each scenario
  5. Use Monte Carlo simulation to compute a scenario cost estimate based on the uncertain inputs from the previous step
  6. Use Monte Carlo simulation to consolidate the set of scenario cost estimates into a single, final cost estimate for the program

The remainder of this post describes each step in more detail.

In Step 1, we facilitate a short workshop with program domain experts to identify how cost is affected by possible drivers, such as changes in program sponsorship or changes in supplier relationships. Workshop participants first select a "nominal" state for each driver, along with one or more possible alternate states. The alternate states represent future conditions of each driver that would likely affect program cost. We selected the Navy-AF Probability-of-Program-Success (POPS) criteria as a straw man to kick-start this workshop and expedite the discussion of possible drivers for the program. POPS contains seventeen categories, each with multiple decision criteria, mostly related to program management. We extended the straw man with several technical drivers, such as capability-based analysis, capability definition, and systems design. After the drivers and driver states are fully identified, workshop participants subjectively estimate the probability that each driver state will occur in the future. To avoid the common pitfalls of eliciting expert judgments of probability, we draw on recent published work on calibrating expert judgment, based on the book "How to Measure Anything" by Douglas Hubbard.
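The output of Step 1 can be pictured as a simple table of drivers, states, and elicited probabilities. The sketch below is purely illustrative: the driver names, states, and numbers are invented for the example, not taken from the SEI method or any real POPS data. The one invariant worth checking is that each driver's state probabilities form a valid distribution.

```python
# Hypothetical sketch of Step 1 output: each driver has a nominal state,
# one or more alternate states, and expert-elicited probabilities.
# All names and numbers below are illustrative assumptions.
drivers = {
    "sponsorship": {"nominal": 0.70, "sponsor_change": 0.30},
    "supplier_relationship": {"nominal": 0.60, "supplier_switch": 0.25,
                              "supplier_merger": 0.15},
    "capability_definition": {"nominal": 0.80, "scope_growth": 0.20},
}

def validate(drivers):
    """Each driver's state probabilities must sum to 1."""
    for name, states in drivers.items():
        total = sum(states.values())
        if abs(total - 1.0) > 1e-9:
            raise ValueError(f"{name}: probabilities sum to {total}, not 1")
    return True

validate(drivers)
```

In practice the probabilities would come from calibrated expert elicitation rather than being written down directly, but the data shape is the same.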

Step 2 is based on techniques for scenario planning. As Lindgren and Bandhold describe in their book, Scenario Planning - Revised and Updated Edition: The Link Between Future and Strategy, scenario planning has been extensively studied as a means to analyze and manage uncertainty in product development and to support strategic planning. In our use of scenario planning, a scenario is a combination of one or more drivers, each in a specific state. The nominal scenario sets every driver to its nominal state; another scenario might set a small subset of the drivers to one of their alternate states. The number of combinations of drivers and states can clearly explode combinatorially. We employ a driver relationship matrix to subjectively identify the most likely cascading situations among the drivers, along with optional use of an orthogonal array, a statistical technique that evaluates a specific sample of scenarios to produce a model explaining all the remaining scenarios. Together these produce a representative and efficient set of scenarios to guide cost estimation.
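The combinatorial explosion, and one crude way to tame it, can be shown in a few lines. This sketch uses invented drivers and a simple "nominal plus one-driver-off-nominal" subset as a stand-in for the driver relationship matrix and orthogonal array; a real orthogonal array would choose the sample more systematically.

```python
import itertools

# Illustrative driver/state space (names are assumptions, not SEI data).
driver_states = {
    "sponsorship": ["nominal", "sponsor_change"],
    "supplier": ["nominal", "switch", "merger"],
    "capability": ["nominal", "scope_growth"],
}

# Full combinatorial space: 2 * 3 * 2 = 12 scenarios. Even a modest
# number of drivers makes exhaustive estimation impractical.
full_space = list(itertools.product(*driver_states.values()))

# Crude reduced set: the nominal scenario, plus one scenario per single
# alternate state (a stand-in for the orthogonal-array sampling).
names = list(driver_states)
scenarios = [tuple(driver_states[n][0] for n in names)]
for i, n in enumerate(names):
    for alt in driver_states[n][1:]:
        s = list(scenarios[0])
        s[i] = alt
        scenarios.append(tuple(s))

print(len(full_space), len(scenarios))  # 12 full scenarios vs. 5 sampled
```

With seventeen POPS categories plus technical drivers, the full space is astronomically larger, which is why a principled sampling scheme matters.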

In Step 3, we construct a Bayesian Belief Network (BBN), a probabilistic model that represents the drivers and their relationships as envisioned by the program domain experts. Although initially populated with the quantified relationships from the driver relationship matrix, the model may be further refined through analysis of quantitative driver data from completed historical programs. Such refinement produces BBN-modeled drivers that go beyond simple binary states of nominal versus non-nominal to drivers modeled with all of their detailed alternate states and the quantified relationships between the alternate states of different drivers. This approach provides much more modeled information, and more accurate cost estimates, than traditional statistical regression approaches that cover only a small fraction of the driver state combinations decided arbitrarily in advance. With BBNs, the driver states may be flexibly modeled to produce cost estimates regardless of which driver state combinations and data are available.
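The cascading behavior a BBN captures can be illustrated with a deliberately tiny two-node example: a sponsorship change raises the chance of capability scope growth. The structure and all probabilities here are assumptions for illustration; a real program BBN would have many nodes and expert-derived conditional probability tables.

```python
# Minimal two-node BBN sketch (assumed structure and numbers):
# sponsorship change cascades into capability scope growth.
p_sponsor_change = 0.30

# Conditional probability table: P(scope_growth | sponsorship state)
p_scope_growth_given = {"sponsor_change": 0.60, "nominal": 0.10}

# Marginal probability of scope growth, by total probability.
p_scope_growth = (p_sponsor_change * p_scope_growth_given["sponsor_change"]
                  + (1 - p_sponsor_change) * p_scope_growth_given["nominal"])

# Bayesian update: if scope growth is observed, how likely was a
# sponsorship change? This is the "flexible" inference direction BBNs
# support regardless of which evidence is available.
p_change_given_growth = (p_sponsor_change
                         * p_scope_growth_given["sponsor_change"]) / p_scope_growth
print(round(p_scope_growth, 3), round(p_change_given_growth, 3))
```

The same arithmetic, scaled up across dozens of drivers and states, is what BBN tooling automates.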

In Step 4, the experts examine each scenario and apply their knowledge of past programs to select relevant program and/or component analogies and associated Cost Estimation Relationships (CERs), which are empirical formulas that predict cost from domain-specific attributes, researched over decades of DoD program data. One Air Force CER, for example, predicts the cost and labor hours of aircraft development from factors such as aircraft quantity, maximum speed, and weight. Estimators may identify as many as two dozen CERs for the different components and subsystems of a given scenario. After identifying the analogies and associated CERs, the workshop participants use the BBN to compute uncertainty distributions for the input factors to the CERs for each scenario. The benefit of this approach is that the uncertainty of the CER input factors is explicitly documented, rather than hidden behind a single guessed value from which the final cost estimate must be discerned.
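A CER with uncertain inputs might look like the sketch below. The power-law form and every coefficient are invented for illustration and do not represent any actual Air Force CER; the point is the shift from a single guessed weight to a distribution over weights.

```python
import random

# Hypothetical CER of the form cost = a * weight^b * speed^c.
# The form and coefficients are illustrative assumptions only.
def cer_cost(weight_lb, max_speed_kts, a=0.05, b=0.9, c=0.4):
    return a * weight_lb**b * max_speed_kts**c

random.seed(0)

# Step 4's key move: instead of a single guessed input value, attach an
# uncertainty distribution (here, a triangular min/mode/max) to each
# CER input factor for the scenario.
weights = [random.triangular(28_000, 40_000, 32_000) for _ in range(10_000)]
costs = [cer_cost(w, 1_200) for w in weights]
```

Feeding these input distributions into the CERs is what sets up the Monte Carlo simulation in Step 5.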

Step 5 uses traditional cost models in a Monte Carlo simulation, in which hundreds of thousands of "what-if" hypotheses are calculated from the uncertain input factor values to estimate cost for a given scenario. Each scenario thus receives a computed cost estimate in the form of a distribution. This approach yields a confidence range for the cost estimate rather than a single "guesstimate" point value. The immediate benefit is a cost estimate with defined upside and downside uncertainty.
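A minimal Monte Carlo run for one scenario can be sketched as follows. The cost model here (size times productivity times labor rate) and all its triangular parameters are illustrative assumptions standing in for the scenario's CERs.

```python
import random

random.seed(1)

# Monte Carlo sketch for one scenario: sample each uncertain input,
# compute a cost, and summarize the resulting distribution.
# The model form and all parameters are illustrative assumptions.
def sample_cost():
    size_ksloc = random.triangular(80, 160, 100)      # software size, KSLOC
    hours_per_sloc = random.triangular(1.5, 3.0, 2.0)  # effort productivity
    rate = 150                                         # labor rate, $/hour
    return size_ksloc * 1_000 * hours_per_sloc * rate

samples = sorted(sample_cost() for _ in range(50_000))
p10, p50, p90 = (samples[int(len(samples) * q)] for q in (0.10, 0.50, 0.90))
# p10 and p90 bound the downside and upside uncertainty around the
# median p50 -- the "defined upside and downside" the post describes.
```

Each scenario in the reduced set from Step 2 gets its own distribution like this one.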

Lastly, in Step 6, the set of scenarios and their corresponding cost estimate distributions are consolidated into a single cost estimate distribution using another Monte Carlo simulation. From this distribution, statements of estimated cost at different confidence levels may then be derived.

Our research thus far has focused on evaluating various steps of the method for practicality and effectiveness. Through an industry pilot of Steps 1 and 2, we examined the typical rich set of possible drivers and their states, along with the combinatorial explosion of the possible scenarios. This pilot enabled us to refine our use of the driver relationship matrix and statistical orthogonal arrays to control the explosion. A subsequent workshop with representatives of the SEI Acquisition Support Program (ASP) enabled us to explore a possible common set of program execution drivers applicable to DoD programs, in addition to evaluating Steps 1-3. This workshop also enabled us to build an initial BBN model that will be applied in future DoD pilots and become an impetus for more detailed DoD characterization and data collection of program execution drivers.

We've made consistent progress at each stage of the research thus far, and reactions from participants have been positive. Our goal of cost estimate "transparency," mentioned in the first post, is certainly being realized. Recent comments from service cost center staff confirm that the detailed discussion of program execution change drivers and scenarios provides far greater insight into the cost estimate and would significantly assist those who need to review cost estimates. Further work must be done to evaluate the accuracy of the resulting estimates and the impact of the estimating process. We'll keep you posted.

Additional Resources:
For more information about the Software Engineering Measurement & Analysis Initiative, please visit

