
Why Does Software Cost So Much?

Robert Stoddard

Cost estimation was cited by the Government Accountability Office (GAO) as one of the top two reasons why DoD programs continue to have cost overruns. How can we better estimate and manage the cost of systems that are increasingly software intensive? To contain costs, it is essential to understand the factors that drive costs and which ones can be controlled. Although we understand the relationships between certain factors, we do not yet separate the causal influences from non-causal statistical correlations. In this blog post, we explore how the use of an approach known as causal learning can help the DoD identify factors that actually cause software costs to soar and therefore provide more reliable guidance as to how to intervene to better control costs.

As the role of software in the DoD continues to increase, so does the need to control the cost of software development and sustainment. Consider the following trends cited in a March 2017 report from the Institute for Defense Analyses:

  • The National Research Council (2010) wrote that "The extent of the DoD code in service has been increasing by more than an order of magnitude every decade, and a similar growth pattern has been exhibited within individual, long-lived military systems."

  • The Army (2011) estimated that the volume of code under Army depot maintenance (either post-deployment or post-production support) had increased from 5 million to 240 million SLOC between 1980 and 2009. This trend corresponds to approximately 15 percent annual growth (see the brief growth-rate check after this list).

  • A December 2017 Selected Acquisition Report (SAR) showed that cost growth in large-scale DoD programs is common, with $91 billion of cost growth to date (engineering and estimating) across the DoD portfolio. Poor cost estimation, including early lifecycle estimates, accounts for almost $8 billion of the $91 billion.
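The roughly 15 percent figure in the second bullet is simply a compound-annual-growth-rate calculation on the two SLOC endpoints. Here is a minimal check in Python (illustrative arithmetic only, not taken from the Army or IDA reports):

```python
# Quick check of the growth-rate claim: 5 million SLOC in 1980 growing to
# 240 million SLOC in 2009 implies a compound annual growth rate near 15%.
start_sloc_m = 5       # million SLOC under Army depot maintenance, 1980
end_sloc_m = 240       # million SLOC, 2009
years = 2009 - 1980    # 29 years

cagr = (end_sloc_m / start_sloc_m) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.1%}")  # ~14.3%, i.e., roughly 15 percent
```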

The SEI has a long track record of cost-related research to help the DoD manage costs. In 2012, we introduced Quantifying Uncertainty in Early Lifecycle Cost Estimation (QUELCE) as a method for improving pre-Milestone-A software cost estimates through research designed to improve judgment regarding uncertainty in key assumptions (which we term "program change drivers"), the relationships among the program change drivers, and their impact on cost.

QUELCE synthesized scenario planning workshops, Bayesian Belief Network (BBN) modeling, and Monte Carlo simulation into an estimation method that quantifies uncertainties, allows subjective inputs, visually depicts influential relationships among program change drivers and outputs, and assists with the explicit description and documentation underlying an estimate. The outputs of QUELCE then feed into existing cost-estimation machinery as inputs.
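To give a flavor of the Monte Carlo piece of QUELCE, the sketch below propagates uncertainty in a few hypothetical program change drivers into a distribution of cost outcomes. The driver names, triangular distributions, multiplicative cost relationship, and baseline cost are illustrative assumptions of mine, not the actual QUELCE model or its BBN structure:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # number of Monte Carlo trials

# Hypothetical program change drivers, each expressed as a cost multiplier.
# (Illustrative distributions only; QUELCE elicits these from experts and a BBN.)
requirements_volatility = rng.triangular(1.0, 1.1, 1.5, n)   # creeping scope
staff_turnover          = rng.triangular(0.95, 1.0, 1.3, n)  # loss of key staff
integration_risk        = rng.triangular(1.0, 1.05, 1.4, n)  # external interfaces

baseline_cost_musd = 100.0  # nominal estimate in $M, assumed for illustration
simulated_cost = (baseline_cost_musd * requirements_volatility
                  * staff_turnover * integration_risk)

for pct in (10, 50, 80, 95):
    print(f"P{pct} cost: ${np.percentile(simulated_cost, pct):,.1f}M")
```

In QUELCE itself, the dependencies among drivers come from the BBN elicited in the scenario planning workshops rather than the independent draws used in this sketch.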

While QUELCE focused on early-lifecycle cost estimation (which is highly dependent on subjective expert judgment) to create a connected probabilistic model of change drivers, we realized that major portions of the model could be derived from causal learning of historical DoD program experiences. We found that several existing cost-estimation tools could go beyond traditional statistical correlation and regression and benefit from causal learning as well.

Cost estimation alone does not require causal knowledge, because it focuses on prediction rather than intervention. When prediction is the goal, statistical and machine learning (ML) estimation methods are suitable. If the goal, however, is to intervene to deliberately cause a change via policy or management action, causal learning is more suitable and offers benefits beyond statistical and machine learning methods.

The problem is that statistical correlation and regression alone do not reveal causation, because correlation is only a noisy indicator of a causal connection. As a result, the relationships identified by traditional correlation-based statistical methods are of limited value for driving cost reductions. Moreover, the magnitude of potential cost reductions cannot be confidently derived from the current body of cost-estimation work.
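A small simulated example shows why correlation is a poor guide for intervention. Below, a hidden confounder (staff experience) drives both an observable factor (tool adoption) and cost, so a regression finds a strong association between tool adoption and cost; yet when tool adoption is set by intervention, cost does not move. The variables and effect sizes are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Hidden confounder: staff experience drives both tool adoption and cost.
experience = rng.normal(0.0, 1.0, n)
tool_adoption = experience + rng.normal(0.0, 1.0, n)   # caused by experience
cost = -2.0 * experience + rng.normal(0.0, 1.0, n)     # caused by experience only

# Observational association: regressing cost on tool_adoption suggests that
# more tool adoption "reduces" cost, even though it has no causal effect.
slope_obs = np.polyfit(tool_adoption, cost, 1)[0]
print(f"Observed regression slope: {slope_obs:+.2f}")   # about -1.0

# Intervention: set tool_adoption by fiat (randomize it), breaking its link
# to experience. The regression slope collapses toward zero.
tool_forced = rng.normal(0.0, 1.0, n)
cost_after_intervention = -2.0 * experience + rng.normal(0.0, 1.0, n)
slope_do = np.polyfit(tool_forced, cost_after_intervention, 1)[0]
print(f"Slope under intervention:  {slope_do:+.2f}")    # about 0.0
```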

At the SEI, I am working with a team of researchers to apply causal learning to this problem.

Why Do We Care About Causal Learning?

If we want to proactively control outcomes, we would be far more effective if we knew which organizational, program, training, process, tool, and personnel factors actually caused the outcomes we care about. To know this information with any certainty, we must move beyond correlation and regression to the subject of causation. We also want to establish causation without the expense and challenge of conducting controlled experiments, the traditional approach to learning about causation within a domain. Establishing causation with observational data remains a vital need and a key technical challenge.

Different approaches have arisen to help determine which factors cause which outcomes from observational data. Some approaches are constraint-based, in which conditional independence tests reveal the graphical structure of the data (e.g., if two variables are independent conditioned on a third, there is no edge connecting them). Other approaches are score-based, in which causal graphs are incrementally grown and then pruned to improve a likelihood-based score of how well the resulting graph fits the data. Such a categorization, however, barely does justice to the diversity of causal algorithms that have been developed to address both linear and non-linear causal systems, with Gaussian as well as non-Gaussian error terms.
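As a minimal illustration of the constraint-based idea, the sketch below generates data from a simple chain X -> Z -> Y and applies a partial-correlation (Fisher z) test of conditional independence: X and Y are correlated marginally but independent given Z, so a constraint-based algorithm would place no direct edge between X and Y. This is a hand-rolled, linear-Gaussian check for illustration, not one of the production algorithms mentioned above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 5_000

# Chain structure: X -> Z -> Y, so X and Y are dependent marginally
# but independent once we condition on Z.
x = rng.normal(size=n)
z = 0.8 * x + rng.normal(size=n)
y = 0.8 * z + rng.normal(size=n)

def partial_corr(a, b, c):
    """Correlation of a and b after regressing out c (linear, Gaussian case)."""
    ra = a - np.polyval(np.polyfit(c, a, 1), c)
    rb = b - np.polyval(np.polyfit(c, b, 1), c)
    return np.corrcoef(ra, rb)[0, 1]

def fisher_z_pvalue(r, n, k):
    """Two-sided p-value for zero (partial) correlation with k conditioning variables."""
    zstat = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - k - 3)
    return 2 * (1 - stats.norm.cdf(abs(zstat)))

r_xy = np.corrcoef(x, y)[0, 1]
r_xy_given_z = partial_corr(x, y, z)
print(f"corr(X, Y)     = {r_xy:.3f}  p = {fisher_z_pvalue(r_xy, n, 0):.3f}")
print(f"corr(X, Y | Z) = {r_xy_given_z:.3f}  p = {fisher_z_pvalue(r_xy_given_z, n, 1):.3f}")
# The first test rejects independence; the second does not, so the
# constraint-based rule removes any direct X-Y edge.
```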

Judea Pearl, who is credited as the inventor of Bayesian networks and one of the pioneers of the field, wrote the following about the move from Bayesian networks to causal and counterfactual reasoning:

The development of Bayesian Networks, so people tell me, marked a turning point in the way uncertainty is handled in computer systems. For me, this development was a stepping stone towards a more profound transition, from reasoning about beliefs to reasoning about causal and counterfactual relationships.

Causal learning has come of age from both a theoretical and practical tooling standpoint. Causal learning may be performed on data whether it is derived from experimentation or passive observation. Causal learning has been used recently to identify causal factors within domains such as energy economics, foreign investment benefits, and medical research.

For the past two years, our team has trained in causal learning (causal discovery and causal estimation) at Carnegie Mellon University (CMU) through workshops led by Dr. Richard Scheines, Dean of the Dietrich College of Humanities and Social Sciences, whose research examines graphical and statistical causal inference and the foundations of causation. We have also trained and worked with several faculty and staff in the CMU Department of Philosophy led by Dr. David Danks, whose research focuses on causal learning (human and machine). Additional Department of Philosophy faculty and staff include Peter Spirtes, a professor working on graphical and statistical modeling of causes; Kun Zhang, a professor focused on time-series-oriented causal learning; Joseph Ramsey, the director of research computing; and Madelyn Glymour, a causal analyst and co-author of a causal-learning primer.

What Are the Causal Factors Driving Software Cost?

Armed with this new approach, we are exploring causal factors driving software cost within DoD programs and systems. For example, vendor tools might identify anywhere between 25 and 45 inputs that are used to estimate software-development cost. We believe that the great majority of these factors are not causal and thus distract program managers from identifying the factors that should be controlled. Our approach may identify, for instance, that one causal factor in software cost is the level of a programmer's experience. Such a result would allow the DoD to recommend and review the level of experience of programmers involved in a software-development project.
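To illustrate why a causal finding of this kind matters, the sketch below assumes (purely hypothetically) that developer experience causally lowers cost while project size confounds the naive comparison by driving up both experience and cost. Adjusting for the confounder recovers the effect that an intervention on experience would actually produce; all variable names and coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50_000

# Hypothetical structural model (assumed, for illustration only):
# project_size raises both the experience assigned to a project and its cost;
# experience itself causally lowers cost by 5 units per level.
project_size = rng.normal(0.0, 1.0, n)
experience = 0.7 * project_size + rng.normal(0.0, 1.0, n)
cost = 20.0 * project_size - 5.0 * experience + rng.normal(0.0, 1.0, n)

# Naive regression of cost on experience is badly biased by the confounder.
naive = np.polyfit(experience, cost, 1)[0]

# Adjusting for project_size (a backdoor adjustment in this simple linear case)
# recovers the assumed causal effect of roughly -5.
X = np.column_stack([experience, project_size, np.ones(n)])
adjusted = np.linalg.lstsq(X, cost, rcond=None)[0][0]

print(f"Naive slope:    {naive:+.2f}")     # positive, biased by project size
print(f"Adjusted slope: {adjusted:+.2f}")  # close to the assumed -5.0
```

In practice, deciding which variables to adjust for is exactly what a causal graph learned from program data is meant to tell us.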

In the first phase of our research, we began an initial assessment of causality at multiple levels of granularity to identify and evaluate families of factors related to causality (e.g., customer interface, team composition, experience with the domain and tools, design and code complexity, and quality). We have begun to analyze and apply this approach to software-development project data from several sources.

We are initiating collaborations with major commercial cost-estimation-tool vendors. Through these collaborations, our research team will instruct the vendors in conducting their own causal studies on their proprietary cost-estimation databases, and the vendors will share sanitized causal conclusions with our team.

We are also working with two doctoral students advised by Dr. Barry Boehm, creator of the Constructive Cost Model (COCOMO) and founding director of the Center for Systems and Software Engineering at the University of Southern California (Los Angeles). Last year, our team trained these and other students in causal learning and began coaching them to analyze their tightly controlled COCOMO research data. An initial paper summarizing the work has been published and several other papers will be presented at the 2018 Acquisition Research Symposium.

The tooling for causal learning consists primarily of Tetrad, an open-source program developed and maintained by CMU and University of Pittsburgh researchers through the Center for Causal Discovery. Tetrad provides access to several dozen causal-learning algorithms for causal search, inference, and estimation, and it aims to offer a complete set of causal search algorithms in a friendly graphical user interface that requires no programming knowledge.

Looking Ahead

While we are excited about the potential of causal learning, we are limited by the availability of relevant research data. In many cases, available data may not be shareable due to proprietary concerns or federal regulations governing access to software project data. The SEI is currently participating in a pilot program through which more software program data may be made available for research. We continue to solicit additional research collaborators who have access to research or field data and who want to learn more about causal learning.

Having recently completed a first-year exploratory research project on applying causal learning, we have just embarked on a three-year research project to explore specific factors in greater depth, including whether alternative factors may better capture the causal influences on DoD software program cost. These additional factors will then be incorporated into the causal model to provide a holistic causal cost model for further research and tool development, as well as guidance for software development and acquisition program managers.

Additional Resources

Learn more about Causal Learning:

CMU Open Learning Initiative: Causal & Statistical Reasoning

This tutorial on Causal Learning by Richard Scheines is part 1 of the Workshop on Case Studies of Causal Discovery with Model Search, held on October 25-27, 2013, at Carnegie Mellon University.

Center for Causal Discovery Summer Seminar 2016 Part 1: An Overview of Graphical Models, Loading Tetrad, Causal Graphs, and Interventions

Center for Causal Discovery Summer Seminar 2016 Part 2

Center for Causal Discovery Summer Seminar 2016 Part 3

Center for Causal Discovery Summer Seminar 2016 Part 4

Center for Causal Discovery Summer Seminar 2016 Part 5

Center for Causal Discovery Summer Seminar 2016 Part 6

Center for Causal Discovery Summer Seminar 2016 Part 7

Center for Causal Discovery Summer Seminar 2016 Part 8

Center for Causal Discovery Summer Seminar 2016 Part 9

Center for Causal Discovery Summer Seminar 2016 Part 10

Center for Causal Discovery Summer Seminar 2016 Part 11

Center for Causal Discovery Summer Seminar 2016 Part 12

Center for Causal Discovery Summer Seminar 2016 Part 13

The Book of Why: The New Science of Cause and Effect

Causal Inference in Statistics: A Primer

Discovering Causal Structure: Artificial Intelligence, Philosophy of Science, and Statistical Modeling

Counterfactuals and Causal Inference: Methods and Principles for Social Research (Analytical Methods for Social Research)

Handbook of Causal Analysis for Social Research (Handbooks of Sociology and Social Research)
