
Second Installment: 7 Recommended Practices for Monitoring Software-Intensive System Acquisition (SISA) Programs

SPRUCE Project

This is the second installment in a series of three blog posts highlighting seven recommended practices for monitoring software-intensive system acquisition (SISA) programs. This content was originally published on the Cyber Security & Information Systems Information Analysis Center's online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment). The first post in the series explored the challenges to monitoring SISA programs and presented the first two recommended practices:

  1. Address in contracts
  2. Set up a dashboard

This post, which can be read in its entirety on the SPRUCE website, will present the next three best practices.

3. Assign and train staff to interpret the dashboards

The program dashboard is a tool that helps the program manager (PM) understand a program's progress or status; determine whether the PM or contractor should take action to bring the program back into alignment with the program's (and stakeholders') existing goals and expectations; and recognize when the PM must alert stakeholders to a problem, reset expectations, or renegotiate commitments.

A dashboard includes measurements organized into one or more charts that show, for example, how each measure changes over time.
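
As a concrete (and entirely illustrative) sketch, a dashboard measure can be thought of as a named time series assigned to a quadrant. The field names and quadrant labels below are assumptions for illustration, not the article's prescription.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Measure:
    """A single dashboard measure tracked over time (illustrative only)."""
    name: str        # e.g., "defect-detection rate"
    quadrant: str    # e.g., "Cost, Schedule, Resource Availability, and Predictability"
    unit: str        # e.g., "defects/week"
    history: list = field(default_factory=list)  # (date, value) pairs

    def record(self, when: date, value: float) -> None:
        """Append one observation to the time series behind a chart."""
        self.history.append((when, value))

    def latest(self):
        """Return the most recent (date, value) pair, if any."""
        return self.history[-1] if self.history else None

# Invented example: one measure plotted over two reporting periods
dd_rate = Measure("defect-detection rate",
                  "Effectiveness of Process and Risk Management",
                  "defects/week")
dd_rate.record(date(2015, 1, 31), 12.0)
dd_rate.record(date(2015, 2, 28), 18.5)
```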

Since each of the four quadrants of the dashboard may require several measures or charts, the individual PM is not likely to understand every nuance of their interpretation. An example is a contractor's reported defect-detection rate during verification (e.g., peer reviews, integration testing). If it is high, does that mean that the resulting product will have few faults and vulnerabilities? Or does it mean that many defects were introduced early and verification is finding only the easily discovered ones? And what do the answers to these questions mean to a PM's commitments with other stakeholders, especially those who provide services essential to the program (e.g., test-lab access, training, and piloting) and who have made plans around a particular delivery date?

The PM can assign the measures used for the dashboard charts to different individuals in the program office. Each person should become an expert on interpreting that particular type of measure and changes in it. The staff should also be able to identify what additional information is needed to understand the causes and significance of a change in a measure and to probe when a measurement differs from what was expected. Multiple causes are possible for any deviation from the expected data; in general, it is not possible to diagnose the cause of a deviation (and thus its mitigation or resolution) without discussion with the contractor. This interpretation capability is important not just for program reviews but also for helping the PM reset expectations with stakeholders for schedule, cost, and quality. To provide a factual and compelling explanation for a redirection or other action taken, the PM needs the background behind the measures and the ability to explain what they do and do not mean.

Because different contractors may use different measures, and because the measures may change from phase to phase, staff training on measurements may be particular to the contractor and phase. Small differences in terminology can lead to important nuances of interpretation, and this leads to another reason for assigning different measures to different staff: it allows deep understanding and experience with the particular measures to develop.

4. Populate and update the dashboard regularly and when needed

Information contained in the dashboard can require review or change at different times. As the contract proceeds, the PM updates the dashboard with the measurements regularly provided by the contractor. When milestones are reached, the PM needs to determine, with contractor input, the specific questions that will be asked in the next phase for each quadrant and the measurements the contractor will provide to help answer those questions. It is also necessary to account for changes that may occur asynchronously:

  • a contractor may initiate a change (e.g., due to the resolution of a problem or a change in technology)
  • a government agency may initiate a change (e.g., in program scope or funding)

In what follows, we provide the motivation for each quadrant of the dashboard and describe the types of measures that might be used in it.

Cost, Schedule, Resource Availability, and Predictability (Quadrant 1)

Motivation: The PM needs to use forecasts to request commitments from stakeholders. If the forecast is very different from the plan, the plan is probably in error and must be updated.

Earned Value Management (EVM) is a de facto industry standard for cost and schedule reporting. One of the principal reasons is the capability it provides to make bounded forecasts of schedule completion and estimated cost at completion (EAC). Contrast this capability with the kind of forecast one obtains from the frequently used Gantt chart. The Gantt chart shows the plan and the work completed; the PM may be able to see the potential for a schedule slip but lacks a basis for predicting how much it will slip. Thus, EVM is more likely to provide realistic forecasts. A realistic forecast of a major cost overrun or schedule slip will cause an immediate reaction by the PM, who knows that something must be done and that decisions will have to be made.
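
To make the contrast with a Gantt chart concrete, here is a minimal sketch of the basic EVM computations. EAC = BAC / CPI is one common estimate-at-completion formula (others exist), and the dollar figures are invented for illustration.

```python
def evm_forecast(bac, pv, ev, ac):
    """Compute basic Earned Value Management indices and forecasts.

    bac: budget at completion; pv: planned value to date;
    ev: earned value to date; ac: actual cost to date.
    """
    cpi = ev / ac    # cost performance index
    spi = ev / pv    # schedule performance index
    eac = bac / cpi  # one common estimate at completion
    vac = bac - eac  # variance at completion (negative = forecast overrun)
    return {"CPI": cpi, "SPI": spi, "EAC": eac, "VAC": vac}

# Invented example: $10M budget, $4M planned, $3.5M earned, $4.2M spent
print(evm_forecast(10_000_000, 4_000_000, 3_500_000, 4_200_000))
# CPI of about 0.83 and SPI of about 0.88 forecast roughly a $12M
# completion cost--a bounded forecast a Gantt chart alone cannot provide.
```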

Similarly, a resource-availability problem has to be escalated quickly. If the PM is aware that a critical test needs to be performed by the end of the month, he or she must immediately confirm resource availability. Likewise, if critical resources are not available when needed, some work will have to be reprioritized and rescheduled. Significant changes may result in a need for program replanning.

Decisions about what to do when cost, schedule, resource, or staffing issues arise must be made carefully but quickly to limit the impacts, because so many schedule factors are interdependent. Severe program-schedule problems occur when different tasks shift onto and off of the project's critical path. These problems can make the schedule completely unpredictable and may cause external stakeholders to lose trust in the PM.

Scope, Progress, and Change Management (Quadrant 2)

Progress is usually recorded when some portion of the work passes from a single engineer to become a team responsibility and then as the work passes from team to team. Progress must be measured against some planned (collection of) deliverable(s). Often, the deliverable can be structured as a collection of smaller artifacts that can be counted to provide a measure of size. For example, testing progress can be measured as a count or percentage of successful completion of a set of critical test cases. A count or percentage of engineering drawings completed is another example of measurement by collection.
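
As a simple illustration of measurement by collection (a sketch, not from the original article), the snippet below computes testing progress from counts of completed artifacts; the numbers are invented.

```python
def percent_complete(completed: int, planned: int) -> float:
    """Progress as the share of planned artifacts completed."""
    if planned <= 0:
        raise ValueError("planned count must be positive")
    return 100.0 * completed / planned

# Invented example: 42 of 60 critical test cases have passed
print(f"Testing progress: {percent_complete(42, 60):.0f}%")  # -> 70%
```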

Sometimes deliverable completion is better represented by a combination of measures. Sometimes measures of physical size or complexity are used (thus allowing "normalization" of the measures, which provides a sounder basis for estimating productivity). Other times, the measures are derived by decomposing the effort to produce the artifact into smaller tasks and their attributes; for example, a design may be represented by a set of prototypes, fan-in/fan-out measures, and the number of software integration steps. More generally, the completion of some activities may be represented by proxies.

Software progress is usually measured in lines of code or some measure of functional size as the code is stored in the configuration-management repository and passes unit-level testing, since this is the point at which responsibility for the code passes from the individual developer to the team. If a measure such as story points is used, the developers already have a notion of a combined measure of size and complexity that is less formal than function points but is probably satisfactory within the team. As a practical matter, it is difficult to rely on story points if one wants a productivity measure for the organization.

Scope, on the other hand, is not about intermediate deliverables. Instead, scope includes every deliverable that must leave the control of the project team. If the product is to be delivered on two platforms instead of one, that is an element of scope. If the product prototype is to go to a national engineering show, that is a change in scope. If a visiting general officer is planning a visit for a demonstration, that is also a change in scope that will consume resources and has a milestone date. All such deliverables must pass some external quality control. For each element of scope, a cost can be assigned, progress can be measured, and delivery can be forecast.

Effectiveness of Process and Risk Management (Quadrant 3)

Acquisition programs hire contractors that promise an effective and productive process. The contractor may even promise to meet ISO 9000 standards or CMMI Level 5. These external measures of process quality are very important to the purchaser and acquirer (see Davenport 2005). Attestations of compliance are evidence that contractors are capable of keeping commitments if they are allowed to follow their defined processes. A PM who asks the contractor to skip steps in a process invites problems that will likely arise sooner or later in the development process. Instead, the acquirer should work to achieve a basic understanding of the processes the contractor intends to use on the program. This pursuit of understanding can begin by having the acquisition team compare the program stakeholders' view of the uncertainty in the product definition and product quality with the contractor's understanding and with what the contractor has delivered and is capable of delivering.

Not many processes need to be monitored, but the contractor's change-management, quality-management, and risk-management processes are three that do. The contractor should have a change-management process through which changes to the project plan, the requirements baseline, and other specifications can be monitored; frequent changes suggest a risk to the predictability of forecasts. The contractor should also have a quality-management plan that specifies the resources assigned to verification and quality-assurance activities, describes the frequency of such quality interventions, and explains how the results of these quality activities will be aggregated and conveyed to the program office.

A typical mechanism for demonstrating process effectiveness is a measure of defect escapes. The defect-escape report is evidence that the contractor is using frequent quality checks of the work and that the resources assigned to quality activities are properly engaged.
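
One plausible way to express defect escapes as a number is sketched below; the function, its parameters, and the attribution of defects to phases are illustrative assumptions, not a prescription from the article.

```python
def defect_escape_rate(found_in_phase: int, escaped_to_later: int) -> float:
    """Fraction of a phase's defects that slipped past its quality checks.

    found_in_phase: defects caught by the phase's reviews and tests
    escaped_to_later: defects attributed to the phase but found later
    """
    total = found_in_phase + escaped_to_later
    return escaped_to_later / total if total else 0.0

# Invented example: peer reviews caught 85 design defects; 15 more
# attributable to design were found during integration testing.
print(f"Design-phase escape rate: {defect_escape_rate(85, 15):.0%}")  # -> 15%
```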

Indicators for risk management also belong in the process-effectiveness quadrant. Unlike most process-effectiveness indicators, risk management is by its nature of equal concern to the contractor and the program office: both must handle risk efficiently and effectively. A PM who assumes that the contractor is 100% responsible for managing risk is deceiving himself or herself. If risk management is not tracked and risks are not mitigated, potential problems will become real issues and significant change management will need to occur. Both issues and change requests will consume significant resources and delay the program schedule.

Product Quality (Quadrant 4)

Typically, program offices have only the modest goal that the developed product should pass operational test. This is a good goal in and of itself, but operational test is a lagging rather than leading indicator of emerging product quality. If product quality is inadequate, relying only on operational test to discover the problems leaves few options. A better approach is to identify and address quality problems early, when they can be resolved more quickly and easily and with less impact on cost, schedule, and stakeholder expectations and commitments. Therefore, PMs should consider how to recognize and track product quality earlier.

For example, there could be a product-quality goal for defect density (perhaps one established for each product-development phase), which can work well if the contractor has historical process and product data by which intermediate product quality can be related to final product quality. If the historical data is insufficient to establish such relationships, this approach is less effective, because it is hard to gauge whether a particular defect density achieved in an early phase of the program will lead to serious problems in the long term.

When historical data is lacking, an alternative approach is to use process heuristics. An example is "the average defect-removal rate during operational test is one and a half (1.5) defects per week." It is hard to do better than that because (1) operational-test failures result in long periods of analysis, rebuilding, and retesting, and (2) many operational-test failures become "test blockers" that prevent the execution of other test cases. Such heuristics allow the acquisition team to work backwards from a target defect density at release to the defect levels that must be reached by the beginning of operational test.
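
As an illustration of working backwards with such a heuristic, the sketch below estimates the maximum number of known defects that could remain at the start of operational test and still meet a release target. The function, its parameters, and the example numbers are invented.

```python
def max_defects_entering_ot(target_at_release: int, ot_weeks: float,
                            removal_rate: float = 1.5) -> float:
    """Work backwards from a release target using a process heuristic.

    Assumes the heuristic quoted above: roughly `removal_rate` defects
    are removed per week of operational test (OT).
    """
    return target_at_release + removal_rate * ot_weeks

# Invented example: 12 weeks of OT are planned, and no more than
# 5 known defects may remain at release.
print(max_defects_entering_ot(5, 12))  # -> 23.0 defects at most entering OT
```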

Another approach to tracking product quality is to develop validation or assurance cases. Assurance cases offer a potentially superior way of checking product quality as the development team progresses through product development, and they are perhaps the best way to measure (track, evaluate, and assess) product quality throughout the life cycle. The earlier webpage on Technical Best Practices for Safety-Critical Systems includes a practice on building safety assurance cases. In some situations, the same approach can be used for other systemic quality attributes, such as system availability and security.

5. Include discussion of dashboard changes in program reviews and as needed

It is easy for program reviews to fail to investigate the right questions. For example, one program review went something like this:

PM: "Wow, you had this sudden increase in program defects this month, what changed?"
Contractor: "Actually, it's good news. We got many defects out of the way."

The contractor completely bypassed the PM's question, telling the PM what the contractor thought the PM wanted to hear. The PM probably did not learn anything regarding

  • how so many defects got introduced into the product
  • what the contractor will do differently in the future
  • whether stakeholder expectations need to be reset
  • the viability of the PM's commitments to the program's stakeholders
  • what the PM can do to help

Instead, program reviews should include a discussion of recent contractor activities, what the contractor learned, what changed, and how these affect progress toward achieving the program's objectives. The program dashboard provides a vehicle for covering the right questions and guiding the discussion.

Before the program review, the program office produces the latest dashboard and compares it to two (or more) earlier dashboards covering the time since the last program review to help identify where to probe. For example, if this review of the dashboard and other historical data for previous estimates of cost, schedule, and so on suggests a problematic trend, the PM may want to make it a focus of upcoming discussions with the contractor. Coming better prepared to the program review, the PM and the acquisition team, referring to the dashboard charts, can ask, "Why did this change, and what is the impact?" They are seeking to understand the reason for a change, not looking for excuses. The questions must be non-threatening, aimed instead at discovering the causes or drivers of learning and change so the team can better gauge the implications for program objectives, schedule, and commitments with stakeholders.
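
One possible way to mechanize this review-to-review comparison is sketched below; the measure names, values, and the 10% threshold are assumptions for illustration, not the article's method.

```python
def flag_trends(snapshots: dict, threshold: float = 0.10) -> list:
    """Flag measures whose latest value drifted from the prior snapshot.

    snapshots: {measure_name: [older_value, ..., latest_value]}
    threshold: relative change that warrants a probing question
    """
    flagged = []
    for name, values in snapshots.items():
        if len(values) < 2 or values[-2] == 0:
            continue  # nothing to compare against
        change = (values[-1] - values[-2]) / abs(values[-2])
        if abs(change) >= threshold:
            flagged.append((name, round(change, 2)))
    return flagged

# Invented example values spanning the last three reporting periods
history = {"EAC ($M)": [10.5, 10.6, 12.1], "open risks": [14, 14, 13]}
print(flag_trends(history))  # -> [('EAC ($M)', 0.14)]: the EAC jump
                             # becomes a focus for the program review
```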

In this way, the program dashboard helps ask the right questions and keeps the discussions on target during program reviews. The PM's thoughts will run something like this: "Why did this change? Is it something I need to act on? Is the contractor handling this issue? Am I going to be able to keep my commitments to stakeholders? Am I going to have to apologize to someone?" In a nutshell, the PM is trying to answer, "What am I going to have to do to ensure the best possible product within cost and schedule constraints?"

A few reminders and cautions: The dashboard doesn't answer the questions; rather, it tells where to probe. If something curious or potentially troubling appears on the program dashboard, the dashboard helps initiate the discussion needed to get at the answer, which in turn helps determine what, if anything, needs to be done. The acquisition team members assigned the particular measures (see the third practice) perform much of the actual probing. The team member assigned a particular measure needs to know how and where to probe and follow up, and must be able to understand what a noticeable change in that measure signifies.

Looking Ahead

The final post in this series will present the final two recommendations as well as conditions that will allow organizations to derive the most benefit from these practices.

Below is a listing of selected resources where readers can go for a deeper dive into the topics covered in this post. We have also added links to various sources to help amplify a point. Please be mindful that such sources may occasionally include material that might differ from some of the recommendations in the article above and the references below.

Resources

Richard Crume. "Who Is to Blame When Government Contracts Go Astray?" IACCM, 2008. https://www.iaccm.com/resources/?id=7967&cb=1558895257

Adrian Pitman, Elizabeth K. Clark, Bradford K. Clark, & Angela Tuffley. "An Overview of the Schedule Compliance Risk Assessment Methodology (SCRAM)." Journal of Cyber Security & Information Systems 1, 4 (2013). https://www.csiac.org/journal-article/an-overview-of-the-schedule-compliance-risk-assessment-methodology-scram/

Defense Acquisition University. Defense Acquisition Guidebook. DAU, 2013. https://www.dau.mil/tools/dag

Steve McConnell. Software Estimation: Demystifying the Black Art (Best Practices Series). Microsoft Press, 2006.

Software Program Managers' Network. The Program Manager's Guide to Software Acquisition Best Practices. Computers & Concepts Associates, 1998. https://www.coursehero.com/file/p67sh26/The-Program-Managers-Guide-to-Software-Acquisition-Best-Practices-Department-of/
