Final Installment: 7 Recommended Practices for Monitoring Software-Intensive System Acquisition (SISA) Programs
This is the third installment in a series of three blog posts highlighting seven recommended practices for monitoring software-intensive system acquisition (SISA) programs. This content was originally published on the Cyber Security & Information Analysis Center's online environment known as SPRUCE (Systems and Software Producibility Collaboration Environment). The first two posts in the series explored the challenges of monitoring SISA programs and presented the first five recommended practices:
- Address in contracts
- Set up a dashboard
- Assign and train staff in its interpretation
- Update regularly
- Discuss in program reviews and as needed
This post, which can be read in its entirety on the SPRUCE website, will present the final two recommendations, as well as conditions that will allow organizations to derive the most benefit from these practices.
6. Refresh measures for each new phase/milestone
The program dashboard is a focusing tool to use during program reviews throughout the life of the program, but the measurements on the dashboard change as the program progresses from one phase of work to the next. Throughout the life of the program, the dashboard helps the program manager revisit the same basic questions, even though the specific questions differ from phase to phase. As mentioned in the first practice, at the beginning of a new development phase, contractors should provide the set of measures they intend to use to report progress. The acquisition team needs to become familiar with these measures and their definitions, as described in the third practice.
One example of how measures change is the shift from development activity to operational testing: the measurement of progress changes from deliverables completed to test cases passed. Test cases passed are easier to observe, so progress is more apparent. Measuring the effectiveness of the process also changes. During operational testing, we should observe how long it takes to resolve discovered defects and how much effort is involved in defect tracking. The cycle time for defect management is often directly related to some set of decision processes, and thus its measurement is an indicator of process effectiveness. Such measures of process effectiveness can be supplemented with a count or percentage of "bad test cases." A bad test case is a test that fails whose cause is traceable to an error in the definition of the test--thus representing a process problem rather than a product problem.
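As a rough illustration of how such phase-specific measures might be computed for the dashboard, the sketch below calculates a defect-resolution cycle time and a "bad test case" percentage. The records, field names, and figures are hypothetical and not drawn from any particular program or tool.

```python
from datetime import datetime

# Hypothetical defect and test-case records; a real dashboard would pull these
# from the contractor's defect-tracking and test-management tools.
defects = [
    {"opened": datetime(2024, 3, 1), "resolved": datetime(2024, 3, 8)},
    {"opened": datetime(2024, 3, 4), "resolved": datetime(2024, 3, 6)},
]
test_results = [
    {"passed": True,  "failure_cause": None},
    {"passed": False, "failure_cause": "product"},
    {"passed": False, "failure_cause": "test-definition"},  # a "bad test case"
]

# Defect-management cycle time: average days from discovery to resolution,
# one indicator of process effectiveness during operational testing.
cycle_times = [(d["resolved"] - d["opened"]).days for d in defects]
avg_cycle_time = sum(cycle_times) / len(cycle_times)

# "Bad test case" percentage: failures traceable to an error in the test
# definition, i.e., process problems rather than product problems.
bad_tests = [t for t in test_results
             if not t["passed"] and t["failure_cause"] == "test-definition"]
bad_test_pct = 100.0 * len(bad_tests) / len(test_results)

print(f"Average defect resolution time: {avg_cycle_time:.1f} days")
print(f"Bad test cases: {bad_test_pct:.0f}% of all test cases run")
```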
7. Set new expectations and negotiate changes to commitments with stakeholders
As a result of the program review or other discussions (fourth and fifth practices), the program manager may conclude that stakeholder expectations need to be reset or commitments need to be changed. The content of a program dashboard, its analysis, and subsequent discussions with the contractor serve as the justification for the reset or change and as the basis for determining what changes to stakeholder commitments are necessary. Exercising due diligence on the changes implied by the dashboard helps sustain stakeholder confidence in the program manager's ability to recognize when changes are needed and helps maintain stakeholder interest in and support for the program.
Sometimes the nature of the change is significant. For example, prototypes or simulations are typically performed for any high-risk change to the technology used. The challenge the contractor needs to address is not simply to prove that the technology itself works; rather, it is to prove that the contractor can integrate the new technology into the product and, further, that the contractor is prepared to test a product that contains the new technology. It is not unusual to discover that the new technology is much more difficult to apply than expected and that more schedule and perhaps additional resources are required. If the prototype is on the critical path for development and testing, it is virtually certain to delay product delivery. Such significant changes are likely to be reflected in all four quadrants of the dashboard.
Remember also that many of a program's stakeholders have negotiated commitments with their own stakeholders--commitments they are not likely to share with the program manager, so that they retain flexibility in what they can claim they can or cannot do when renegotiating. The dashboard evidence the program manager provides about his or her own schedule and resource forecasts, product quality, and the other topics covered by the four quadrants--topics that are also of potentially keen interest to stakeholders--gives the program manager the leverage needed to enter into some potentially complex discussions with stakeholders.
Under what conditions will organizations derive the most benefit from these practices?
When an organization neglects the following factors, the effectiveness of the monitoring practices may be severely limited:
- Select capable contractors
Of course, whether they have experience in the domains covered by the program should be the primary consideration. Beyond that, select
- those who have a history of measuring and analyzing their development activities, artifacts, and defects, and who have experience interpreting such measurements to control their projects
- those who have experience defining, using, and improving their development and sustainment processes, e.g., those who demonstrate compliance with various best practice standards (e.g., ISO 9000 or CMMI)
- those who already have the capability to specify, use, analyze, and improve measures that directly or indirectly address the information needs of a program manager
- Staff a capable team
One key to success is to ensure that team members have the training and mentoring they need to make sound technical judgments. Teams must be empowered and encouraged to define their own work processes and regularly evaluate the quality of their work and gauge the progress made. Other enablers include having staff (not necessarily all staff, but one or more)
- assigned and trained in how to interpret the dashboards
- experienced in interpreting trend charts, using earned value management (EVM) systems, and applying process measures such as project yield, defect density, and defect escape (see the sketch following this list)
- who can, from a set of information needs, objectives, and associated measures, identify gaps in what those measures might cover
- experienced with reviewing artifacts similar to those that will be used on the program
- knowledgeable of process and quality system standards, both strengths and limitations
- experienced in change management, risk management, and quality planning
- experienced in peer reviews, unit tests, operational tests, functional tests, etc.
- familiar with assurance cases, if the program will be using them
- experienced in reviewing progress measurements during program reviews, probing for context, and interpreting the meaning of a change in one of those measures
- experienced in explaining measurements and their interpretation to stakeholders
- experienced in negotiating commitments with stakeholders
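To make the measures named above concrete, here is a minimal sketch of how EVM indices and a few process measures might be computed. All figures and variable names are hypothetical; a real program would pull these values from its EVM and defect-tracking systems.

```python
# All figures and names below are hypothetical, for illustration only.

# Earned value management (EVM) indices
earned_value = 420_000.0    # budgeted cost of work performed (EV)
planned_value = 500_000.0   # budgeted cost of work scheduled (PV)
actual_cost = 460_000.0     # actual cost of work performed (AC)

cpi = earned_value / actual_cost     # cost performance index (<1.0 means over cost)
spi = earned_value / planned_value   # schedule performance index (<1.0 means behind schedule)

# Process/quality measures
defects_found_in_phase = 48
defects_escaped_to_later_phases = 12
size_ksloc = 30.0  # size in thousands of source lines of code

defect_density = defects_found_in_phase / size_ksloc
defect_escape_rate = defects_escaped_to_later_phases / (
    defects_found_in_phase + defects_escaped_to_later_phases
)

print(f"CPI={cpi:.2f}  SPI={spi:.2f}")
print(f"Defect density={defect_density:.1f} per KSLOC  "
      f"Defect escape rate={defect_escape_rate:.0%}")
```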
- Provide infrastructure for measurement data (on both sides)
A capable team needs a capable infrastructure. Use of a program dashboard requires collecting and organizing a lot of contractor data and generating charts that relate these measures to time and resource objectives. A measurement infrastructure (e.g., a database and chart generator) can assist in time-stamping the data, linking the data to its source, organizing the data, projecting different views, storing the results, and incorporating them into reports.
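As a rough illustration of such an infrastructure, the sketch below uses SQLite to time-stamp each measurement and link it to its source. The schema, table name, and sample record are assumptions for illustration only, not a prescribed design.

```python
import sqlite3
from datetime import datetime, timezone

# A minimal measurement store (hypothetical schema): every data point is
# time-stamped and linked to its source so dashboard views and reports can
# be regenerated and audited later.
conn = sqlite3.connect("program_measures.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS measures (
        recorded_at TEXT NOT NULL,   -- UTC time stamp, ISO 8601
        source      TEXT NOT NULL,   -- e.g., a contractor report or tool export
        name        TEXT NOT NULL,   -- measure name, e.g., 'test_cases_passed'
        value       REAL NOT NULL
    )
""")

def record(source: str, name: str, value: float) -> None:
    """Store one time-stamped measurement linked to its source."""
    conn.execute(
        "INSERT INTO measures VALUES (?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), source, name, value),
    )
    conn.commit()

record("contractor_monthly_report_2024_06", "test_cases_passed", 182)

# Project one simple view for charting: the latest value of each measure.
latest = conn.execute(
    "SELECT name, value, MAX(recorded_at) FROM measures GROUP BY name"
).fetchall()
print(latest)
```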
Looking Ahead
Below is a listing of selected resources where readers can go for more information on the topics covered in this series of posts. We have also added links to various sources to help amplify a point. Please be mindful that such sources may occasionally include material that differs from some of the recommendations in the article above and the references below.
Resources
Richard Crume. "Who Is to Blame When Government Contracts Go Astray?" IACCM, 2008. https://www.iaccm.com/resources/?id=7967&cb=1558895257
Adrian Pitman, Elizabeth K. Clark, Bradford K. Clark, & Angela Tuffley. "An Overview of the Schedule Compliance Risk Assessment Methodology (SCRAM)." Journal of Cyber Security & Information Systems 1, 4 (2013). https://www.csiac.org/journal-article/an-overview-of-the-schedule-compliance-risk-assessment-methodology-scram/
Defense Acquisition University. Defense Acquisition Guidebook. DAU, 2013. https://www.dau.mil/tools/dag
Steve McConnell. Software Estimation: Demystifying the Black Art (Best Practices Series). Microsoft Press, 2006.
Software Program Managers' Network. The Program Manager's Guide to Software Acquisition Best Practices. Computers & Concepts Associates, 1998. https://www.coursehero.com/file/p67sh26/The-Program-Managers-Guide-to-Software-Acquisition-Best-Practices-Department-of/