
What’s Going On in My Program? 12 Rules for Conducting Assessments


Large-scale acquisition programs are daunting in their size and complexity. Whether they are developing commercial or government systems, they are hard to manage successfully under the best of circumstances. If (or when) things begin to go poorly, however, program-management staff will need every tool at their disposal to get things back on track.

One of those tools is conducting an assessment of the program, which may variously be called an independent technical assessment (ITA), an independent program assessment (IPA), or a red team—or simply a review, investigation, evaluation, or appraisal. Whatever the name, the goal of such activities is to produce objective findings about the state of a program and recommendations for improving it. Assessments are an indispensable way for a program or project management office (PMO) to get an accurate understanding of how things are going and what actions can be taken to make things better. If you’re considering sponsoring such an assessment for your project or program, this blog post provides 12 useful rules to follow to make sure it gets done right, based on our experience at the SEI in conducting system and software assessments of large defense and federal acquisition programs.

I'd also like to gratefully acknowledge my colleagues at MITRE, most notably Jay Crossler, MITRE technical fellow, who collaborated closely with me in co-leading many of the joint federally funded research and development center (FFRDC) assessments that provided the basis for the ideas described in this blog post.

Managing the Assessment: Starting Out and Staying on Track

When you launch an assessment, you must properly address some fundamentals. You can help to ensure a top-quality result by choosing the right organization(s) to conduct the assessment, providing sufficient resources, and asking a few key questions to ensure objectivity and keep things moving along the way.

1. Make sure you get the most skilled and experienced team you can.

Competence and applicable skills are essential to good-quality results.

Assessment teams should be composed of individuals with a variety of skills and backgrounds, including years of experience conducting similar kinds of assessments, domain expertise, multiple relevant areas of supporting technical expertise, and organizational expertise. This goal can be accomplished in part by selecting the most appropriate organization(s) to conduct the assessment and by ensuring that each organization’s expertise is appropriate and sufficient for the task and that it has significant experience conducting such assessments.

An assessment team may consist of a small set of core team members but should also have the ability to involve people in their parent organization(s) as needed for more specialized expertise that may not be known until the assessment is underway. Teams should also have technical advisors—experienced staff members available to provide insight and direction to the team, coach the team lead, and act as critical reviewers. Finally, assessment teams need people to fill the critical roles of leading interviews (and knowing how to ask follow-up questions, and when to pursue additional lines of inquiry), contacting and scheduling interviewees, and storing, securing, and organizing the team’s data. The deeper the level of auxiliary expertise available to the team, the better the analysis.

This diversity of expertise is what allows the team to function most effectively and to produce more key insights from the data they collect than the members could individually. The lack of such diverse skills on the team will directly and adversely affect the quality of the delivered results.

2. Set up the assessment team for success from the start.

Make sure the team has sufficient time, funding, and other resources to do the job properly.

Assessments are inherently labor-intensive activities that require significant effort to produce a quality result. While the costs will vary with the size and scope of the program being assessed, the quality of the deliverable will vary in direct proportion to the investment that’s made. This relationship means that the experience level of the team is a cost factor, as are the breadth and depth of the scope and the duration of the assessment. The available funding should reflect all these factors.

In addition, it’s important to ensure that the team has (and is trained in) the best tools available for collecting, collaborating, analyzing, and presenting the large amounts of information they will be working with. Assessments that must occur in unrealistically short timeframes, such as four to six weeks, or on budgets insufficient to support a team of at least three to five people devoting a majority of their time to it, will rarely produce the most detailed or insightful results.

3. Keep the assessment team objective and unbiased.

Objective, accurate results come only from unbiased assessment teams.

The “independent” aspect of an independent technical assessment is ignored at your peril. In one assessment, a program brought a consultant organization on board to do work closely related to the area being assessed. Since there was potential synergy and sharing of information that could help both teams, the program office suggested creating a hybrid team combining the FFRDC-based assessment team and the consultants. The consultant team endorsed the idea, anticipating the detailed access to information it would gain, but the FFRDC staff were concerned about the loss of the consultants’ objectivity, given their pursuit of planned follow-on work and their eagerness to please the program office. Assessment teams know that their potentially critical findings may not always be met with a warm reception—a difficult position for a consultant whose objective is to establish a multi-year engagement with the organization being assessed.

Including anyone on an assessment team who has a stake in the results, whether they are from the government, the PMO, a contractor, or a vested stakeholder (who may be either positively or negatively predisposed) could introduce conflict within the team. Moreover, their mere presence could undermine the perceived integrity and objectivity of the entire assessment. An assessment team should be composed solely of neutral, independent team members who are willing to report all findings honestly, even if some findings are uncomfortable for the assessed organization to hear.

4. Clear the team a path to a successful assessment.

Help the assessment team do their job by removing obstacles to their progress so they can gather the data they need. More data means better and more compelling results.

One result that may surprise both individuals and organizations is that an independent assessment can benefit them as well as the program, because it can help surface key issues so they get the attention and resources needed to resolve them. If no one had concerns about the fallout of making certain statements publicly, someone probably would have already made them. That some important facts are already known among some program staff—and yet remain unexpressed and unrecognized—is one of the key reasons for conducting an independent assessment: to ensure that those issues are discussed candidly and addressed properly.

Assessment teams should be expected to provide weekly or bi-weekly status reports or briefings to the sponsor point of contact—but these should not include information on interim or preliminary findings. In particular, early findings based on partial information will invariably be flawed and misleading. Such briefings should instead focus on the process being followed, the numbers of interviews conducted and documents reviewed, obstacles encountered and potential interventions being requested, and risks that may stand in the way of completing the assessment successfully. The goal is for progress reporting to focus on the facts needed to ensure that the team has the access and data they need. This structure may disappoint stakeholders who are impatient for early previews of what is to come, but such previews are not the purpose of these meetings.

The assessment team also must be able to access any documents and interview any people they identify as relevant to the assessment. These interviews should be granted regardless of whether they’re with the PMO, the contractor, or an external stakeholder organization. If the assessment team is having trouble scheduling an interview with a key person, the sponsor should intervene to ensure that the interview happens.

If there are difficulties in gaining access to a document repository the team needs to review, that access must be expedited and provided. Data is the fuel that powers assessments, and limiting access to it will only slow the speed and reduce the quality of the result. In one program, the contractor didn’t allow the assessment team access to its developers for interviews, which both skewed and significantly slowed data gathering. The issue was resolved through negotiation and interviews proceeded, but it raised a concern with the PMO about the contractor’s commitment to supporting the program.

Until the final outbriefing has been completed and presented—and the focus shifts to acting on the recommendations—your role as the sponsor is to help the assessment team do their job as effectively, quickly, and efficiently as they can, with as few distractions as possible.

Depth and Breadth: Defining Scope and Access Considerations

Providing basic guidelines to the team on the intended scope to cover is key to conducting a practicable assessment, as it makes the primary assessment goals clear.

5. Keep the scope focused primarily on answering a few key questions, but flexible enough to address other relevant issues that come up.

Overly narrow scope can prevent the assessment team from looking at issues that may be relevant to the key questions.

You will need to provide a few questions that are essential to answer as part of the assessment, such as: What happened with this program? How did it happen? Where do things stand now with the program? Where could the program go from here? What should the program do? The assessment team needs the latitude to explore issues that, perhaps unbeknownst to the PMO, are affecting the program’s ability to execute. Narrowing the scope prematurely may eliminate lines of investigation that could be essential to a full understanding of the issues the program faces.

As the sponsor, you may wish to supply some hypotheses as to why and where you think the problems may be occurring. However, it’s essential to allow the team to uncover the actual relevant areas of investigation. Asking the team to focus on only a few specific areas may not only waste money on unproductive inquiry but may also yield incorrect results.

Another aspect of scope is the need to look at all key stakeholders involved in the program. For example, acquisition contracting requires close coordination between the PMO and the (prime) contractor, and it is not always apparent what the actual root cause of an issue is. Sometimes issues result from cycles of cause and effect between the two entities—each step a seemingly reasonable reaction—that escalate and cascade into serious problems. In one assessment, the PMO believed that many of the program’s issues stemmed from the contractor, when in fact some of the PMO’s directives had inadvertently overconstrained the contractor, creating some of those problems. Looking at the whole picture should make the truth evident and may suggest solutions that would otherwise remain hidden.

Information Handling: Transparency, Openness, and Privacy Considerations

During an assessment, several decisions must occur regarding the degree of transparency and information access that will be provided to the team, the protection of interviewee privacy, and which stakeholders will see the results.

6. Preserve and protect the promise of anonymity that was given to interviewees.

Promising anonymity is the only way to get the truth. Break that promise, and you’ll never hear it again.

Anonymous interviews are a key method of getting to the truth because people aren’t always willing to speak freely in front of their management, out of concern for how it might reflect on them and for their position. Anonymity provides an opportunity for people to speak their minds about what they’ve seen and potentially provide key information to the assessment team. Program leadership may sometimes be tempted to find out who made a certain statement or who criticized an aspect of the program that leadership deemed sacrosanct, but giving in to this temptation is never productive. Once staff see that leadership is willing to violate its promise of anonymity, the word spreads, trust is lost, and few questions that claim to be “off the record” will receive honest answers again. Promising and preserving anonymity is a small price to pay for the enormous return on investment of revealing a key truth that no one had previously been able to say publicly.

7. Conduct assessments as unclassified activities whenever possible.

Assessments are about how things are being done—not what’s being done. They rarely need to be classified.

Even highly classified programs are still able to conduct valuable assessments at the unclassified or controlled unclassified information (CUI) level, because many assessments focus on the process by which the work is accomplished rather than the detailed technical specifics of what is being built. This type of analysis is possible because the types of problems that Department of Defense (DoD) and other federal acquisition programs tend to encounter most often are remarkably similar, even when the specific details of systems vary greatly across programs.

While some assessments focus on specific technical aspects of a system to understand an issue—or explore narrow technical aspects as part of a broader assessment of a program—most major assessments need to look at higher-level, program-wide issues that will have a more profound effect on the outcome. Due to these factors, assessments are largely able to avoid discussing specific system capabilities, specifications, vulnerabilities, or other classified aspects, and thus can avoid the much greater expense and effort involved in working with classified interviews and documents. When classified information is essential for a full understanding of a key issue, classified interviews can be conducted and classified documents reviewed to understand that portion of the system, and a classified appendix can be provided as a separate deliverable.

8. Commit to sharing the results, whatever they turn out to be.

Getting accurate information is the key to improving performance—once you have it, don’t waste it.

Real improvement requires facing some hard truths and addressing them. The best leaders are those who can use the truth to their advantage by demonstrating their willingness to listen, admitting mistakes, and committing to fixing them. In conducting assessments, there have been instances where leaders have been able to build up significant credibility by publicly acknowledging and dealing with their most significant issues. Once these issues are out in the open for all to see, those former weaknesses are no longer a vulnerability that can be used to discredit the program; instead they become just another issue to address.

9. Thank the messengers—even if they bring unwelcome news.

Don’t punish the assessment team for telling you what you needed to hear.

The assessment team gains substantial and deep knowledge of the program over the course of the assessment, and opportunities to leverage that knowledge may be lost if the program is unhappy with the findings—which may have less to do with the correctness of the findings than with the program’s willingness to hear and accept them. It’s important to maintain the proper perspective on the role of the assessment in uncovering issues—even potentially serious ones—and to appreciate the work that has been done by the team, even when it may not always reflect well on all aspects of the program. Now that these issues have been identified, they are known and can be acted upon. That is, after all, the reason the assessment was requested.

Dealing with Complexity: Making Sense of Large, Interconnected Systems

Large-scale systems are generally complex and often must interoperate closely with other large systems—and the organizational structures charged with developing these interoperating systems are often even more complex. Many acquisition problems—even technical ones—have their roots in organizational issues that must be resolved.

10. Simple explanations explain only simple problems.

Large programs are complex, as are the interactions within them. Data can reveal the what of a problem, but rarely the why.

Many assessment findings are not independent, standalone facts that can be addressed in isolation, but are instead part of a web of interrelated causes and effects that must be addressed in its entirety. For example, a finding that there are issues with hiring and retaining expert staff and another that points out recurring issues with productivity and meeting milestones are often related. In one program assessment, the team traced slow business-approval processes to delays in the availability of the planned IT environment, which became a significant source of staff frustration. That frustration led to attrition and turnover, which produced a shortage of skilled staff, which in turn led to schedule delays, missed milestones, and increased schedule pressure. In response, the contractor shortcut its quality processes to try to make up the time, which ultimately led QA to refuse to sign off on a key integration test for the customer.

Programs often have long chains of connected decisions and events with consequences that may manifest far away from their original root causes. Viewing the program as a complex and multi-dimensional system is one way to identify the true root causes of problems and take appropriate action to resolve them.
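To make the web of causes and effects in the example above more tangible, the reinforcing loop it describes (frustration driving attrition, attrition creating a staffing shortfall, the shortfall adding schedule pressure, and pressure feeding back into frustration) can be sketched as a toy simulation, in the spirit of the system dynamics modeling report listed in Additional Resources. The sketch below is purely illustrative: every variable name, rate, and initial value is an assumption chosen for demonstration, not data from any actual assessment.

```python
# Illustrative sketch only (all rates and initial values are invented):
# a reinforcing loop in which staff frustration drives attrition, attrition
# creates a staffing shortfall, the shortfall raises schedule pressure, and
# pressure feeds back into frustration.

def simulate(months=24, staff=100.0, frustration=0.2):
    """Toy stock-and-flow model; returns (staff, frustration, pressure) per month."""
    required_staff = 100.0
    history = []
    for _ in range(months):
        attrition = staff * 0.02 * (1 + 3 * frustration)       # frustration raises attrition
        hiring = min(2.0, required_staff - staff + attrition)   # hiring capacity is limited
        staff = staff - attrition + max(0.0, hiring)
        shortfall = max(0.0, required_staff - staff) / required_staff
        pressure = min(1.0, 2.0 * shortfall)                    # shortfall raises schedule pressure
        frustration = min(1.0, 0.7 * frustration + 0.5 * pressure)  # pressure feeds frustration
        history.append((round(staff, 1), round(frustration, 2), round(pressure, 2)))
    return history

if __name__ == "__main__":
    for month, state in enumerate(simulate(), start=1):
        print(month, state)
```

Even a toy model like this shows how a modest initial imbalance can compound month over month, which is the point of treating the program as a system rather than as a list of unrelated findings.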

In trying to uncover those chains of decisions and events, quantitative statistical data may tell an incomplete story. For example, hiring and retention numbers can tell us a summary of what is happening with our staff overall, but can’t give us an explanation for it, such as why people are interested in working at an organization or why they may be planning to leave. As has been pointed out in Harvard Business Review, “data analytics can tell you what is happening, but it will rarely tell you why. To effectively bring together the what and the why—a problem and its cause... [you need to] combine data and analytics with tried-and-true qualitative approaches such as interviewing groups of individuals, conducting focus groups, and in-depth observation.”

Being able to tell the complete story is the reason why quantitative measurement data and qualitative interview data are both valuable. Interview data plays an essential role in explaining why unexpected or undesirable things are happening on a program—which is often the fundamental question that program managers must answer before they can correct those things.

11. It’s not the people—it’s the system.

If the system isn’t working, it’s more likely a system problem rather than an issue with one individual.

There is a human tendency called attribution bias that encourages us to attribute failures in others to their inherent flaws and failings rather than to external forces that may be acting on them. It’s therefore important to view the actions of individuals in the context of the pressures and incentives of the organizational system they are part of, rather than to think of them only as (potentially misguided) independent actors. If the system is driving inappropriate behaviors, the affected individuals shouldn’t be seen as the problem. One form that attribution bias may take is that when individual stakeholders start to believe their goals are not congruent with the goals of the larger program, they may rationally choose not to advance its interests.

For example, the time horizon of acquisition programs may be significantly longer than the likely tenure of many people working on those programs. People’s interests may thus be more focused on the health of the program during their tenure than on its longer-term health. Such misaligned incentives may motivate people to make decisions in favor of short-term payoffs (e.g., meeting schedule), even if meeting those short-term objectives undermines longer-term benefits (e.g., achieving low-cost sustainment) whose value may not be realized until long after they have left the program. Such dilemmas belong to a subclass of social traps called time-delay traps and include well-documented problems such as incurring technical debt through the postponement of maintenance activities. The near-term positive reward of an action (e.g., not spending on sustainment) masks its long-term consequences (e.g., cumulatively worse sustainment issues that accrue in the system), even though those future consequences are known and understood.
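As a rough, hypothetical worked example of a time-delay trap: suppose skipping maintenance saves a fixed amount each quarter while adding technical debt whose carrying cost compounds. The numbers below are invented solely to show how the visible near-term saving can mask the larger deferred cost; they do not come from any real program.

```python
# Hypothetical arithmetic only: deferring maintenance "saves" a little each quarter,
# but the deferred work accrues a carrying cost that compounds over time.

def deferred_maintenance(quarters=12, quarterly_saving=50.0,
                         debt_added_per_quarter=40.0, carrying_rate=0.15):
    """Compare cumulative savings against the compounding cost of deferred work."""
    debt = 0.0
    savings = 0.0
    for q in range(1, quarters + 1):
        savings += quarterly_saving                                  # visible near-term reward
        debt = debt * (1 + carrying_rate) + debt_added_per_quarter   # hidden accruing cost
        print(f"Q{q:2d}: cumulative savings = {savings:7.1f}, accrued debt = {debt:7.1f}")

deferred_maintenance()
```

In this made-up example the accrued debt overtakes the cumulative savings within roughly five quarters, illustrating how a known long-term consequence can stay hidden behind a steady stream of small near-term rewards.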

12. Look as closely at the organization as you do at the technology.

Programs are complex socio-technical systems—and the human issues can be more difficult to manage than the technical ones.

Systems are made up of interacting mechanical, electrical, hardware, and software components that are all engineered and designed to behave in predictable ways. Programs, however, are made up of interacting autonomous human beings and processes, and as a result are often more unpredictable and exhibit far more complex behaviors. While it may be surprising when engineered systems exhibit unexpected and unpredictable results, such behavior is the norm for organizational systems.

As a result, most complex problems that programs experience involve the human and organizational aspects, and especially the alignment and misalignment of incentives. For example, a joint program building common infrastructure software for multiple stakeholder programs may be forced to make unplanned customizations for some stakeholders to keep them on board. These changes could result in schedule slips or cost increases that drive out the most schedule-sensitive or cost-conscious stakeholder programs and cause rework for the common infrastructure—further driving up costs, delaying the schedule, driving out still more stakeholders, and ultimately causing participation in the joint program to collapse.

It’s important to recognize that technical issues weren’t at the core of what doomed the acquisition program in this example. Instead, it was the misalignment of organizational incentives between the infrastructure program’s attempt to build a single capability that everyone could use and the stakeholder programs’ expectation of a functional capability delivered on time and within cost. Such stakeholder programs might opt to build their own one-off custom solutions when the common infrastructure isn’t available when promised. That is a classic instance of a program failure that has less to do with technical problems and more to do with human motivations.

Meeting Goals and Expectations for Program Assessments

The 12 rules described above are meant to provide some practical help to those of you considering assessing an acquisition program. They provide specific guidance on starting and managing an assessment, defining the scope and providing information access, handling the information coming out of the assessment appropriately, and understanding the general complexity and potential pitfalls of analyzing large acquisition programs.

In practice, an organization that has substantial prior experience in conducting independent assessments should already be aware of most or all of these rules and should already be following them as part of its standard process. If that is the case, then simply use these rules to help ask questions about the way the assessment will be run, to ensure that it will meet your goals and expectations.

Additional Resources

Read “Help Your Team Understand What Data Is and Isn’t Good For” by Joel Shapiro in Harvard Business Review, October 12, 2018.

Read the SEI Acquisition Archetypes, a set of briefs that describe some common acquisition patterns and help acquisition programs handle and avoid them.

Read the SEI technical report, Success in Acquisition: Using Archetypes to Beat the Odds, by William E. Novak and Linda Levine.

Read the SEI technical report, The Evolution of a Science Project: A Preliminary System Dynamics Modeling of a Recurring Software-Reliant Acquisition Behavior, by William E. Novak, Andrew P. Moore, and Christopher Alberts.
