Stop Imagining Threats, Start Mitigating Them: A Practical Guide to Threat Modeling
When building a software-intensive system, a key part of creating a secure and robust solution is to develop a cyber threat model. This is a model that expresses who might be interested in attacking your system, what effects they might want to achieve, when and where attacks could manifest, and how attackers might go about accessing the system. Threat models are important because they guide requirements, system design, and operational choices. Effects can include, for example, compromise of confidential information, modification of information contained in the system, and disruption of operations. There are diverse purposes for achieving these kinds of effects, ranging from espionage to ransomware.
This blog post focuses on a method threat modelers can use to make credible claims about attacks the system could face and to ground those claims in observations of adversary tactics, techniques, and procedures (TTPs).
Brainstorming, subject matter expertise, and operational experience can go a long way in developing a list of relevant threat scenarios. During initial threat scenario generation for a hypothetical software system, it would be possible to imagine, What if attackers steal account credentials and mask their movement by putting false or bad data into the user monitoring system? The harder task, and the one where the perspective of threat modelers is critical, is substantiating that scenario with known patterns of attack or even specific TTPs. These could be informed by potential threat intentions based on the operational role of the system.
Developing practical and relevant mitigation strategies for the identified TTPs is an important contributor to system requirements formulation, which is one of the goals of threat modeling.
This SEI blog post outlines a method for substantiating threat scenarios and mitigations by linking to industry-recognized attack patterns powered by model-based systems engineering (MBSE).
In his memo Directing Modern Software Acquisition to Maximize Lethality, Secretary of Defense Pete Hegseth wrote, “Software is at the core of every weapon and supporting system we field to remain the strongest, most lethal fighting force in the world.” Understanding cyber threats to these complex, software-intensive systems is important, and identifying threats and their mitigations early in a system's design reduces the cost of fixing them. In response to Executive Order (EO) 14028, Improving the Nation’s Cybersecurity, the National Institute of Standards and Technology (NIST) recommended 11 practices for software verification. Threat modeling is at the top of the list.
Threat Modeling Goals: 4 Key Questions
Threat modeling guides the requirements specification and early design choices to make a system robust against attacks and weaknesses. Threat modeling can help software developers and cybersecurity professionals know what types of defenses, mitigation strategies, and controls to put in place.
Threat modelers can frame the process of threat modeling around answers to four key questions (adapted from Adam Shostack):
- What are we building?
- What can go wrong?
- What should we do about those wrongs?
- Was the analysis sufficient?
What are we building? The foundation of threat modeling is the model of the system focused on its potential interactions with threats. A model is a graphical, mathematical, logical, or physical representation that abstracts reality to address a particular set of concerns while omitting details not relevant to the concerns of the model builder. There are many methodologies that provide guidance on how to construct threat models for different types of systems and use cases. For already built systems where the design and implementation are known and where the principal concerns relate to faults and errors (rather than acts by intentioned adversaries), techniques such as fault tree analysis may be more appropriate. These techniques generally assume that desired and undesired states are known and can be characterized. Similarly, kill chain analysis can be helpful to understand the full end-to-end execution of a cyber attack.
However, existing high-level systems engineering models may not be appropriate to identify specific vulnerabilities used to conduct an attack. These systems engineering models can create useful context, but more modeling is necessary to address threats.
In this post I use the Unified Architecture Framework (UAF) to guide the modeling of the system. For larger systems employing MBSE, the threat model can build on DoDAF, UAF, or other architectural framework models. The common thread with all of these models is that threat modeling is enabled by models of information interactions and flows among components. A common model also gives benefits in coordination across large teams. When multiple groups are working on and deriving value from a unified model, the up-front costs can be more manageable.
There are many notations for modeling data flows or interactions. In this blog post we explore the use of an MBSE tool paired with a standard architectural framework to create models with benefits beyond those of simpler diagramming tools or drawings. For existing systems without a model, it is still possible to use MBSE, and it can be adopted incrementally. For instance, if new features are being added to an existing system, it may be enough to model just the portions of the system that interact with the new information flows or data stores and create threat models for this subset of new elements.
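To make this concrete, here is a minimal sketch in Python of the information a threat-focused system model needs at this stage: the elements of the system and the information flows between them. The class and element names are hypothetical and are not tied to UAF or to any particular MBSE tool's API; in practice these elements would live in the architecture model itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    """A system element of interest to the threat model."""
    name: str
    kind: str  # e.g., "process", "data store", "external actor"

@dataclass(frozen=True)
class InformationFlow:
    """A directed exchange of information between two elements."""
    source: Element
    target: Element
    data: str

# A fragment of the hypothetical monitoring example used later in this post.
operator = Element("Operator Account", "external actor")
monitoring = Element("Monitoring Service", "process")
log_store = Element("Log Store", "data store")

flows = [
    InformationFlow(operator, monitoring, "authentication credentials"),
    InformationFlow(monitoring, log_store, "collected monitoring data"),
]

for flow in flows:
    print(f"{flow.source.name} -> {flow.target.name}: {flow.data}")
```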
What Can Go Wrong?
Threat modeling is similar to systems modeling in that there are many frameworks, tools, and methodologies to help guide development of the model and identify potential problem areas. STRIDE is a threat identification taxonomy that is a useful part of modern threat modeling methods, having originally been developed at Microsoft in 1999. Previous work by the SEI extended UAF with a profile that allows us to model the results of a STRIDE-based threat identification step. We continue that approach in this blog post.
STRIDE itself is an acronym standing for spoofing, tampering, repudiation, information disclosure, denial of service, and elevation of privilege. This mnemonic helps modelers categorize the impacts of threats on different data stores and data flows. Previous work by Scandariato et al., in their paper A descriptive study of Microsoft's threat modeling technique, has also shown that STRIDE is adaptable to multiple levels of abstraction: multiple teams modeling the same system did so with data flow diagrams of varying size and composition. When working on new systems or a high-level architecture, a threat modeler may not have all the details needed to take advantage of more in-depth threat modeling approaches. This adaptability is a benefit of the STRIDE approach.
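As a rough illustration of how STRIDE can prompt the what-can-go-wrong questions, the sketch below captures the six threat types as an enumeration and pairs them with one commonly cited STRIDE-per-element heuristic (which threat types are usually worth asking about for each kind of model element). Treat the mapping as a prompt rather than a rule; methodologies vary, and the element kinds here are assumptions made for illustration.

```python
from enum import Enum

class Stride(Enum):
    SPOOFING = "Spoofing"
    TAMPERING = "Tampering"
    REPUDIATION = "Repudiation"
    INFORMATION_DISCLOSURE = "Information Disclosure"
    DENIAL_OF_SERVICE = "Denial of Service"
    ELEVATION_OF_PRIVILEGE = "Elevation of Privilege"

# One commonly cited STRIDE-per-element heuristic: which threat types are
# usually worth asking about for each kind of element. A prompt, not a rule.
STRIDE_PER_ELEMENT = {
    "external actor": {Stride.SPOOFING, Stride.REPUDIATION},
    "process": set(Stride),  # all six threat types typically apply
    "data flow": {Stride.TAMPERING, Stride.INFORMATION_DISCLOSURE,
                  Stride.DENIAL_OF_SERVICE},
    "data store": {Stride.TAMPERING, Stride.REPUDIATION,
                   Stride.INFORMATION_DISCLOSURE, Stride.DENIAL_OF_SERVICE},
}

for kind, threats in STRIDE_PER_ELEMENT.items():
    print(kind, "->", sorted(t.value for t in threats))
```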
In addition to the taxonomic structuring provided by STRIDE, having a standard format for capturing threat scenarios enables easier analysis. This format brings together the elements from the systems model, where we have identified assets and information flows; the STRIDE method for identifying threat types; and the identification of potential categories of threat actors who might have the intent and means to create consequences. Threat actors can range from insider threats to nation-state actors and advanced persistent threats. The following template shows each of these elements in this standard format and contains all of the essential details of a threat scenario (a short code sketch of this structure follows the table below).
An [ACTOR] performs an [ACTION] to [ATTACK] an [ASSET] to achieve an [EFFECT] and/or [OBJECTIVE].
| Template element | Definition |
| --- | --- |
| ACTOR | The person or group that is behind the threat scenario |
| ACTION | A potential occurrence of an event that might damage an asset or a goal of a strategic vision |
| ATTACK | An action taken that uses one or more vulnerabilities to realize a threat to compromise or damage an asset or circumvent a strategic goal |
| ASSET | A resource, person, or process that has value |
| EFFECT | The desired or undesired consequence |
| OBJECTIVE | The threat actor's motivation or objective for conducting the attack |
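A minimal sketch of this template as a structured record is shown below. The field names mirror the table; the example instance is hypothetical and simply restates the monitoring scenario used later in this post.

```python
from dataclasses import dataclass

@dataclass
class ThreatScenario:
    """Structured form of the template; each field matches a row in the table above."""
    actor: str
    action: str
    attack: str
    asset: str
    effect: str
    objective: str

    def sentence(self) -> str:
        # Render the scenario in the template's standard sentence form.
        return (f"A {self.actor} performs {self.action} to {self.attack} "
                f"the {self.asset} to achieve {self.effect} and/or {self.objective}.")

# Hypothetical instance restating the monitoring example used later in this post.
example = ThreatScenario(
    actor="threat actor",
    action="account spoofing",
    attack="inject falsified data into",
    asset="monitoring system",
    effect="disrupted operations",
    objective="masking of a broader attack",
)
print(example.sentence())
```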
With formatted threat scenarios in hand, we can start to integrate the elements of the scenarios into our system model. In this model, the threat actor elements describe the actors involved in a threat scenario, and the threat element describes the threat scenario, objective, and effect. From these two elements, we can, within the model, create relations to the specific elements affected or otherwise related to the threat scenario. Figure 1 shows how the different threat modeling pieces interact with portions of the UAF framework.

For the diagram elements highlighted in red, our team has extended the standard UAF with new elements (<<Attack>>, <<Threat>>, <<Threat Actor>>, and <<Security Requirement>> blocks) as well as new relationships between them (<<Causes>>, <<Realizes Attack>>, and <<Compromises>>). These additions capture the effects of a threat scenario in our model. Capturing these scenarios helps answer the question, What can go wrong?
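The sketch below is a plain-Python analogue of these profile extensions, not the actual UAF/SysML profile: each class stands in for one of the new block types, and the list-valued attributes approximate the new relationships. Names and relationship directions are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatActor:
    name: str

@dataclass
class Attack:
    name: str  # e.g., a CAPEC entry, introduced later in this post

@dataclass
class SecurityRequirement:
    text: str

@dataclass
class Threat:
    description: str
    actor: ThreatActor
    realized_by: List[Attack] = field(default_factory=list)       # <<Realizes Attack>>
    compromises: List[str] = field(default_factory=list)          # <<Compromises>> (asset names)
    causes: List[str] = field(default_factory=list)               # <<Causes>> (effects)
    mitigated_by: List[SecurityRequirement] = field(default_factory=list)

# Minimal usage: a threat traced to an actor, an asset, and an effect.
threat = Threat(
    description="Falsified data is injected into the monitoring system",
    actor=ThreatActor("insider threat"),
    compromises=["Monitoring Service"],
    causes=["disrupted operations"],
)
print(threat.actor.name, "->", threat.compromises, "->", threat.causes)
```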
Here I provide an example of how to apply this profile. First, we need to define part of a system we want to build and some of the components and their interactions. If we are building a software system that requires a monitoring and logging capability, there could be a threat of disruption of that monitoring and logging service. An example threat scenario written in the style of our template would be, A threat actor spoofs a legitimate account (user or service) and injects falsified data into the monitoring system to disrupt operations, create a diversion, or mask the attack. This is a good start. Next, we can incorporate the elements from this scenario into the model. Represented in a security taxonomy diagram, this threat scenario would resemble Figure 2 below.

What is important to note here is that the threat scenario a threat modeler creates drives mitigation strategies that place requirements on the system to implement these mitigations. This is, again, the goal of threat modeling. However, these mitigation strategies and requirements ultimately constrain the system design and could impose additional costs. A primary benefit to identifying threats early in system development is a reduction in cost; however, the true cost of mitigating a threat scenario will never be zero. There is always some trade-off. Given this cost of mitigating threats, it is vitally important that threat scenarios be grounded in truth. Ideally, observed TTPs should drive the threat scenarios and mitigation strategies.
Introduction to CAPEC
MITRE’s Common Attack Pattern Enumeration and Classification (CAPEC) project aims to create just such a list of attack patterns. These attack patterns, at varying levels of abstraction, allow an easy mapping from threat scenarios for a specific system to known attack patterns that exploit known weaknesses. For each of the entries in the CAPEC list, we can create <<Attack>> elements from the extended UAF viewpoint shown in Figure 1. This provides many benefits, including refining the scenarios initially generated, helping decompose high-level scenarios, and, most crucially, creating the tie to known attacks.
In the Figure 2 example scenario, at least three different entries could apply to the scenario as written: CAPEC-6: Argument Injection, CAPEC-594: Traffic Injection, and CAPEC-194: Fake the Source of Data. This relationship is shown in Figure 3.
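Recording that trace in code might look like the following sketch, which uses the CAPEC IDs and names given above and a hypothetical Threat/Attack structure; in the model itself this is the <<Realizes Attack>> relationship.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Attack:
    capec_id: int
    name: str

@dataclass
class Threat:
    description: str
    realized_by: List[Attack] = field(default_factory=list)  # <<Realizes Attack>>

# The three CAPEC entries named above, traced to the example threat scenario.
threat = Threat(
    description=("A threat actor spoofs a legitimate account and injects falsified "
                 "data into the monitoring system to disrupt operations, create a "
                 "diversion, or mask the attack."),
    realized_by=[
        Attack(6, "Argument Injection"),
        Attack(594, "Traffic Injection"),
        Attack(194, "Fake the Source of Data"),
    ],
)

for attack in threat.realized_by:
    print(f"CAPEC-{attack.capec_id} ({attack.name}) realizes the threat")
```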

<<Attack>> blocks show how a scenario can be realized. By tracing the <<Threat>> block to <<Attack>> blocks, a threat modeler can provide some level of assurance that there are real patterns of attack that could be used to achieve the objective or effect laid out in the scenario. Using STRIDE as a basis for forming the threat scenarios helps to map to these CAPEC entries in the following way. CAPEC can be organized by mechanisms of attack (such as “Engage in Deceptive Interactions”) or by domains of attack (such as “Hardware” or “Supply Chain”). The former organization aids the threat modeler in the initial search for the correct entries to map threats to, based on the STRIDE categorization. This is not a one-to-one mapping, as there are semantic differences; in general, however, the following table shows the STRIDE threat type and the CAPEC mechanism of attack that is likely to correspond (a small code sketch of this mapping follows the table).
| STRIDE threat type | CAPEC mechanism of attack |
| --- | --- |
| Spoofing | Engage in Deceptive Interactions |
| Tampering | Manipulate Data Structures, Manipulate System Resources |
| Repudiation | Inject Unexpected Items |
| Information Disclosure | Collect and Analyze Information |
| Denial of Service | Abuse Existing Functionality |
| Elevation of Privilege | Subvert Access Control |
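Captured as data, the table above might look like the following sketch. The lookup is heuristic by design; as noted, the mapping is not one-to-one, and the function only suggests where to start searching CAPEC.

```python
# The STRIDE-to-CAPEC mechanism-of-attack mapping from the table above,
# captured as data. Heuristic only; the mapping is not one-to-one.
STRIDE_TO_CAPEC_MECHANISM = {
    "Spoofing": ["Engage in Deceptive Interactions"],
    "Tampering": ["Manipulate Data Structures", "Manipulate System Resources"],
    "Repudiation": ["Inject Unexpected Items"],
    "Information Disclosure": ["Collect and Analyze Information"],
    "Denial of Service": ["Abuse Existing Functionality"],
    "Elevation of Privilege": ["Subvert Access Control"],
}

def candidate_mechanisms(stride_type: str) -> list:
    """Suggest which CAPEC mechanisms of attack to search first for a STRIDE threat type."""
    return STRIDE_TO_CAPEC_MECHANISM.get(stride_type, [])

print(candidate_mechanisms("Tampering"))
```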
As previously noted, this is not a one-to-one mapping. For instance, the “Employ Probabilistic Techniques” and “Manipulate Timing and State” mechanisms of attack are not represented here. Furthermore, there are STRIDE threat types that span multiple mechanisms of attack. This is not surprising given that CAPEC is not oriented around STRIDE.
Identifying Threat Modeling Mitigation Strategies and the Importance of Abstraction Levels
As shown in Figure 2, once the affected assets, information flows, processes, and attacks have been identified, the next step in threat modeling is to identify mitigation strategies. We also show how the original threat scenario could be mapped to different attacks at different levels of abstraction and why standardizing on a single abstraction level provides benefits.
When dealing with specific issues, it is easy to be specific in applying mitigations. Consider, for example, a laptop running macOS 15. The Apple macOS 15 STIG Manual states that, “The macOS system must limit SSHD to FIPS-compliant connections.” Furthermore, the manual says, “Operating systems using encryption must use FIPS-validated mechanisms for authenticating to cryptographic modules.” The manual then details test procedures to verify this on a system and the exact commands to run to fix the issue if the check fails. This is a very specific example of a system that is already built and deployed. The level of abstraction is very low, and all data flows and data stores down to the bit level are defined for SSHD on macOS 15. Threat modelers do not have that level of detail at early stages of the system development lifecycle.
Specific issues also are not always known, even with a detailed design. Some software systems are small and easily replaceable or upgradable. In other contexts, such as major defense systems or satellite systems, the ability to update, upgrade, or change the implementation is limited or difficult. This is where working at a higher abstraction level and focusing on design elements and information flows can eliminate broader classes of threats than working with more detailed patches or configurations can.
To return to the example shown in Figure 2, at the current level of system definition it is known that there will be a monitoring solution to aggregate, store, and report on collected monitoring and feedback information. However, will this solution be a commercial offering, a home-grown solution, or a mix? What specific technologies will be used? At this point in the system design, these details are not known. However, that does not mean that the threat cannot be modeled at a high level of abstraction to help inform requirements for the eventual monitoring solution.
CAPEC consists of three different levels of abstraction regarding attack patterns: Meta, Standard, and Detailed. Meta attack patterns are high level and do not include specific technology. This level is a good fit for our example. Standard attack patterns do call out some specific technologies and techniques. Detailed attack patterns give the full view of how a specific technology is attacked with a specific technique. This level of attack pattern would be more common in a solution architecture.
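A small sketch of filtering candidate attack patterns by abstraction level follows. The two entries shown use the IDs named earlier in this post, and their level assignments are illustrative; consult capec.mitre.org for the authoritative abstraction level of any entry.

```python
from dataclasses import dataclass
from enum import Enum

class Abstraction(Enum):
    META = "Meta"
    STANDARD = "Standard"
    DETAILED = "Detailed"

@dataclass
class AttackPattern:
    capec_id: int
    name: str
    level: Abstraction

def patterns_for_level(patterns, level: Abstraction):
    """Keep only the attack patterns at the abstraction level that fits the current design stage."""
    return [p for p in patterns if p.level is level]

# Illustrative entries and level assignments; check capec.mitre.org for the
# authoritative abstraction level of any given entry.
catalog = [
    AttackPattern(194, "Fake the Source of Data", Abstraction.META),
    AttackPattern(6, "Argument Injection", Abstraction.STANDARD),
]
print([p.name for p in patterns_for_level(catalog, Abstraction.META)])
```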
To identify mitigation strategies, we must first ensure our scenarios are normalized to some level of abstraction. The example scenario from above has issues in this regard. First, the scenario is compound: the threat actor has three different objectives (i.e., disrupt operations, create a diversion, and mask the attack). When attempting to trace mitigation strategies or requirements to this scenario, it may be difficult to see the clear linkage. The type of account may also affect the mitigations: it may be a requirement that a standard user account not be able to access log data, whereas a service account may be permitted such access to perform maintenance tasks. These complexities caused by the compound scenario are also illustrated by the tracing of the scenario to multiple CAPEC entries. These attacks represent unique sets of weaknesses, and all require different mitigation strategies.
To decompose the scenario, we can first split out the different types of accounts and then split on the different objectives. A full decomposition of these factors is shown in Figure 4.

This decomposition reflects the fact that different objectives are generally achieved through different means. If a threat actor simply wants to create a diversion, the attack can be noisy, ideally (from the attacker's perspective) setting off alarms or issues that the system's operators will have to deal with. If instead the objective is to mask an attack, the attacker may have to deploy quieter tactics when injecting data.
Figure 4 is not the only way to decompose the scenarios. The original scenario may be split into two based on the spoofing attack and the data injection attack (the latter falling into the tampering category under STRIDE). In the first scenario, a threat actor spoofs a legitimate account (CAPEC-194: Fake the Source of Data) to move laterally through the network. In the second scenario, a threat actor performs an argument injection (CAPEC-6: Argument Injection) into the monitoring system to disrupt operations.
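One way to mechanize this decomposition is to split along the two factors identified above, account type and objective, so that each sub-scenario has a single objective. The sketch below does exactly that; the wording of the generated sub-scenarios is illustrative.

```python
from itertools import product

# Split the compound scenario along the two factors identified in the text:
# account type and attacker objective. Each combination becomes a
# single-objective sub-scenario that is easier to trace to one attack pattern
# and one set of mitigations.
account_types = ["user account", "service account"]
objectives = ["disrupt operations", "create a diversion", "mask the attack"]

sub_scenarios = [
    (f"A threat actor spoofs a legitimate {account} and injects falsified data "
     f"into the monitoring system to {objective}.")
    for account, objective in product(account_types, objectives)
]

for scenario in sub_scenarios:
    print("-", scenario)
```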
Given the breakdown of our original scenario into much more narrowly scoped sub-scenarios, we can now map each sub-scenario to at least one standard-level attack pattern, which gives engineers more detail for building mitigations for the threats into the system.
Now that we have the threat scenario broken down into more specific scenarios with a single objective, we can be more specific with our mapping of attacks to threat scenarios and mitigation strategies.
As noted previously, mitigation strategies, at a minimum, constrain design and, in most cases, can drive costs. Consequently, mitigations should be targeted to the specific components that will face a given threat. This is why decomposing threat scenarios is important. With an exact mapping between threat scenarios and proven attack patterns, one can either extract mitigation strategies directly from the attack pattern entries or focus on generating one’s own mitigation strategies for a minimally complete set of patterns.
Argument injection is an excellent example of an attack pattern in CAPEC that includes potential mitigations. This attack pattern includes two design mitigations and one implementation-specific mitigation. When threat modeling on a high level of abstraction, the design-focused mitigations will generally be more relevant to designers and architects.

Figure 5 shows how the two design mitigations trace to the threat that is realized by an attack. In this case the attack pattern we are mapping to had mitigations linked and laid out plainly. However, this does not mean mitigation strategies are limited to what is in the database. A good system engineer will tailor the applied mitigations for a specific system, environment, and threat actors. It should be noted in the same vein that attack elements need not come from CAPEC. We use CAPEC because it is a standard; however, if there is an attack not captured or not captured at the right level of detail, one can create one’s own attack elements in the model.
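As a final sketch, the snippet below shows one way to carry an attack pattern's mitigations into the model and turn the design-level ones into candidate security requirements. The mitigation wording here is placeholder text, not quoted from the CAPEC-6 entry, and the requirement phrasing is an assumption made for illustration.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Mitigation:
    text: str
    design_level: bool  # True for design mitigations, False for implementation-specific ones

@dataclass
class AttackPattern:
    capec_id: int
    name: str
    mitigations: List[Mitigation] = field(default_factory=list)

def candidate_requirements(pattern: AttackPattern) -> List[str]:
    """Turn an attack pattern's design-level mitigations into candidate security requirements."""
    return [f"The system shall {m.text}." for m in pattern.mitigations if m.design_level]

# Placeholder mitigation text (hypothetical); the actual wording lives in the CAPEC-6 entry.
argument_injection = AttackPattern(6, "Argument Injection", [
    Mitigation("validate and constrain all externally supplied arguments", True),
    Mitigation("filter input at trust boundaries before it reaches the monitoring system", True),
    Mitigation("use a hardened parsing library in the implementation", False),
])

for requirement in candidate_requirements(argument_injection):
    print(requirement)
```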
Bringing Credibility to Threat Modeling
The overarching goal of threat modeling is to help defend a system from attack. To that end, the real product that a threat model should produce is mitigation strategies for threats to the system elements, activities, and information flows. Leveraging a mixture of MBSE, UAF, the STRIDE methodology, and CAPEC can accomplish this goal. Whether operating on a high-level abstract architecture or with a more detailed system design, this method is flexible to accommodate the amount of information on hand and to allow threat modeling and mitigation to take place as early in the system design lifecycle as possible. Furthermore, by relying on an industry-standard set of attack patterns, this method brings credibility to the threat modeling process. This is accomplished through the traceability from an asset to the threat scenario and the real-world observed patterns used by adversaries to carry out the attack.
Additional Resources
To learn more about our work in threat modeling, please visit https://insights.sei.cmu.edu/blog/tags/threat-modeling/.