
Using Quality Metrics and Security Methods to Predict Software Assurance

Carol Woody and Nancy Mead

To ensure software will function as intended and is free of vulnerabilities (aka software assurance), software engineers must consider security early in the lifecycle, when the system is being designed and architected. Recent research on vulnerabilities supports this claim: nearly half the weaknesses identified in the Common Weakness Enumeration (CWE) repository have been identified as design weaknesses. These weaknesses are introduced early in the lifecycle and cannot be patched away in later phases. They result from poor (or incomplete) security requirements, system designs, and architecture choices for which security has not been given appropriate priority. Effective use of metrics, and of methods that apply systematic consideration of security risk, can highlight gaps earlier in the lifecycle, before the impact is felt and while the cost of addressing those gaps is lower. This blog post explores the connection between measurement, methods for software assurance, and security. Specifically, we focus on three early lifecycle methods that have shown promise: the Software Assurance Framework (SAF), the Security Quality Requirements Engineering (SQUARE) Methodology, and the Security Engineering Risk Analysis (SERA) Framework.

Foundations of Our Work: Security Vulnerabilities and Quality Defects

Security is a key aspect of establishing software assurance. To go a step further, measuring software assurance involves assembling carefully chosen metrics that demonstrate a range of behaviors to establish confidence that the product functions as intended and is free of vulnerabilities. In 2014, I worked with a team of CERT Division researchers who established a connection between security vulnerabilities and quality defects, and we concluded that between one percent and five percent of defects should be considered vulnerabilities.

On average, code developed in the United States has 0.75 defects per function point, or 6,000 defects per million lines of code (MLOC), for a high-level language such as Java or C#. Very good levels would be 600 to 1,000 defects per MLOC, and exceptional levels would be below 600 defects per MLOC. Some examples of what these metrics mean in practice include the following (the sketch after the list shows the underlying arithmetic):

  • The Boeing 787 Dreamliner has 14 MLOC. If we assume all of it is exceptional code, roughly 8,400 defects remain in the code, along with approximately 420 vulnerabilities. More likely, though, the code is average to very good, which could mean up to 84,000 defects and 4,200 vulnerabilities.
  • The F-22 has 1.7 MLOC, with a defect range of 1,020 to 10,200 and a vulnerability range of 51 to 510.
  • The F-35 Lightning II has 24 MLOC, which works out to 14,400 to 144,000 defects and 720 to 7,200 vulnerabilities.
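
As a rough illustration of the arithmetic behind these estimates, the sketch below multiplies code size by the defect-density levels quoted above and applies the one-to-five-percent vulnerability ratio. The densities and the ratio come from this post; the function and variable names are ours.

```python
# Rough sketch of the defect/vulnerability arithmetic described above.
# Density figures (defects per MLOC) and the 1-5% vulnerability ratio come
# from the post; names and structure are illustrative only.

DEFECT_DENSITY = {          # defects per million lines of code (MLOC)
    "average": 6000,
    "very_good": 1000,      # upper end of the 600-1,000 "very good" range
    "exceptional": 600,     # upper bound of the "exceptional" range
}
VULN_RATIO = (0.01, 0.05)   # 1% to 5% of defects are vulnerabilities

def estimate(mloc: float, quality: str) -> tuple[int, int, int]:
    """Return (defects, min_vulnerabilities, max_vulnerabilities) for a code size and quality level."""
    defects = int(mloc * DEFECT_DENSITY[quality])
    return defects, int(defects * VULN_RATIO[0]), int(defects * VULN_RATIO[1])

for name, mloc in [("Boeing 787", 14), ("F-22", 1.7), ("F-35", 24)]:
    best_defects, _, best_vulns = estimate(mloc, "exceptional")
    worst_defects, _, worst_vulns = estimate(mloc, "average")
    print(f"{name}: {best_defects:,}-{worst_defects:,} defects, "
          f"up to {best_vulns:,}-{worst_vulns:,} vulnerabilities")
```

Running the estimate at both the exceptional and average densities reproduces the defect and vulnerability ranges quoted in the list.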

In reality, because all these products are composed of code from a wide range of components, some of the delivered code may actually be of poor quality, which would make our defect and vulnerability estimates low. As software takes over more functionality, the realities of code quality must be considered.

The SEI has collected detailed size, defect, and process data for more than 300 software development projects spanning a wide range of application domains and project sizes. Data from five high-quality development projects demonstrated excellent security- and safety-critical operational results, as well as improved reliability (all of which can be considered aspects of software assurance). These projects collected metrics that rigorously demonstrated adherence to requirements and used security methods at each step in the lifecycle to monitor and predict product results.

First, we provide a brief overview of SAF, SQUARE, and SERA. SAF is ideal for detecting design-related issues that can lead to vulnerabilities because it encompasses the early phases of the software development lifecycle from initial planning activities through deployment.

Identifying Gaps by Baselining the Current Lifecycle: The Software Assurance Framework

CERT's Software Assurance Framework (SAF) outlines effective practices for acquiring and engineering software-reliant systems. By comparing an organization's current software lifecycle practices to those in the framework, gaps can be identified and prioritized for planned improvement. SAF can be leveraged to identify opportunities for improvement in an organization's software development lifecycle, from initial planning activities through deployment. These practices fall into four major areas:

  • process management
  • engineering
  • project management
  • support

Mapping the framework to current practice provides a way to model aspects of the organization's assurance ecosystem and to examine the gaps, barriers, and incentives that affect how assurance solutions are formed, adopted, and used. At each point where a practice from SAF is inserted into the lifecycle, metrics to monitor the effectiveness of that practice should be collected.
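
As a loose sketch of what such a gap analysis might look like in practice, the example below rates current and target adoption for individual practices in the four SAF areas, flags the largest gaps for improvement planning, and attaches a metric to each practice. The four practice areas come from the framework; the rating scale, fields, practices, and metrics shown here are hypothetical.

```python
# Hypothetical sketch of a SAF-style gap analysis; the four practice areas
# come from the framework, but the rating scale and fields are illustrative.
from dataclasses import dataclass

@dataclass
class PracticeAssessment:
    area: str            # one of the four SAF practice areas
    practice: str        # specific practice being assessed
    current: int         # observed maturity, 0 (absent) to 3 (institutionalized)
    target: int          # desired maturity for this lifecycle
    metric: str          # measure collected to monitor the practice's effectiveness

    @property
    def gap(self) -> int:
        return max(self.target - self.current, 0)

assessments = [
    PracticeAssessment("engineering", "security requirements review", 1, 3,
                       "percentage of requirements with abuse/misuse analysis"),
    PracticeAssessment("project management", "security risk tracking", 0, 2,
                       "open security risks per milestone"),
    PracticeAssessment("support", "secure build/configuration management", 2, 3,
                       "builds failing security configuration checks"),
]

# Prioritize the largest gaps for planned improvement.
for a in sorted(assessments, key=lambda a: a.gap, reverse=True):
    print(f"[gap {a.gap}] {a.area}: {a.practice} -> track '{a.metric}'")
```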

Addressing Gaps in Threat Analysis: SERA

The Security Engineering Risk Analysis (SERA) Framework is a security risk-analysis approach developed by CERT. SERA reduces operational security risk by proactively designing security controls into software-reliant systems (i.e., building security in up front) rather than trying to incorporate them into a system once it has been deployed. SERA focuses on minimizing design weaknesses by linking two important technical perspectives: (1) system and software engineering and (2) operational security.

The SERA Framework comprises the following four tasks:

  • Task 1: Establish operational context--uses modeling techniques to define the operational context for the analysis
  • Task 2: Identify security risk--transforms security concerns into distinct, tangible security risk scenarios that can be described, analyzed, and measured
  • Task 3: Analyze security risk--evaluates each security risk scenario in relation to predefined criteria to determine its probability, impact, and risk exposure for prioritization
  • Task 4: Develop control plan--maps security controls to high priority security risks to define ways to recognize, resist, and recover from undesirable impacts; assembles a matrix of recommended controls across multiple risk scenarios to identify high value controls

By applying SERA, an organization can structure security risk scenarios for a product or component, which assists software engineers and architects in evaluating security requirements for completeness and effectiveness. SERA therefore provides support for appropriately prioritizing security along with other features and functions. It also enables management and monitoring of security across the lifecycle using standard quality measurements.
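
To make Tasks 2 through 4 a bit more concrete, the sketch below captures a security risk scenario with probability and impact scores, computes a simple risk exposure for prioritization, and maps candidate controls to the highest-exposure scenarios. SERA defines the tasks, not this data structure; the scoring scale, fields, scenarios, and control names are hypothetical.

```python
# Hypothetical sketch of prioritizing SERA-style risk scenarios.
# SERA defines the tasks (identify, analyze, plan controls); the scoring
# scale, fields, and controls below are illustrative, not part of the framework.
from dataclasses import dataclass, field

@dataclass
class RiskScenario:
    name: str
    probability: int            # 1 (remote) to 5 (almost certain) -- assumed scale
    impact: int                 # 1 (negligible) to 5 (severe) -- assumed scale
    candidate_controls: list[str] = field(default_factory=list)

    @property
    def exposure(self) -> int:
        # Simple exposure score used only for ranking scenarios.
        return self.probability * self.impact

scenarios = [
    RiskScenario("Tampered telemetry feed accepted as valid", 3, 5,
                 ["message authentication", "input validation at ingest"]),
    RiskScenario("Privileged maintenance interface left exposed", 2, 4,
                 ["network segmentation", "role-based access control"]),
]

# Tasks 3 and 4: rank by exposure and assemble a control plan for the top risks.
for s in sorted(scenarios, key=lambda s: s.exposure, reverse=True):
    print(f"exposure={s.exposure:2d}  {s.name}: controls -> {', '.join(s.candidate_controls)}")
```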

Addressing Gaps in Requirements: Security Quality Requirements Engineering Methodology

Requirements engineering is a vital component in successful project development because it determines user expectations for new or modified products. In practice, however, requirements engineering often does not include sufficient attention to security concerns. Studies show that up-front attention to security can save the economy billions of dollars, yet security concerns are often treated as an afterthought to functional requirements.

In 2005, Nancy Mead and her colleagues introduced the Security Quality Requirements Engineering (SQUARE) Methodology, a series of nine steps that generate a final deliverable of categorized and prioritized security requirements. The SQUARE methodology provides a process and a structure for eliciting, categorizing, and prioritizing security requirements for systems and components.

SQUARE ensures that security requirements are

  • defined as system and software quality attributes
  • structured to integrate with the system's functional requirements
  • consistent with system and software requirements engineering practices

SQUARE thus helps stakeholders with varying levels of security expertise focus not on what a system will do, but on how a system could break and what a system should not do, which is critical to the effective development of security requirements.
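
Although SQUARE's output is a requirements document rather than code, the sketch below shows one way its final deliverable, categorized and prioritized security requirements, might be captured so that it can be traced and measured alongside functional requirements. The nine-step process and the deliverable are SQUARE's; the record fields, categories, and priority scheme are our own illustration.

```python
# Hypothetical record format for SQUARE's final deliverable: categorized,
# prioritized security requirements. Field names, categories, and the
# priority scheme are illustrative, not defined by SQUARE.
from dataclasses import dataclass

@dataclass
class SecurityRequirement:
    req_id: str
    statement: str          # quality-attribute form: what the system shall prevent or resist
    category: str           # e.g., confidentiality, integrity, availability (assumed categories)
    priority: int           # 1 = highest, assigned during SQUARE's prioritization step
    traces_to: list[str]    # functional requirements this requirement constrains

requirements = [
    SecurityRequirement("SR-01", "The system shall authenticate all operators before "
                        "granting access to mission data.", "confidentiality", 1, ["FR-12"]),
    SecurityRequirement("SR-02", "The system shall detect and log unauthorized changes "
                        "to configuration files.", "integrity", 2, ["FR-07", "FR-19"]),
]

for r in sorted(requirements, key=lambda r: r.priority):
    print(f"P{r.priority} [{r.category}] {r.req_id} -> traces to {', '.join(r.traces_to)}")
```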

Applying the Metrics Throughout the Lifecycle: Linking Security and Quality

If security requirements are well specified, good quality practices will ensure that these specified results are properly implemented. In particular, quality design reviews will identify missing requirements if appropriate security results are considered in the development of requirements and if requirements are effectively translated into detailed designs and code specifications that support the required security results. Likewise, effective code checking will identify improper implementations of specifications. In general, metrics can be used to drive lifecycle activities toward the desired outcome: SAF integrates security practices into the lifecycle, SERA characterizes risk scenarios and prioritizes security controls, and SQUARE ensures that security requirements are well formed and integrate with the other functional characteristics of the system.

Measuring software assurance involves assembling carefully chosen metrics that establish confidence that the product functions as intended (meets its requirements) and is free of vulnerabilities (has high quality). Measures that provide assurance must, therefore, show how security is addressed within requirements, design, construction, and test. This strategy is similar to COCOMO, which focuses on cost and schedule to determine whether a project can meet its targets based on available time and resources, but we broaden the focus to include security-related aspects, using security requirements as the driver. As with COCOMO, this approach relies on data that should already be collected, which avoids the risk of having no data to work with.
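
As background on the analogy, basic COCOMO predicts effort from measured size using Effort = a * KLOC^b (in person-months). The sketch below shows that calculation with the commonly cited organic-mode coefficients; it is included only to illustrate the kind of prediction-from-collected-data that this post broadens to cover security.

```python
# Background on the COCOMO analogy: basic COCOMO predicts effort from
# measured size (Effort = a * KLOC^b, in person-months). Organic-mode
# coefficients shown; the post broadens this idea to security-driven measures.

def basic_cocomo_effort(kloc: float, a: float = 2.4, b: float = 1.05) -> float:
    """Estimated effort in person-months for an 'organic' project."""
    return a * kloc ** b

print(f"{basic_cocomo_effort(100):.0f} person-months for 100 KLOC")  # ~302
```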

Metrics must evaluate the expected software assurance while a project is under way so that problems can be identified and adjustments made to improve the end result. Developing effective security requirements (SQUARE) based on identified security risks (SERA) and linking them to the lifecycle through security practices (SAF) provides the knowledge needed to interpret metrics and drive appropriate assurance. For example, when considering risk in a system, the perception of risk drives assurance decisions. As detailed in Principles and Measurement Models for Software Assurance, organizations without effective software assurance can incorrectly perceive risk when they do not understand their threats and impacts.

Effective software assurance requires that risk knowledge be shared among all stakeholders and technology participants; however, too frequently, risk information is considered highly sensitive and is not shared, resulting in poor choices made by uninformed organizations. Key risks to measure and manage during development include the following:

  • variance from development cost estimates, which puts scope or schedule at risk
  • undiscovered defects injected during implementation, which lead to escaping vulnerabilities
  • failure in requirements or design to anticipate and mitigate various sorts of threat

Key measures used in our five high-quality projects included the following (the sketch after the list shows how a few of these might be computed):

  • Incoming/week - the volume of new work and changes to be addressed in a time period
  • Triage rate - amount of time needed to evaluate each incoming item
  • Percentage closed - work addressed in a time period
  • Development work for cycle - work assigned to a time period
  • Software change requests per developer per week - production accuracy measure
  • Number of developers - number of available resources for a time period
  • Software change requests per verifier and validator per week - volume of work planned to be verified and validated in a time period
  • Number of verifiers - number of resources handling verification for a time period
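
The sketch below illustrates how a few of these measures might be computed from weekly project counts. The measure names come from the list above; the snapshot structure, field names, and numbers are hypothetical.

```python
# Hypothetical weekly snapshot used to compute a few of the measures above.
# The measure names come from the post; fields and numbers are illustrative.
from dataclasses import dataclass

@dataclass
class WeeklySnapshot:
    incoming: int          # new work items and changes received this week
    closed: int            # items completed this week
    change_requests: int   # software change requests worked this week
    developers: int        # available development resources
    verifiers: int         # resources handling verification

week = WeeklySnapshot(incoming=42, closed=35, change_requests=28, developers=7, verifiers=3)

percentage_closed = 100.0 * week.closed / week.incoming
scr_per_developer = week.change_requests / week.developers
scr_per_verifier = week.change_requests / week.verifiers

print(f"Percentage closed: {percentage_closed:.0f}%")
print(f"Change requests per developer per week: {scr_per_developer:.1f}")
print(f"Change requests per verifier per week: {scr_per_verifier:.1f}")
```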

Wrapping Up

To ensure more secure systems, acquisition professionals must measure the effectiveness of a developer's security metrics, strategies, and enforcement policies. These conformance measures should provide

  • evidence that a developer applied risk management across the full development lifecycle as evidenced by quality evaluation processes
  • evidence that a developer's software assurance metrics have a predictive role (e.g., the results of assurance risk measures have guided design decisions or led to revisions in development practices)
  • an acquisition with predictive measures of the effectiveness of a supplier's risk management practices
  • an acquisition with predictive measures of the likely assurance risks, the possible consequences of those risks, and the effectiveness and costs of associated mitigations. Such measures can support tradeoff decisions.

Additional Resources

Learn more about the Security Engineering Risk Analysis (SERA) Framework.

Learn more about the Security Quality Requirements Engineering (SQUARE) Methodology.

Read about our research in Predicting Software Assurance Using Quality and Reliability Measures.
