SEI Blog

Seven Principles for Software Assurance


By Nancy Mead
SEI Fellow
CERT Division

This blog post was co-authored by Carol Woody.

The exponential increase in cybercrime is a perfect example of how rapidly change is happening in cyberspace and why operational security is a critical need. In the 1990s, computer crime was usually nothing more than simple trespass. Twenty-five years later, computer crime has become a vast criminal enterprise with profits estimated at $1 trillion annually. One of the primary contributors to this astonishing success is the vulnerability of software to exploitation through defects.

How pervasive is the problem? According to Ponemon Institute and IBM data, the average cost of a data breach is $4 million, up 29 percent since 2013. Ponemon also concluded that there is a 26 percent probability that an enterprise will be hit by one or more data breaches of 10,000 records over the next two years. Increased system complexity, pervasive interconnectivity, and widely distributed access have increased the challenges of building and acquiring operationally secure capabilities.

This blog post introduces a set of seven principles that address the challenges of acquiring, building, deploying, and sustaining software systems to achieve a desired level of confidence for software assurance.

Lifecycle Assurance

The accelerating pace of attacks and the apparent tendency toward more vulnerabilities suggest that the gap between attacks and data protection is widening as our ability to deal with attacks diminishes. Much of the information protection in place today is based on principles established by Saltzer and Schroeder in "The Protection of Information in Computer Systems," which appeared in Communications of the ACM in 1974. They defined security as "techniques that control who may use or modify the computer or the information contained in it" and described three main categories of concern: confidentiality, integrity, and availability.

As security attacks expanded to include malware, viruses, structured query language (SQL) injections, cross-site scripting, and other mechanisms, those threats changed the structure of software and how it performs. Focusing just on information protection proved vastly insufficient. Moreover, the role of software in systems expanded such that software now controls the majority of functionality, making the impact of a security failure more critical. Those working with deployed systems refer to this enhanced security need as cybersecurity assurance, and those in the areas of acquisition and development typically reference software assurance.

It is usually more feasible to achieve an acceptable risk level (although what that risk level might be remains somewhat obscure) than to feel confident that software is free from vulnerabilities. But how do you know how many vulnerabilities actually remain? In practice, you might continue looking for errors, weaknesses, and vulnerabilities until diminishing returns make it apparent that further testing does not pay. It is not always obvious, however, when you reach that point. This ambiguity is especially the case when testing for cyber security vulnerabilities, because software is delivered into many different contexts and the variety of cyber attacks is virtually limitless.
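The point of diminishing returns described above can be made concrete with a small sketch. The discovery-rate threshold and the per-cycle defect counts below are illustrative assumptions, not figures from any standard; in practice the threshold would be set by the organization's risk tolerance:

```python
# Hypothetical sketch: deciding when further vulnerability testing yields
# diminishing returns, based on how fast new defects are still being found.
# The 10% rate threshold and the cycle data are illustrative assumptions.

def diminishing_returns(defects_per_cycle, min_rate=0.1):
    """Return the first test-cycle index at which the discovery rate
    (new defects relative to the cumulative total) falls below min_rate,
    or None if that point is never reached."""
    total = 0
    for i, found in enumerate(defects_per_cycle):
        total += found
        if total and found / total < min_rate:
            return i
    return None

cycles = [12, 7, 4, 2, 1, 1]  # defects found in successive test cycles
stop_at = diminishing_returns(cycles)  # first cycle where returns diminish
```

As the post notes, such a stopping rule is harder to trust for security testing than for functional testing: a low discovery rate in one deployment context says little about attacks crafted for another.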

We are increasingly seeing the integration and interoperation of security-critical and safety-critical systems. It therefore makes sense to come up with an overarching definition of software assurance that covers both security and safety. In some ways, the different approaches suggested by the existing definitions result from risks related to modern systems of systems.

Further challenges to effective operational security come from increased use of commercial off-the-shelf (COTS) and open-source software as components within a system. The resulting operational systems integrate software from many sources, and each piece of software is assembled as a discrete product.

Shepherding a software-intensive system through project development to deployment is just the beginning of the saga. Sustainment (maintaining a deployed system over time as technology and operational needs change) is a confusing and multi-faceted challenge: Each discrete piece of a software-intensive system is enhanced and repaired independently and reintegrated for operational use. As today's systems increasingly rely on COTS software, the issues surrounding sustainment grow more complex. Ignoring these issues can undermine the stability, security, and longevity of systems in production.

The myth linked to systems built using COTS products is that commercial products are mature and stable and adhere to well-recognized industry standards. The reality indicates more of a Rube Goldberg mix of "glue code" that links the pieces and parts into a working structure. Changing any one of the components (a common occurrence, since vendors provide security updates on their own schedules) can trigger a complete restructuring to return the pieces to a working whole. This same type of sustainment challenge for accommodating system updates appears for system components built to function as common services in an enterprise environment.

Systems cannot be constructed to eliminate security risk but must incorporate capabilities to recognize, resist, and recover from attacks. Initial acquisition and design must prepare the system for implementation and sustainment. As a result, assurance must be planned across the lifecycle to ensure effective operational security over time.

Here we use the following definition of software assurance developed to incorporate lifecycle assurance:

Application of technologies and processes to achieve a required level of confidence that software systems and services function in the intended manner, are free from accidental or intentional vulnerabilities, provide security capabilities appropriate to the threat environment, and recover from intrusions and failures.

In the context of this definition of software assurance, the remainder of this post will detail seven principles that will help security and software professionals create a comprehensive lifecycle process for system and software security. A more comprehensive lifecycle process allows organizations to incorporate widely accepted and well-defined assurance approaches into their own specific methods for ensuring the operational security of their software and system assets.

Seven Principles for Software Assurance

In 1974, Saltzer and Schroeder proposed a set of software design principles that focus on protection mechanisms to "guide the design and contribute to an implementation without security flaws." Students still learn these principles in today's classrooms, but these principles are no longer sufficient, as we will explain below.

These principles were developed at a time when systems operated largely within the confines of isolated mainframes. Time has shown the value and utility in these principles, but new challenges surfaced soon after Saltzer and Schroeder proposed them. With the growth of interoperability and the massive connectivity of the Internet, the impact of the context in which a system operates has grown in importance. The Morris worm generated a massive denial of service by infecting more than 6,000 UNIX machines on November 2, 1988. Systems must now contend with malware, viruses, and external attacks on a constant basis. Although these principles still apply to security within an individual piece of technology, they are no longer sufficient to address the complexity and sophistication of the environment within which that component must operate.

We propose a set of seven principles focused on addressing the challenges of acquiring, building, deploying, and sustaining systems to achieve a desired level of confidence for software assurance:

  1. Risk drives assurance decisions. A perception of risk drives assurance decisions. Organizations without effective software assurance perceive risks based on successful attacks to software and systems, and thus their response is reactive rather than proactive. They may implement assurance choices, such as policies, practices, tools, and restrictions, based on their perception of the threat of a similar attack and the expected impact if that threat is realized. Organizations can incorrectly perceive risk when they do not understand their threats and impacts. Effective software assurance requires organizations to share risk knowledge among all stakeholders and technology participants. Too frequently, organizations consider risk information highly sensitive and do not share it; protecting the information in this way results in uninformed organizations making poor risk choices.
  2. Risk concerns shall be aligned across all stakeholders and all interconnected technology elements. Highly connected systems, such as the Internet, require aligning risk across all stakeholders and all interconnected technology elements; otherwise, critical threats are missed or ignored at different points in the interactions. It is not sufficient to consider only highly critical components when everything is highly interconnected. Interactions occur at many technology levels (e.g., network, security appliances, architecture, applications, and data storage) and are supported by a wide range of roles. Protections can be applied at each of these points and may conflict if not well orchestrated. Due to interactions, effective assurance requires that all levels and roles consistently recognize and respond to risk.
  3. Dependencies shall not be trusted until proven trustworthy. Due to the widespread use of supply chains for software, assurance of an integrated product depends on other people's assurance decisions and the level of trust placed on these dependencies. The integrated software inherits all of the assurance limitations of each interacting component. In addition, unless specific restrictions and controls are in place, every operational component, including infrastructure, security software, and other applications, depends on the assurance of every other component. There is a risk each time an organization depends on others' assurance decisions. Organizations must therefore decide how much trust they place in dependencies based on a realistic assessment of the threats, impacts, and opportunities represented by various interactions. Dependencies are not static, and organizations must regularly review trust relationships to identify changes that warrant reconsideration. The following examples describe assurance losses resulting from dependencies:

    - Defects in standardized pieces of infrastructure (such as operating systems, development platforms, firewalls, and routers) can serve as widely available threat entry points for applications.

    - Using many standardized software tools to build technology establishes a dependency for the assurance of the resulting software product. Vulnerabilities can be introduced into software products by the tool builders.

  4. Attacks shall be expected. A broad community of attackers with growing technology capabilities is able to compromise the confidentiality, integrity, and availability of an organization's technology assets. There are no perfect protections against attacks, and the attacker profile is constantly changing. Attackers use technology, processes, standards, and practices to craft a compromise (known as a socio-technical response). Some attacks take advantage of the ways we normally use technology, and others create exceptional situations to circumvent defenses.
  5. Assurance requires effective coordination among all technology participants. Organizations must apply protection broadly across their people, processes, and technology because attackers take advantage of all possible entry points. Organizations must also clearly establish authority and responsibility for assurance at an appropriate level to ensure that members of the organizations effectively participate in software assurance. This principle assumes that all participants know about assurance, but that is not usually the case. Organizations must therefore educate people on software assurance.
  6. Assurance shall be well planned and dynamic. Assurance must represent a balance among governance, construction, and operation of software and systems and is highly sensitive to changes in each of these areas. Maintaining this balance requires an adaptive response to constant changes in applications, interconnections, operational usage, and threats. Since change is frequent, this is not a once-and-done activity. It must continue beyond the initial operational implementation through operational sustainment. This cannot be added later; the systems and software must be built to the level of acceptable assurance that organizations need. No one has resources to redesign systems every time the threats change.
  7. A means to measure and audit overall assurance should be built in. Organizations cannot manage what they do not measure, and stakeholders and technology users do not address assurance unless they are held accountable for it. Assurance does not compete successfully with other needs unless results are monitored and measured. All elements of the socio-technical environment, including practices, processes, and procedures, must be tied together to evaluate operational assurance. Organizations with more successful assurance measures react and recover faster. They learn from their reactive responses and those of others, and they are more vigilant in anticipating and detecting attacks. Defect density (defects per thousand lines of code) is a common development measure that may be useful for code quality, but it is not sufficient evidence of overall assurance because it provides no perspective on how that code behaves in an operational context. Organizations must take focused and systemic measures to ensure that the components are engineered with sound security and that the interaction among components establishes effective assurance.
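Principle 3 above (dependencies shall not be trusted until proven trustworthy) is what software composition analysis tools automate. The sketch below is a hand-rolled toy version under stated assumptions: the component names, versions, and advisory identifiers are all invented for illustration:

```python
# Illustrative sketch of principle 3: treat third-party components as
# untrusted until checked. Here, pinned component versions are compared
# against a hand-maintained list of known-vulnerable releases. Every
# name, version, and advisory identifier below is a made-up example.

KNOWN_VULNERABLE = {
    ("libparse", "1.2.0"): "example advisory A-0001",
    ("webfw", "3.1.4"): "example advisory A-0002",
}

def audit_dependencies(deps):
    """Return (name, version, advisory) for each component that matches
    a known-vulnerable release and so should not yet be trusted."""
    findings = []
    for name, version in deps.items():
        advisory = KNOWN_VULNERABLE.get((name, version))
        if advisory:
            findings.append((name, version, advisory))
    return findings

deps = {"libparse": "1.2.0", "webfw": "3.2.0", "crypto": "0.9.1"}
for name, version, advisory in audit_dependencies(deps):
    print(f"do not trust {name} {version}: {advisory}")
```

Because dependencies are not static, such an audit only has value if it is rerun whenever either the dependency set or the advisory list changes, which is why the principle calls for regularly reviewing trust relationships.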
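Principle 7's warning that defect density alone is insufficient can be illustrated with a minimal sketch that pairs it with operational measures. The field names and sample figures are assumptions chosen for illustration, not a prescribed metric set:

```python
# Sketch of principle 7: a development metric such as defect density says
# nothing about how code behaves in operation, so it is paired here with
# operational measures (detection and recovery times). All names and
# numbers are illustrative assumptions.

def defect_density(defects, lines_of_code):
    """Defects per thousand lines of code (KLOC)."""
    return defects / (lines_of_code / 1000)

def assurance_snapshot(defects, lines_of_code, detect_hours, recover_hours):
    """Combine a development measure with operational incident measures."""
    return {
        "defects_per_kloc": round(defect_density(defects, lines_of_code), 2),
        "mean_time_to_detect_h": sum(detect_hours) / len(detect_hours),
        "mean_time_to_recover_h": sum(recover_hours) / len(recover_hours),
    }

snapshot = assurance_snapshot(
    defects=48, lines_of_code=120_000,
    detect_hours=[2.0, 6.0, 4.0], recover_hours=[8.0, 12.0, 10.0],
)
```

A dashboard like this reflects the principle's claim that organizations with better assurance measures react and recover faster: the operational columns, not the code-quality column, are where that shows up.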

In our soon-to-be-published book, Cyber Security Engineering, we demonstrate how to apply these seven core principles of software assurance to four key areas of cyber security engineering:

  • security and software assurance engineering
  • security and software assurance management
  • security and software assurance measurement and analysis
  • software assurance education and competencies

Wrapping Up and Looking Ahead

Effective cyber security engineering requires the integration of security into the software acquisition and development lifecycle. For engineering to address security effectively, requirements that establish the target goal for security must be in place. Risk management must include identification of possible threats and vulnerabilities within the system, along with the ways to accept or address them. There will always be cyber security risk, but engineers, managers, and organizations must be able to plan for the ways in which a system should avoid as well as recognize, resist, and recover from an attack.

Mechanisms that ensure correctness and compliance can be excellent tools if they are applied by teams and individuals with cyber security expertise and linked to appropriate and complete cyber security requirements.

This blog entry has been adapted from chapter one of our forthcoming book Cyber Security Engineering: A Practical Approach for Systems and Software Assurance, which will be published in November 2016 by Pearson Education, InformIT as part of the SEI Series in Software Engineering.

Additional Resources

The forthcoming book Cyber Security Engineering: A Practical Approach for Systems and Software Assurance will be published in November 2016 as part of the SEI Series in Software Engineering.
