3 Activities for Making Software Secure by Design
Criminals and foreign state actors have increasingly targeted our personal data and critical infrastructure services. These attacks are enabled by vulnerabilities in software whose design and construction are inadequate for effective cybersecurity. Most software creators and vendors prioritize speed of release to capture customers quickly with new features and functions, then fall back on a never-ending cycle of post-release patches and “updates” to handle issues such as security. Meanwhile, our data, our homes, our economy, and our safety are increasingly left open to attack.
Automation and interconnection among software systems make software risks hard to isolate, increasing the value of each vulnerability to an attacker. Moreover, the sources of vulnerabilities are increasingly complex and widespread because of the ever-growing supply chain of software components within any product. Even after code originators are compelled to make a fix, it must trickle into the products that use their software before the repair becomes effective, a time-consuming and frequently incomplete process. Many vulnerabilities remain unrepaired, leaving risk exposure long after a fix is available. Users will not be aware of the risk unless they closely monitor their supply chains, but supply chain information is rarely available to them.
Commercial systems and software, including open source software, are becoming further interwoven into the systems that control and support our national defense, national security, and critical infrastructure. Their use and reuse reduce costs and speed delivery, but their growing vulnerabilities are especially dangerous in these high-risk domains.
To protect national security, critical infrastructure, and the way we live our lives, the software community must start producing software that is secure by design. To accomplish this shift, the creators, acquirers, and integrators of software and software systems need to change their mindset, education, training, and prioritization of software quality, reliability, and safety. In this blog post, we will look at some key secure-by-design principles, roadblocks, and accelerators.
A National Problem
In remarks at Carnegie Mellon University this February, Jen Easterly, director of the Cybersecurity and Infrastructure Security Agency (CISA), noted that frequent cyber attacks by criminals and adversary nations are a symptom of “dangerous-by-design” software. She said the responsibility for software safety should rest with developers and vendors, who should deliver software that is safe rather than expect users to protect themselves.
This idea underpins the White House’s 2023 National Cybersecurity Strategy, which calls for rebalancing the responsibility for cyberspace defense away from end users and toward “the owners and operators of the systems that hold our data and make our society function, as well as of the technology providers that build and service these systems.”
The highest levels of the U.S. government are now talking about software security, though many in high-risk areas, such as the Department of Defense and critical infrastructure, have long recognized the problem. It is the same issue we have been researching for decades in the CERT Division of the SEI. In our work with government and industry software developers and acquisition programs, we have advocated for software security to be incorporated earlier in—and throughout—the software development lifecycle.
Effective Security Requires Good Design Choices
Making software secure by design has an important role in mitigating this growing risk. Bolting security onto the end of software development does not work: it is costly and fragile, and at that point in the lifecycle it is too late to course-correct design vulnerabilities, create and apply supply chain corrections, and fix vulnerabilities in the tools used to build the system. Weaknesses introduced during design decisions carry significantly greater impact, risk, and cost to fix later in the lifecycle, once implementation reveals the system’s many dependencies. Addressing security issues late in the lifecycle usually requires shortcuts that are insufficient, and the risk is often not recognized until attackers are already exploiting the system. Software that is secure by design applies engineering approaches for security from start to finish, throughout the lifecycle, to produce a more robust, holistically secure system.
Security must become a design priority. Each element of functionality must be designed and built to provide effective security qualities. There is no one activity that will accomplish this goal. Secure by design largely means performing more security and assurance activities starting earlier and continuing more effectively throughout the product and system lifecycle.
Instead of waiting to address potential vulnerabilities until system testing or even after release, as we see today, engineers and developers must integrate security considerations into the requirements, design, and development activities. Experts on the ways software can be exploited must be part of the teams addressing these activities to identify attack opportunities early enough for mitigations to be included. Designers understand how to make systems work as intended. A different perspective is required, however, to understand how one can manipulate a system and its components (e.g., hardware, software, and firmware) in unexpected ways to allow attackers to access and change data that should be confidential and execute tasks that should be prohibited to them.
The cyber landscape is always changing, in part because the way we make software is, too. Demands for cheaper, quickly made new features and functions, coupled with gaps in availability of technology expertise to build systems, are driving many of these changes. Several facets of current system design increase the potential for operational security risk:
- Functionality shift from hardware to software. Though software now handles the great majority of computing functionality, many organizations designing and building systems today still do not account for the need to sustain, update, and upgrade software, in part because software does not break down the way hardware does.
- Interconnectedness of systems. Expanded use of cloud services and shared services, such as authentication and authorization, connect many systems not originally built for these connections. As a result, a vulnerability or defect in one system can threaten the whole. Organizations might ignore this risk if their focus does not extend beyond critical components.
- Automation. As organizations increasingly adopt approaches such as DevSecOps, reliance on automation in the software factory pipeline expands the layers of software that can impact operational code. Each of these layers contains vulnerabilities that can pose risks to the code under development and the resulting system.
- Supply chain dependencies. System functionality is increasingly handled by third-party components and services. Compromises to these components and their delivery mechanisms can have far-reaching impact across many systems and organizations. Designers must consider means to recognize, resist, and recover from these compromises (one simple recognition check is sketched after this list).
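To make the supply chain point concrete, the sketch below shows a minimal recognition check: auditing a pinned dependency list against known advisories. This is only a pattern, not a recommended implementation. The file name, advisory entries, and package names are hypothetical; a real pipeline would query a live vulnerability feed such as OSV or the NVD rather than a hard-coded set.

```python
# Minimal sketch: flag pinned dependencies that match known advisories.
# File name, advisory entries, and package names below are hypothetical.

KNOWN_BAD = {
    ("examplelib", "1.4.2"),
    ("parserkit", "0.9.0"),
}

def parse_lockfile(path: str) -> list[tuple[str, str]]:
    """Parse 'name==version' lines from a requirements-style lockfile."""
    pins = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "==" in line:
                name, version = line.split("==", 1)
                pins.append((name.strip().lower(), version.strip()))
    return pins

def audit(path: str) -> list[tuple[str, str]]:
    """Return every pinned dependency that matches a known advisory."""
    return [pin for pin in parse_lockfile(path) if pin in KNOWN_BAD]

if __name__ == "__main__":
    for name, version in audit("requirements.lock"):
        print(f"ADVISORY: {name}=={version} has a known vulnerability")
```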
There will always be some risk. Just as no system is defect free, no system can implement perfect security. In addition, tradeoffs among needed qualities such as security, safety, and performance will result in a solution that does not maximize any single quality. Risk considerations must be part of these choices. For example, when use of a third-party service raises the potential for attacker exposure, response time may need to be slightly slower to allow for added encryption and authorization steps. Inherited risk in a shared network could allow an attacker to compromise a safety-critical element, requiring added mitigations. Designers need to weigh these choices carefully to ensure cybersecurity is sufficient.
3 Activities for Making Software Secure by Design
Current efforts to build secure code and apply security controls for risk mitigation are useful, but not sufficient, to address the cybersecurity challenges of today’s technology. Decisions made in functional design and engineering can carry security risks, and the later security is considered, the greater the potential for costly mitigations, since redesign may be required. Too often, programs stop looking for defects once they run out of time to fix them, passing unknown residual risks on to users. Security experts should review system designs and mandate redesigns before granting approval to proceed with implementation. Developers need to identify and address vulnerabilities as they build and unit test their code, since delays increase the impact on cost and schedule.
Creators and vendors of technology need to integrate security risk management into their standard way of designing and engineering systems. Security risk must be considered for the range of technology assembled into the system: software, hardware, firmware, reused components, and services. Change is a constant for each system, so organizations must expand beyond verification of security controls for each system at the implementation, acceptance, and deployment phases. Instead, they must design and engineer each system for effective, ongoing monitoring and management of security risk to know when potential unacceptable risks arise. Security risk considerations must be integrated throughout the lifecycle processes, which takes effective planning, tooling, and monitoring and measuring.
Planning
A cybersecurity strategy and program protection plan should establish the constraints for designers and engineers to make risk-informed choices among competing qualities, technology options, service options, and so on. Too frequently we see security requirements (along with safety, performance, and other quality attributes) defined as meeting general standards and not specified for the actual system to be implemented. Just providing a list of system controls is grossly insufficient—the purpose for each control must be connected to the system design and implementation decisions to ensure changes in design and system use do not provide opportunities to bypass critical controls.
Organizations should start planning their cybersecurity strategy by answering basic questions to define the required extent of security.
- What would be unacceptable security risks to the mission and operations of the system? What potential impacts must be avoided, and what analysis is planned to ensure that security risks, as well as safety concerns, could not trigger such an impact?
- Is the system working with highly sensitive data that requires special protections? What analysis is planned to ensure that any access to that data, such as copying it to a laptop, maintains appropriate protections?
- What data management is planned to ensure that old data is purged? Managing data as an actual asset involves more than collecting, organizing, and storing it—it also requires knowing when to retain or dispose of it.
- What levels of trust are required for interaction among system components, other systems, and system users? What controls will be included to establish and enforce the levels of trust, and what analysis is planned to ensure controls cannot be bypassed at implementation and in the future?
- What misuse and abuse cases will the system be designed to handle? Who will identify them, and how will their sufficiency be confirmed? (One such case is expressed as an executable check in the sketch after this list.)
- What processes and practices will be in place for handling vulnerabilities, and how will prioritization ensure that critical risks are identified and addressed? What analysis and implementation gates are planned to ensure that unacceptable risk is not built into the system? Too frequently we see vulnerabilities identified but never addressed because their volume is overwhelming. What processes and practices will handle that volume effectively?
- What parameters for security risk will be included in how third-party capabilities are selected? What analyses will be in place to ensure planned criteria are met?
These considerations will help the organization benchmark security with the requirements for other qualities, such as performance, safety, maintainability, recoverability, and reliability.
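As one concrete illustration of this planning, a misuse or abuse case can be captured as an executable check that travels with the system. The sketch below is minimal and assumes hypothetical role names and a stand-in access-control function; it is not any specific system’s design, only a pattern for making an abuse case testable.

```python
# Minimal sketch: one misuse case ("a user without a permitted role reads
# sensitive records") expressed as an executable test. The access-control
# function and role names are hypothetical stand-ins for a real system.
import unittest

SENSITIVE_ROLES = {"analyst", "admin"}  # roles permitted to read sensitive data

def can_read_sensitive(user_roles: set[str]) -> bool:
    """Allow access only when the user holds a permitted role."""
    return bool(user_roles & SENSITIVE_ROLES)

class MisuseCases(unittest.TestCase):
    def test_unprivileged_user_is_denied(self):
        # Abuse case: an ordinary user attempts to read sensitive records.
        self.assertFalse(can_read_sensitive({"viewer"}))

    def test_no_roles_is_denied(self):
        # Abuse case: an unauthenticated or role-less session.
        self.assertFalse(can_read_sensitive(set()))

    def test_analyst_is_allowed(self):
        # Sanity check of intended use alongside the misuse cases.
        self.assertTrue(can_read_sensitive({"analyst"}))

if __name__ == "__main__":
    unittest.main()
```

Expressing the case as a test means the planned protection is re-verified automatically every time the system changes, rather than confirmed once at acceptance.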
Tooling
Modern software systems comprise an enormous web of interfaces and environments. The growth of software-reliant systems has exploded the volume of code that must be built, reused, and maintained, and that sheer volume requires automation at many levels. Automation can remove repetitive tasks from overloaded developers, testers, and verifiers and increase the consistency of performance across a wide range of activities. But automation can also hide poor processes and practices that are badly implemented or were never adjusted to keep up with changing system and vulnerability needs. The SolarWinds attack is an example of just such a situation. The automation tools themselves must be evaluated for security, adding another layer of complexity to this new dimension of risk.
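One hedge against this risk, illustrated below, is to verify the integrity of pipeline tools against pinned digests before every build, so that a silently modified tool is caught rather than trusted. This is a minimal sketch: the tool path and digest are hypothetical placeholders, and a real implementation would also protect the pinned values themselves (for example, by signing them).

```python
# Minimal sketch: verify pipeline tools against pinned SHA-256 digests before
# each build, so a tampered tool (as in a SolarWinds-style compromise) is
# caught rather than trusted. Paths and digests below are hypothetical.
import hashlib
import sys

PINNED_TOOLS = {
    # tool path -> expected SHA-256 of the approved binary
    "/opt/ci/bin/compilerd": "0" * 64,  # placeholder digest
}

def sha256_of(path: str) -> str:
    """Stream the file through SHA-256 to handle large binaries."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_tools() -> bool:
    """Return True only if every pinned tool matches its approved digest."""
    ok = True
    for path, expected in PINNED_TOOLS.items():
        actual = sha256_of(path)
        if actual != expected:
            print(f"TAMPER ALERT: {path} digest {actual} != pinned {expected}")
            ok = False
    return ok

if __name__ == "__main__":
    sys.exit(0 if verify_tools() else 1)
```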
Modern systems are too complex and dynamic to implement as a whole and remain untouched for any length of time. Agile and incremental development extends the coupling of the development environment with the operational environment of a system, increasing the system’s attack surface. Increased use of third-party tools and services further expands the attack surface into inherited environments that are out of the direct control of the system owners.
When selecting the tools for both the development and operational environments, organizations must account for the system risks as well as the expectations for scale. To develop proficiency with a tool, developers and testers require some level of training and hands-on time. Constantly changing tools can lead to gaps in security as problems go unrecognized in the churn of activity to shift environments.
Organizations should ask the following questions about tooling:
- What capabilities do the participants in my environment need, and what tools work best to meet those needs? Do the tools operate at the scale needed and at the security levels required to minimize system risk?
- What mitigation capabilities and approaches should be used to identify and manage vulnerabilities in the range of technologies and tools to be used in the system lifecycle?
- Do the selected vulnerability management tools cover the vulnerabilities expected in the technologies that put the system at risk? How will this selection be monitored over time to ensure continued effectiveness?
- What scale of tool usage can be expected, and have preparations been made for tool licenses and information handling to deal with this scale?
- For cost effectiveness, are tools used as close as possible to the point of vulnerability creation? Once identified, are the vulnerabilities prioritized, and is sufficient resource time provided to address removal or mitigation as appropriate?
- How will developers, testers, verifiers, and other tool users be trained to apply the tools correctly and effectively? Most lifecycle tools are not designed and built to be used effectively without some level of training.
- What prioritization mechanisms will be used for vulnerabilities, and how will they be applied consistently across the various tools, development pipelines, and operational environments in use? (A simple prioritization gate is sketched after this list.)
- What monitoring will be in place to ensure unacceptable risk is consistently addressed?
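The sketch below illustrates one simple prioritization gate, assuming findings carry CVSS base scores: order findings most-severe first for analyst attention, and fail the pipeline when any finding meets a design-time blocking threshold. The Finding fields, threshold, and sample data are hypothetical; in practice, scores would come from the organization’s scanning tools and policy.

```python
# Minimal sketch: a prioritization gate that orders findings by severity and
# fails the pipeline when unacceptable risk remains. Fields, threshold, and
# sample data are hypothetical placeholders.
import sys
from dataclasses import dataclass

@dataclass
class Finding:
    identifier: str   # e.g., a CVE or internal tracking ID
    cvss: float       # CVSS base score, 0.0-10.0
    component: str

BLOCK_AT = 9.0  # design-time policy: no critical finding may ship

def triage(findings: list[Finding]) -> list[Finding]:
    """Return findings ordered most-severe first for analyst attention."""
    return sorted(findings, key=lambda f: f.cvss, reverse=True)

def gate(findings: list[Finding]) -> bool:
    """True when no finding meets or exceeds the blocking threshold."""
    blockers = [f for f in findings if f.cvss >= BLOCK_AT]
    for f in blockers:
        print(f"BLOCKED: {f.identifier} ({f.cvss}) in {f.component}")
    return not blockers

if __name__ == "__main__":
    sample = [Finding("CVE-XXXX-0001", 9.8, "authlib"),  # hypothetical data
              Finding("CVE-XXXX-0002", 5.3, "logfmt")]
    for f in triage(sample):
        print(f"{f.cvss:>4} {f.identifier} {f.component}")
    sys.exit(0 if gate(sample) else 1)
```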
Many organizations segregate tool selection and management from the tool users so that developers and designers can focus on their creative tasks. However, poorly selected and poorly implemented tools can frustrate the very people most important to effective system development and maintenance. Even good tools can fall far short of expectations in the hands of poorly trained users. These situations can motivate the use of unapproved tools, libraries, and practices, which in turn increases security risk.
Monitoring and Measuring
Even the best planning and tooling will not guarantee success. Results must be compared to expectations to confirm that the preparation was appropriate. For example, are tests showing reductions in the vulnerabilities the tools were selected to identify? Systems, processes, and practices, in both the operational and development environments, must be designed and structured to be monitored, with an emphasis on security risk management throughout the lifecycle. Without planning for analysis and measurement of this feedback, the information that would signal potential security risk will, at best, be scattered across many logs and hidden in obscure error reports.
Operational performance considerations and desired release schedules have in the past motivated the removal of monitoring activities, eliminating visibility into abnormal behavior. Organizations must recognize that continuous review plays an important role in successful cybersecurity, and the capability to perform it must be prepared as part of secure by design. If security controls are not monitored for continued effectiveness, they can deteriorate over time as systems change and grow.
Risks accepted during development and inherited from third-party components and services cannot be ignored, since they can have operational impact when system conditions and use change. Preparation for monitoring and measuring these risks must begin at system design.
Security analysts and system designers must
- assemble information about possible security risks based on analysis of a system design
- identify potential measures that would indicate such risks
- identify ways the measures can be implemented effectively within the system design (one such measure is sketched after this list)
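As a minimal illustration of that third step, the sketch below implements one hypothetical design-time measure, authentication failures per minute, as a runtime monitor with a threshold agreed during design. The threshold value and the alert hook are assumptions; a real system would feed events from its logging pipeline and route alerts to its monitoring infrastructure.

```python
# Minimal sketch: a design-time measure ("auth failures per minute")
# implemented as a runtime monitor. Threshold and alert hook are hypothetical.
import time
from collections import deque

WINDOW_SECONDS = 60
THRESHOLD = 20  # agreed at design time as a signal of credential abuse

class AuthFailureMonitor:
    def __init__(self) -> None:
        self._events: deque[float] = deque()

    def record_failure(self, now: float | None = None) -> None:
        """Record one failed login and alert if the windowed rate is exceeded."""
        now = time.time() if now is None else now
        self._events.append(now)
        # Drop events that have aged out of the sliding window.
        while self._events and now - self._events[0] > WINDOW_SECONDS:
            self._events.popleft()
        if len(self._events) > THRESHOLD:
            self.alert(len(self._events))

    def alert(self, count: int) -> None:
        # Stand-in for paging or SIEM integration.
        print(f"SECURITY SIGNAL: {count} auth failures in {WINDOW_SECONDS}s")

if __name__ == "__main__":
    monitor = AuthFailureMonitor()
    start = time.time()
    for i in range(25):  # simulate a burst of failed logins
        monitor.record_failure(start + i)
```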
Current approaches to security analysis typically do not include this level of analysis and will need to be augmented. Designs that focus only on delivering the primary functionality without effective ongoing cybersecurity are insufficient for the operational realities of today.
Secure by Design Takes Training and Expertise
The role of security must expand beyond confirming that selected system controls are in place at implementation. Requirements must characterize how the system should function and how it should handle misuse and abuse situations. Those deciding to integrate legacy capabilities, as well as third-party tools, software, and services, must consider the potential vulnerabilities each of these brings into the system and what risks they represent. When creating new code, developers must use a development environment and practices that encourage timely vulnerability identification and removal.
Making systems and software secure by design demands change. Security is not an activity or a state, but continuous evolution. Those designing systems and software must integrate effective approaches for designing security into systems early and throughout the lifecycle. As system functionality and use change, security must be adjusted to accommodate the new risks brought on by new capabilities. Leadership must prioritize integrating effective security risk management across the lifecycle.
All these activities require an uncommon breadth of knowledge. The people performing the processes and practices must understand security risk management, know how to identify what is appropriate and inappropriate for their assigned activities, and understand the mechanisms that surface potential risks and provide mitigation capabilities for expected ones.
Recognition of a security risk starts with understanding what can go wrong in different parts of a system and how that can pose a risk to the whole. This skill set is not currently taught in much of technology education at any level. For example, we see many engineers focused only on hardware because they consider software a support capability for it. Their experience and training have not included the reliability and vulnerability challenges particular to software. Developing an understanding of security risks across all of a system’s technology will be essential to moving forward and addressing the critical need for secure by design.
Additional Resources
Read the SEI blog post Software Security in Rust.
Listen to the SEI podcasts Key Steps to Integrate Secure by Design into Acquisition and Development and Secure by Design, Secure by Default.
Read the SEI technical report Using Model-Based Systems Engineering (MBSE) to Assure a DevSecOps Pipeline is Sufficiently Secure.