
CERT/CC Comments on Standards and Guidelines to Enhance Software Supply Chain Security

My colleagues Art Manion, Eric Hatleback, Allen Householder, Laurie Tyzenhaus, and I had the opportunity to submit comments to the National Institute of Standards and Technology (NIST) in response to its Workshop and Call for Position Papers on Standards and Guidelines to Enhance Software Supply Chain Security. NIST is seeking positions related to executive order (EO) 14028, which was issued in May and aims to strengthen the country’s cybersecurity posture. It creates rather sweeping opportunities in policy, threat sharing, acquisition, operations, supply chain management, post-incident review, incident response, incident detection, vulnerability response, and vulnerability detection. In this specific workshop, NIST is seeking comment on its new authorities related to supply chain security (all contained in section 4 of the EO). Specifically, NIST requested positions in five areas:

  1. criteria for designating “critical software”
  2. initial list of secure software development lifecycle standards, best practices, and other guidelines acceptable for the development of software for purchase by the federal government
  3. guidelines outlining security measures that shall be applied to the federal government's use of critical software
  4. initial minimum requirements for testing software source code
  5. guidelines for software integrity chains and provenance

We submitted comments on each area. The comments are reproduced here.

1. Criteria for Designating Critical Software

Our comments on this area are in four parts.

1.1 Deployment Methods and Infrastructure

Federal systems are a combination of acquired software and services on diverse infrastructure including the following:

  • software acquired for deployment on government hardware (e.g., desktop operating systems)
  • software acquired for deployment on service provider hardware (e.g., hosted servers)
  • software used as part of a service purchased by the government (e.g., software-as-a-service)

Software security risks are posed by vulnerabilities rather than by the method of deployment. Therefore, policy for designating critical software should be consistent regardless of deployment method. More concretely, NIST should minimize gaps between this policy effort and those applicable to FedRAMP systems.

1.2 Context is King

Software and its context of use are inseparable for the purposes of determining the “critical” designation. The designation should not be based only on proximate technical features of the software. Consider OpenSSH, which accepts untrusted network traffic, handles authentication, and relies on cryptography. These features map to the EO guidance about “...level of privilege or access required to function, ... direct access to networking and computing resources, [and] performance of a function critical to trust.” OpenSSH implies some degree of context by its design: the use case of secure remote access. However, the criticality of OpenSSH must be considered in context. A hobbyist web server hosting cat pictures and a nuclear power plant have different “...potential for harm if compromised,” even if both use OpenSSH.

1.3 A Complex Definition for a Complex Concept

A static dictionary entry will not adequately capture the complexity of the term “critical software.” Similarly, we do not expect that a master list of critical software will be effective. Instead, we suggest that NIST develop a mechanism to designate software as critical or not. This mechanism is the definition. Designators need a transparent, reliable, repeatable, and explainable mechanism.

Decision trees are well suited to meet these design requirements. The Stakeholder-Specific Vulnerability Categorization (SSVC) was designed for prioritization decisions during vulnerability management. We suggest adapting the concepts in SSVC to account for the criteria in the EO. The methods developed in SSVC are appropriate because the EO is also making a prioritization decision. Table 1 maps guidance from the EO to SSVC features.

The decision tree in Figure 1 suggests how to designate critical software. We encourage NIST to treat software and its context together in a decision tree with appropriate features. The tree can and should be modified as necessary (see SSVC p. 36). Figure 1 uses the coarser Public Safety Impact and Mission Prevalence features (for example, Mission Prevalence could be measured by the Core Infrastructure Initiative Census Program II). These features are effective when the designator has only coarse visibility into mission and safety context.

Table 1: Mapping EO Criteria to SSVC Features
Figure 1: Suggested Decision Tree Definition of How to Designate “Critical” Software
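To make the mechanism concrete, a coarse two-feature tree of this kind can be sketched in code. This is a minimal illustration, not SSVC itself: the feature names Mission Prevalence and Public Safety Impact come from SSVC, but the specific decision values and outcomes below are assumptions chosen for demonstration.

```python
def designate(mission_prevalence: str, public_safety_impact: str) -> str:
    """Suggest a designation from two coarse context features.

    mission_prevalence: "minimal", "support", or "essential"
    public_safety_impact: "minimal" or "significant"

    The branch order and outcomes are illustrative assumptions,
    not an official SSVC or NIST decision policy.
    """
    if mission_prevalence == "essential":
        return "critical"
    if public_safety_impact == "significant":
        return "critical"
    if mission_prevalence == "support":
        # Supports an essential function but has minimal safety
        # impact: left for designator judgment in this sketch.
        return "review"
    return "non-critical"

print(designate("essential", "minimal"))  # critical
print(designate("minimal", "minimal"))    # non-critical
```

Because the mechanism is an explicit tree rather than a static list, a designator can show exactly which branch produced a designation, which supports the transparency and explainability requirements above.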

1.4 Who Designates?

In most cases, agencies have the expertise, knowledge, and resources to provide necessary context to the decision process. We suggest that agencies should have the authority and responsibility to use the definition to make the initial designation. For example, the “Categorize” step in the NIST Risk Management Framework outputs a security categorization of the system. NIST SP 800-60 provides guidance for mapping systems to security categories. This guidance should be updated to incorporate the “critical software” definition. An oversight function could monitor and review designations (both critical and non-critical). Such a function would have the necessary perspective to notice trends or otherwise make changes to initial designations.

As a general question for areas 2, 4, and 5: how will standards and guidelines apply to Free/Libre and Open Source Software (FLOSS)?

2. Secure Software Development Lifecycle Practices

These four supplier practices reduce the risks associated with software vulnerabilities and other security issues. We use “supplier” throughout to mean the entity providing a software component, system, or service to the government; a supplier may also be a developer, vendor, maintainer, manufacturer, integrator, or service provider. These practices can be observed externally, thus supporting “attestation of conformity.”

2.1 Coordinated Vulnerability Disclosure

Suppliers should practice coordinated vulnerability disclosure (CVD). Despite the application of security practices throughout the software development lifecycle (SDL), essentially all software is delivered with as-yet-unknown security vulnerabilities. Therefore, suppliers must be able to respond effectively when vulnerabilities are discovered. One form of CVD is a vulnerability disclosure program (VDP) as set out in §4.e.viii of the EO and required by CISA Binding Operational Directive 20-01. A functional CVD or VDP capability requires more than “...reporting and disclosure process[es];” it also includes triage, investigation, analysis, reproduction, fix development, and communication management.

2.2 Secure Updates

In many cases, the solution to a vulnerability in fielded software is an update (patch, hotfix, upgrade). Suppliers must be able to securely deliver updates to users. Assuring the authenticity and integrity of updates is critical. As demonstrated by incidents like SolarWinds and NotPetya, adversaries target software delivery and update mechanisms. Centralized update mechanisms also centralize risk. Suppliers and developers should proactively mitigate the risk associated with a single, centralized secure update mechanism. Suppliers should also enable users to control deployment of updates.
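A hedged sketch of the integrity half of secure updates: comparing a downloaded update against the digest published in a supplier's release manifest. The file contents and manifest here are invented; note that digest comparison alone provides integrity, while authenticity additionally requires the manifest itself to be signed by the supplier (key distribution is outside this sketch).

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Check a downloaded update against the SHA-256 digest from a
    separately authenticated release manifest. Integrity only:
    authenticity requires a supplier signature over the manifest."""
    actual = hashlib.sha256(payload).hexdigest()
    # Constant-time comparison avoids leaking digest prefixes.
    return hmac.compare_digest(actual, expected_sha256)

# Hypothetical manifest entry for an update artifact.
manifest_digest = hashlib.sha256(b"app-1.2.bin contents").hexdigest()

print(verify_update(b"app-1.2.bin contents", manifest_digest))  # True
print(verify_update(b"tampered contents", manifest_digest))     # False
```

A user-controlled deployment step would sit after this check, consistent with the recommendation that suppliers let users control when updates are applied.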

2.3 Supply Chain Transparency

Nearly all modern software systems depend on other software. Vulnerabilities in upstream dependencies are nearly impossible to identify and track without knowledge of the supply chain. Software composition analysis is a functional “after-the-fact” dependency detection method. A more efficient option is for suppliers to provide software bills of materials (SBOM). An SBOM is a list of software components and their dependencies. Government acquirers and users (including other suppliers) should require SBOMs from their suppliers.
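The value of an SBOM for tracking upstream vulnerabilities can be shown with a small sketch. The component names below are invented, and the dict representation is a stand-in for real SBOM formats such as SPDX or CycloneDX; the point is that a walk over declared dependencies reveals exposure that direct dependencies alone would miss.

```python
# Hypothetical SBOM: component -> direct dependencies.
sbom = {
    "web-app": ["http-lib", "json-lib"],
    "http-lib": ["tls-lib"],
    "json-lib": [],
    "tls-lib": [],
}

def transitive_deps(component: str, sbom: dict) -> set:
    """Collect every upstream dependency reachable from a component."""
    seen = set()
    stack = list(sbom.get(component, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(sbom.get(dep, []))
    return seen

# A vulnerability in tls-lib affects web-app even though web-app
# never depends on it directly.
print("tls-lib" in transitive_deps("web-app", sbom))  # True
```

Without the SBOM's dependency declarations, an acquirer would have to discover the tls-lib exposure after the fact via software composition analysis.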

2.4 End of Security Support

Old software typically accumulates known vulnerabilities that may be fixed in newer releases (See https://libyear.com/ and https://ericbouwers.github.io/papers/icse15.pdf). Security support may end without users realizing it. Suppliers should provide information about when software components and systems will no longer receive security updates. Either the end-of-support date or the amount of notice the supplier must give before support ends should be provided at the time of acquisition.
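The "libyear" idea referenced above measures how stale a dependency is as the time between the release of the version in use and the latest available release. A minimal sketch, with the dates below invented for illustration:

```python
from datetime import date

def libyears(used_release: date, latest_release: date) -> float:
    """Dependency freshness in 'libyears': years elapsed between the
    release date of the version in use and the latest release."""
    return max((latest_release - used_release).days, 0) / 365.25

# A hypothetical dependency pinned to a May 2018 release when a
# May 2021 release exists is about three libyears out of date.
print(round(libyears(date(2018, 5, 1), date(2021, 5, 1)), 1))
```

The same date arithmetic applies to end-of-support tracking: given an end-of-support date supplied at acquisition time, an agency can compute remaining support lifetime instead of discovering silently lapsed support.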

3. Federal Government Use of Critical Software

There is perhaps too much existing guidance for government administration and operation of software systems (NIST SPs 800-53, 800-61, 800-160, 800-161, 800-171; RMF; CMMC; CSF; FedRAMP; and FIPS, to name a few). It is unclear whether even strict compliance with all of this guidance is beneficial. The federal executive branch must review and consolidate existing guidance. Consider material from NIST SPs 800-53 and 800-171 (possibly CMMC level 4) as minimum requirements for the use of critical software.

Incidents will occur, so agencies should have incident response capability, either in-house or on contract (We suggest guidance from NIST SP 800-61 and the FIRST CSIRT Services Framework). Some of this capability involves preparation and basic system administration practices that facilitate incident response. Suppliers sometimes lock users out of basic administrative access to products. License and contract terms may enforce lock-out. Lock-out may inhibit incident response by creating dependency on the supplier to create file system images, decrypt diagnostic reports, access the product remotely, or to perform other incident response tasks. Contracts that allow lock-out should require rapid incident response support, even when many users are responding to incidents at the same time.

Agencies should perform vulnerability management. Asset management is a prerequisite for vulnerability management (See also 2.3). Agencies should be able to notice new vulnerabilities, prioritize responses, and quickly apply updates or other mitigations (For the 4 percent of vulnerabilities analyzed that had a public exploit, the median time to public exploit availability was 2 days).
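The dependency between asset management and vulnerability management can be made concrete with a sketch: matching a new advisory against an asset inventory. The inventory and advisory contents below are invented; a real implementation would key on standard identifiers such as CPE or purl rather than bare product names.

```python
# Hypothetical asset inventory and advisory for illustration.
inventory = [
    {"host": "web01", "product": "http-lib", "version": "2.1"},
    {"host": "web02", "product": "http-lib", "version": "3.0"},
    {"host": "db01", "product": "db-engine", "version": "9.4"},
]

advisory = {"product": "http-lib", "affected_versions": {"2.0", "2.1"}}

def affected_assets(inventory: list, advisory: dict) -> list:
    """List hosts running a product/version named in an advisory.
    Without the inventory, this question cannot be answered at all."""
    return [
        asset["host"]
        for asset in inventory
        if asset["product"] == advisory["product"]
        and asset["version"] in advisory["affected_versions"]
    ]

print(affected_assets(inventory, advisory))  # ['web01']
```

Prioritization (e.g., via SSVC, as in section 1.3) and rapid mitigation then operate on this affected-asset list.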

Agencies should carefully consider the benefits and risks of acquiring new software and enabling features. Operating less software and enabling fewer features reduces attack surface.

4. Testing Software Security

Grey-box fuzzing, which uses program instrumentation to compute code coverage and detect when new areas of the software are exercised, has been shown to be effective at both discovering security vulnerabilities and generating test corpora that exercise a broad range of software functionality. A variety of implementations have shown that grey-box fuzzing can be implemented at both the binary and source-code level with relatively low performance overhead.
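The core loop of grey-box fuzzing can be sketched briefly: mutate inputs from a corpus, run the instrumented target, and keep any input that reaches new coverage. This toy is hand-instrumented with a planted bug purely for illustration; real grey-box fuzzers (AFL-style) instrument at compile time or through a binary layer and use far richer mutation strategies. The small byte dictionary mimics the token dictionaries such fuzzers accept.

```python
import random

def target(data: bytes, coverage: set) -> None:
    """Toy target with manual edge instrumentation and a planted bug."""
    coverage.add("enter")
    if len(data) > 0 and data[0] == ord("F"):
        coverage.add("F")
        if len(data) > 1 and data[1] == ord("U"):
            coverage.add("FU")
            if len(data) > 2 and data[2] == ord("Z"):
                raise RuntimeError("planted bug reached")

DICTIONARY = [ord(c) for c in "FUZRAND"]  # AFL-style token dictionary

def fuzz(rounds: int = 20000, seed: int = 0):
    """Coverage-guided loop: keep mutants that exercise new edges."""
    rng = random.Random(seed)
    corpus = [b"AAAA"]
    global_coverage: set = set()
    for _ in range(rounds):
        data = bytearray(rng.choice(corpus))
        data[rng.randrange(len(data))] = rng.choice(DICTIONARY)
        local: set = set()
        try:
            target(bytes(data), local)
        except RuntimeError:
            return bytes(data)  # crashing input found
        if not local <= global_coverage:
            global_coverage |= local  # new edge: keep this input
            corpus.append(bytes(data))
    return None

crash = fuzz()
print(crash)
```

The coverage feedback is what distinguishes grey-box from black-box fuzzing: inputs that merely reach the first branch are retained and mutated further, so the fuzzer climbs through the nested conditions instead of guessing all three bytes at once.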

Secure coding standards for a variety of programming languages exist, as do tools that analyze source code for defects that may cause vulnerabilities. Because different tools and techniques are better at identifying different types of defects, a consolidator tool should be used to merge results, collapse duplicates, and reduce reevaluation of previously reported findings. Examples include Code DX, ThreadFix, and SCALe.

5. Software Integrity Chains and Provenance

Comprehensive knowledge of the composition of software systems (baseline SBOM as noted in 2.3) is necessary but insufficient to provide high assurance of software supply chains. Integrity and authentication (part of provenance) almost certainly require digital signatures and their accompanying infrastructure including strong identification of suppliers and other parties involved in the supply chains. See Deliver Uncompromised: Securing Critical Software Supply Chains.
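One way to see why provenance needs more than an inventory: a record that commits to every supply chain step in order makes tampering detectable. The sketch below folds each step into a running hash; the step contents are invented, and real provenance frameworks (e.g., in-toto or SLSA attestations) additionally sign each link with the keys and identity infrastructure mentioned above.

```python
import hashlib

def chain_digest(steps: list) -> str:
    """Fold each supply chain step into a running SHA-256 hash so the
    final digest commits to every artifact and actor, in order.
    Unsigned, so this shows integrity chaining only, not authenticity."""
    h = b"\x00" * 32  # fixed initial link
    for step in steps:
        h = hashlib.sha256(h + step.encode()).digest()
    return h.hex()

# Hypothetical provenance steps for a built artifact.
steps = [
    "source: repo@commit-abc123, author=alice",
    "build: compiler=gcc-11, builder=ci-worker-7",
    "package: artifact=app-1.2.tar.gz",
]
baseline = chain_digest(steps)

# Altering any earlier step changes the final digest.
tampered = list(steps)
tampered[0] = "source: repo@commit-evil, author=mallory"
print(chain_digest(tampered) == baseline)  # False
```

Digital signatures over each link are what turn this integrity chain into provenance: they bind the steps to strongly identified suppliers and other parties, as the comment above recommends.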
