When Is a Vulnerability a Safety Issue?
As you may have read in a previous post, the CERT/CC has been actively researching vulnerabilities in connected vehicles. When we began our research, it became clear that in the realm of cyber-physical systems, safety is king. Regulators, manufacturers, and consumers all want (and expect!) the same thing: a safe vehicle to drive. But what does safety mean in the context of security? This is precisely the question that the National Highway Traffic Safety Administration (NHTSA) asked the public in its Federal Register notice.
For those who may be unaware, NHTSA is the agency responsible for reducing accidents and increasing the safety of motor vehicles in the United States. As part of this authority, NHTSA has the ability to establish rules about what may be considered a safety issue and to force manufacturers to recall a particular component if it is found to be deficient. In turn, manufacturers are responsible for reporting safety-critical issues to NHTSA within five business days or face an enforcement action (usually in the form of a large financial penalty).
But back to safety. What does the term actually mean? Our friends at the Oxford Dictionary define it as:
The condition of being protected from or unlikely to cause danger, risk, or injury
The word injury implies harm to a living thing. In the security world, we know the term risk very well.
Applied to motor vehicles, U.S. law tells us what safety means:
The United States Code for Motor Vehicle Safety (Title 49, Chapter 301) defines motor vehicle safety as "the performance of a motor vehicle or motor vehicle equipment in a way that protects the public against unreasonable risk of accidents occurring because of the design, construction, or performance of a motor vehicle, and against unreasonable risk of death or injury in an accident, and includes nonoperational safety of a motor vehicle."
In the context of a vehicle, then, safety means anything that protects the public (driver or non-driver) from an "unreasonable risk" of accidents occurring. Again, the use of "injury" implies harm to a human.
To NHTSA, if a vulnerability in the design, construction, or performance of a vehicle poses an unreasonable risk of accidents occurring, the affected component should be recalled to protect the public. But how do you assess "unreasonable risk"? Thankfully, the Federal Register notice tells us:
In general, for a defect to present an "unreasonable risk," there must be a likelihood that it will cause or be associated with a "non-negligible" number of crashes, injuries, or deaths in the future. See, e.g., Carburetors, 565 F.2d at 759.
OK, so the defect has to cause a non-negligible number of crashes. Maybe more than, say, five? What does non-negligible mean? And a software vulnerability might not cause any crashes at all (until it does). Unlike a physical component, which might predictably fail after n miles, a vulnerability may never be exploited. With an intelligent adversary, the calculus changes, and it becomes very hard to predict if and when a vulnerability will be exploited. In the absence of evidence, we have to look at the worst-case scenario and assume that every vulnerability will be exploited.
In other words, where a defect presents a "clearly" or "potentially dangerous" hazard, and where "at least some such hazards"--even an "exceedingly small" number--will occur in the future, that defect is necessarily safety related.
NHTSA has established (in this request for comment) that a defect that may cause accidents in the future, even if the chance is remote, is considered safety related and must therefore trigger a recall. If you remember the Jeep hack from last year, Fiat Chrysler recalled all affected models because of the discovered vulnerability; this requirement appears to be why. And the scope isn't limited to the vehicle itself:
With respect to new and emerging technologies, NHTSA considers automated vehicle technologies, systems, and equipment to be motor vehicle equipment, whether they are offered to the public as part of a new motor vehicle (as original equipment) or as an after-market replacement(s) of or improvement(s) to original equipment. NHTSA also considers software (including, but not necessarily limited to, the programs, instructions, code, and data used to operate computers and related devices), and after-market software updates, to be motor vehicle equipment within the meaning of the Safety Act. Software that enables devices not located in or on the motor vehicle to connect to the motor vehicle or its systems could, in some circumstances, also be considered motor vehicle equipment. Accordingly, a manufacturer of new and emerging vehicle technologies and equipment, whether it is the supplier of the equipment or the manufacturer of a motor vehicle on which the equipment is installed, has an obligation to notify NHTSA of any and all safety-related defects.
When we read this scope, it floored us (in a good way). In our comments to NHTSA, we asked for further clarification of this scope, because it seems to imply that not only are the vehicle's on-board software and code open to being recalled, but so are its smartphone app, Bluetooth OBD-II devices, after-market head unit, and maybe even a cloud service! As security practitioners, we use trust boundaries to identify where a vulnerability might lie in a system, so it was a good sign to us that the same thinking is being applied here. In our comments we also asked how the agency plans to enforce this wide-ranging scope.
The Problem with Assessing Vulnerability Severity
Here at the CERT/CC we look at numerous vulnerabilities each year and assess their impact using an established metric, the Common Vulnerability Scoring System (CVSS). CVSS looks at the confidentiality, integrity, and availability impacts of an individual vulnerability in an isolated context (meaning, not how it might affect the deployed network or system). Even though we do this hundreds of times a year, we still sometimes get it wrong. With limited knowledge of the technical underpinnings, it's hard to quickly assess the impact of a particular software vulnerability. The task becomes even more complicated when trying to understand the systemic effect of that vulnerability.
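To make the isolated-context point concrete, here is a minimal sketch of the CVSS v3.1 base-score arithmetic (scope-unchanged metrics only, with weights taken from the public specification). Note what it does *not* capture: nothing in the formula knows whether the vulnerable computer is steering a car.

```python
import math

# CVSS v3.1 base-score sketch (scope "Unchanged" only, for brevity).
# Metric weights are from the published CVSS v3.1 specification.
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}  # Attack Vector
AC = {"L": 0.77, "H": 0.44}                        # Attack Complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required
UI = {"N": 0.85, "R": 0.62}                        # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}             # C/I/A impact

def roundup(x: float) -> float:
    """Round up to one decimal place, per CVSS v3.1 Appendix A."""
    i = round(x * 100000)
    return i / 100000 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

# A network-reachable, low-complexity, no-privilege vulnerability with total
# C/I/A loss scores 9.8 (Critical) -- yet the number says nothing about
# whether exploitation could cause a crash on the highway.
print(base_score("N", "L", "N", "N", "H", "H", "H"))  # → 9.8
```

The score is a property of the vulnerability alone; the safety consequence depends entirely on the system it sits in, which is exactly the gap NHTSA's question exposes.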
NHTSA proposes using severity assessments to determine whether a vulnerability poses an unreasonable risk. Their criteria are [paraphrased]:
- time elapsed since vulnerability discovery
- attacker expertise
- system knowledge needed
- window of opportunity needed
- level of equipment needed
These factors are reasonable, but they still left us with some questions. (Astute readers familiar with aviation security may recognize echoes of ARINC 811 here.)
While 'time elapsed' and 'level of equipment' make sense, the other factors require some knowledge of the adversary before a determination can be made. Attacker expertise might be High (to use CVSS terminology), but who is the potential adversary that wants to target a fleet of vehicles? A nation-state? Do these factors change as more tools (or increased automation) become available? How is a vulnerability re-assessed if the system knowledge needed suddenly drops because telematics firmware has been leaked?
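The firmware-leak question can be made concrete with a toy model. The factor names below are NHTSA's, but the ordinal scales, the weights, and the multiplicative combination are purely our own invention for illustration; no such scoring scheme has been proposed by the agency.

```python
from dataclasses import dataclass, replace

# Illustrative only: the 1..3 ordinal scales (1 = easiest for the attacker)
# and the multiplicative combination are invented for this sketch.
@dataclass(frozen=True)
class Assessment:
    attacker_expertise: int      # 1 = novice .. 3 = nation-state level
    system_knowledge: int        # 1 = public docs .. 3 = proprietary firmware
    window_of_opportunity: int   # 1 = remote/anytime .. 3 = brief physical access
    equipment: int               # 1 = commodity laptop .. 3 = bespoke hardware

def exploit_difficulty(a: Assessment) -> int:
    # Naive combination: multiply the ordinal factors. The point is not the
    # formula but that any such score is a snapshot, not a prediction.
    return (a.attacker_expertise * a.system_knowledge
            * a.window_of_opportunity * a.equipment)

# A remotely reachable telematics bug that today demands expert skill and
# deep proprietary knowledge.
telematics_bug = Assessment(attacker_expertise=3, system_knowledge=3,
                            window_of_opportunity=1, equipment=1)
print(exploit_difficulty(telematics_bug))  # → 9

# If the telematics firmware leaks, 'system knowledge needed' collapses and
# the same unchanged vulnerability suddenly looks far easier to exploit.
after_leak = replace(telematics_bug, system_knowledge=1)
print(exploit_difficulty(after_leak))  # → 3
```

Whatever combination rule is chosen, a single external event (a leak, a published tool) can move the score dramatically without the vehicle changing at all, which is why re-assessment policy matters as much as the initial assessment.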
Fundamentally, we aren't confident that any of these factors has been shown to predict future exploitability, but they're not a bad start. How these factors are combined and how they work in practice should be carefully researched and discussed with the larger information security community.