CERT/CC Blog

CVSS and the Internet of Things

There has been a lot of press recently about security in Internet of Things (IoT) devices and other non-traditional computing environments. Many of the most talked about presentations at this year's Black Hat and DefCon events were about hacking IoT devices. At the CERT/CC, we coordinate information about and discover vulnerabilities in various devices, and the number of vulnerabilities keeps growing.

One thing that I've personally been researching is finding vulnerabilities in vehicles. In recent weeks, even non-technical friends and family have asked me about the Jeep vulnerability, the Mobile Devices C4, Rolljam, Tesla, and other recent car-related vulnerabilities. These attacks are novel not because of the technical details, but because of the attack vectors and impact, which differ dramatically from those in traditional IT resources.

For friends and family, understanding that attacks and vulnerabilities for IoT devices differ from those on traditional infrastructures is sufficient. For information security professionals, however, there is a need to be able to codify and measure these differences. I have been thinking about the Common Vulnerability Scoring System (CVSS), our traditional way of scoring vulnerabilities, and how well it applies to vulnerabilities in IoT devices.

In this blog post, I walk through scoring a few real and hypothetical vulnerabilities in both CVSS version 2 and CVSS version 3 to evaluate how useful the results are. Estimating the risk to a car or non-traditional computing device was not one of the design considerations for CVSS, and may never be. The goal of this blog post is to evaluate and stimulate discussion, since there are no better alternatives to CVSS.

Real-World Example

The CERT/CC recently released a Vulnerability Note on the Mobile Devices C4 device, and provided a CVSS 2.0 score for one of the vulnerabilities. (We have not formally moved to CVSS 3.0 yet.) The Mobile Devices C4 is an OBDII dongle, which means it plugs into the OBDII port that is mandated to be in every car sold in the U.S. These devices essentially act as bridges between an external network (in this case, a cellular network) and the car's internal network. Because it has a cellular data connection, this device is accessible from the Internet and, if not properly secured, could allow attackers access to the computers that run your car.

The Mobile Devices C4 does not authenticate its updates (CVE-2015-2908), allowing an attacker to install malicious firmware. Obviously, malicious software could include a network-accessible backdoor or another functionality the hardware supports. The CERT/CC vulnerability note puts the base CVSS score at 9.0 out of 10, which confirms our gut feeling that a remotely exploitable vulnerability in this device is severe; it is remotely exploitable, easy to exploit, and completely compromises both the device and potentially the car. For comparison, I also scored the vulnerability using CVSS 3.0, yielding a score of 9.0 with vectors CVSS:3.0/AV:N/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H.

Hypothetical Example

Let's try a hypothetical vulnerability on a similar OBDII dongle that uses WiFi instead of cellular. The intended use is for your cell phone to communicate with the dongle over WiFi, and the dongle can use the phone's Internet connection if necessary. The device relays commands it receives over WiFi to the internal networks in the vehicle and vice versa.

The actual device I have uses a decent WiFi pre-shared key for access. Each device has a unique key consisting of 14 characters from [a-z 0-9]. For our hypothetical vulnerability, though, let's assume that all the devices shared the same password, which is "password." This password is definitely a vulnerability, so let's try to score it.
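For a sense of scale, here is a quick back-of-the-envelope comparison of the real key scheme against the hypothetical shared password. The guessing rate below is an assumed figure, purely for illustration:

```python
# Keyspace of the real device's PSK scheme: 14 characters drawn
# from the 36-symbol alphabet [a-z0-9].
alphabet = 26 + 10
keyspace = alphabet ** 14
print(f"keyspace: {keyspace:.2e} candidates")   # ~6.14e+21

# At an assumed offline rate of one billion guesses per second,
# exhausting the keyspace would take on the order of 10^5 years.
seconds_per_year = 3600 * 24 * 365
years = keyspace / 1e9 / seconds_per_year
print(f"~{years:,.0f} years to exhaust")

# The hypothetical shared default password, by contrast, takes one guess.
```

A single shared default collapses that entire keyspace to one guess, which is why the hypothetical vulnerability rates Low on Access Complexity.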

First, let's calculate a CVSS 2.0 base score.

Access Vector: The dongle's WiFi network is not Internet accessible, so the attacker would need to be within WiFi range to connect. On the other hand, it's not really a local vulnerability, since it allows an attacker to create a network connection. The closest option is "Adjacent Network," which roughly means "on the same Layer 2 segment," but here we can use it to indicate physical proximity. It's misleading, though, because the radios in a car require physical proximity ranging from a few feet (key fobs) to thousands of miles (satellite radio). AV:A

Access Complexity: Presumably, someone who can't even crack the pre-shared key could just Google the default password, so we're going with Low. AC:L

Authentication: The vulnerability is that the authentication is so rudimentary as to be pointless, so we'll call it None. AU:N

Confidentiality Impact: Impacts are another place where we had trouble. This vulnerability allows access to any data that is accepted by the dongle, including broadcast traffic from the CAN bus. However, it doesn't allow access to data stored on the dongle (the firmware). Several members of the CERT/CC team debated whether scoring this as Complete required the ability to view the firmware, which is basically the operating system. We never fully reached a consensus, but we'll go with Partial. Also note that CVSS 2.0 requires scoring the affected device itself, while CVSS 3.0 adds the Scope metric (see below). C:P

Integrity Impact: We can spoof any CAN message we want, but again, we cannot affect the integrity of the firmware. I:P

Availability Impact: We can effectively DoS the dongle by flooding it with invalid packets, so this is Complete. A:C

This analysis gives us a numerical score of 7.3 and vectors of CVSS:2.0/AV:A/AC:L/AU:N/C:P/I:P/A:C.
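The arithmetic behind that 7.3 comes directly from the CVSS 2.0 base equations; here is a minimal Python sketch (constants from the v2 specification) that reproduces it:

```python
# CVSS 2.0 base-score equation, applied to AV:A/AC:L/AU:N/C:P/I:P/A:C.
AV = {"L": 0.395, "A": 0.646, "N": 1.0}       # Access Vector
AC = {"H": 0.35, "M": 0.61, "L": 0.71}        # Access Complexity
AU = {"M": 0.45, "S": 0.56, "N": 0.704}       # Authentication
CIA = {"N": 0.0, "P": 0.275, "C": 0.660}      # C/I/A impact values

def cvss2_base(av, ac, au, c, i, a):
    impact = 10.41 * (1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]))
    exploitability = 20 * AV[av] * AC[ac] * AU[au]
    f_impact = 0.0 if impact == 0 else 1.176
    return round((0.6 * impact + 0.4 * exploitability - 1.5) * f_impact, 1)

print(cvss2_base("A", "L", "N", "P", "P", "C"))  # → 7.3
```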

Now let's do it with CVSS 3.0.

Attack Vector: Renamed from Access Vector, this metric gains an additional option in version 3.0, Physical, but it's intended to indicate the need to actually touch the device, not to account for mere proximity. We'll have to stick with Adjacent. AV:A

Attack Complexity: Renamed from Access Complexity, this vector now reduces the possible values to High or Low. Still Low. AC:L

Privileges Required: Basically, this vector is the same as Authentication. Still None. PR:N

User Interaction: This vector differentiates attacks, like cross-site scripting and phishing emails, from traditional network attacks. No User Interaction is required, so None. UI:N

Scope: We debated this new vector internally while CVSS 3.0 was being developed. It means "Does the scope of control the attacker gains remain restricted to the device or app the vulnerability is present in?" In this case, no. While the vulnerability is in the OBDII dongle, the scope of control is the entire car. It's clearly Changed. S:C

Confidentiality, Integrity, and Availability: These vectors remain the same, although the values have been renamed from {None, Partial, Complete} to {None, Low, High}.

When scored under CVSS 3.0, our vulnerability gets a whopping 8.8! The vectors are CVSS:3.0/AV:A/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:H.
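The v3.0 base equations can be sketched the same way (constants from the v3.0 specification); this reproduces both the hypothetical dongle's score and the 9.0 scored for the Mobile Devices C4 earlier:

```python
import math

AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}
AC = {"L": 0.77, "H": 0.44}
PR = {"U": {"N": 0.85, "L": 0.62, "H": 0.27},   # Scope Unchanged
      "C": {"N": 0.85, "L": 0.68, "H": 0.5}}    # Scope Changed
UI = {"N": 0.85, "R": 0.62}
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}

def roundup(x):                        # CVSS 3.0 rounds *up* to 1 decimal
    return math.ceil(x * 10) / 10

def cvss3_base(av, ac, pr, ui, s, c, i, a):
    iscb = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = (6.42 * iscb if s == "U"
              else 7.52 * (iscb - 0.029) - 3.25 * (iscb - 0.02) ** 15)
    expl = 8.22 * AV[av] * AC[ac] * PR[s][pr] * UI[ui]
    if impact <= 0:
        return 0.0
    total = impact + expl if s == "U" else 1.08 * (impact + expl)
    return roundup(min(total, 10))

# Hypothetical WiFi dongle: AV:A/AC:L/PR:N/UI:N/S:C/C:L/I:L/A:H
print(cvss3_base("A", "L", "N", "N", "C", "L", "L", "H"))  # → 8.8
# Mobile Devices C4: AV:N/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H
print(cvss3_base("N", "L", "L", "R", "C", "H", "H", "H"))  # → 9.0
```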

This score would lead you to believe that this vulnerability is very important. For comparison, Heartbleed had a 5.0 (5.8 in CVSS 3.0) but Shellshock scored a perfect 10.

Is this vulnerability really that big a deal? Well, if you have one in your car and someone wants to target your car and can get within a few hundred yards while it's running, then yes, it could be a big deal. The attacker could steal the car, steal the contents of the car, steal data stored in it (i.e., contacts or GPS history), eavesdrop on you, or even cause you to crash and possibly die. Of course, even if you don't have one of these devices, there are plenty of physical ways to rob or kill you.

Conclusions

This analysis leads us to two conclusions:

First, you really shouldn't use just numeric scores from CVSS to compare vulnerabilities without context. You should always examine the individual metrics to better understand the possible impact. You should also include the Temporal and Environmental metrics, which we've omitted here for brevity. In particular, CVSS 2.0 Environmental metrics include "Collateral Damage," which can reflect the possibility of harm to humans. However, when we did a "back of the envelope" score that included Environmental, we found that Collateral Damage was more than offset by Target Distribution.

Second, CVSS is likely to be much less relevant for IoT or cyber-physical systems (or whatever you want to call them). I see several problems with using CVSS in this case, and you may see others. The following is based on CVSS 3.0:

  1. Attack Vector. This vector isn't a good fit once you're talking about physical proximity. There is a huge difference between being able to attack something from anywhere on the Internet and having to get within kilometers or even meters of it. On the other hand, unlike "Adjacent Network," there's no real barrier to someone who is in physical proximity. Your car is intended to be out "in the wild," not locked in a server cage. There's also the question of speed. Our attack probably only works while the car is on, and if it's on, it's probably moving. How long will the attacker be in range?
  2. Attack Complexity. Logging on to a WiFi network with the password "password" is not complex. Reverse engineering a car's internal systems is very, very complex. There are hundreds of electronic control units (ECUs) running highly customized code and proprietary protocols. You think it's a pain to get an x86 exploit to run on a 64 bit processor? Try getting a Honda exploit to run on a Ford. Canned exploits will still be a thing, but the complexity goes up substantially to do anything useful.
  3. Scope. This vector is a great addition to CVSS 3.0. This attack is much worse because it can affect the car itself (and the occupants) rather than just the dongle. However, it doesn't seem weighted heavily enough in this case: changing Scope to Unchanged would only drop our hypothetical vulnerability's CVSS 3.0 score from 8.8 to 7.6.
  4. Confidentiality, Integrity, and Availability. This vector is probably the most problematic. Even if we expand it using the Parkerian Hexad--CIA plus Control (or Possession), Authenticity, and Utility--it doesn't account for the most serious potential impact, which is that people could be injured or killed. Call it Safety. As a metric, Safety is harder to predict and more variable than other impact metrics. It's hard to compare to the other metrics because it's difficult to rate the importance of even a single human life against, say, the confidentiality of a corporation's emails. It varies more than other metrics if we include Temporal and Environmental CVSS metrics. A 2016 car with the latest safety features, crashing at 30mph, is generally less of a risk than a 1997 car going 70.
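To make point 4 concrete, here is one purely hypothetical way a Safety dimension could be folded into the v3.0 impact sub-score. The SAFETY weights are invented for this sketch; they are not part of any CVSS specification:

```python
# Hypothetical: treat Safety as a fourth C/I/A-style impact dimension.
CIA = {"N": 0.0, "L": 0.22, "H": 0.56}     # real v3.0 impact values
SAFETY = {"N": 0.0, "L": 0.3, "H": 0.8}    # invented weights, for illustration

def impact_subscore(c, i, a, s="N"):
    return 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a]) * (1 - SAFETY[s])

print(round(impact_subscore("L", "L", "H"), 3))       # 0.732: C:L/I:L/A:H today
print(round(impact_subscore("L", "L", "H", "H"), 3))  # 0.946: same, plus Safety:High
```

Even this crude extension pushes the impact sub-score close to its ceiling when lives are at stake, which matches the intuition that safety should dominate the other impacts.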

With all of these concerns, do we think a CVSS score is useful for this vulnerability? With the attack complexity and requirement for physical proximity, this attack would be difficult to pull off. Attackers could more easily achieve similar goals with physical attacks. Yet, the worst potential risk is multiple, even numerous human deaths. That gives it a severity that we don't see in most computer security vulnerabilities.

Some people will say, "But we've never seen a case of someone exploiting a vulnerability like this in the wild to kill someone." So let's look at one more example, from my co-worker Allen Householder's excellent presentation, Systemic Vulnerabilities: An Allegorical Tale of Steampunk Vulnerability to Aero-Physical Threats. In it, he calculated the base CVSS 2.0 score of a specific Denial of Service vulnerability: the vulnerability of a building to an airplane being flown into it.

The base score was only a 6.5 (see the slides for the details). Prior to 9/11, we had never seen a case of someone exploiting this DoS vulnerability in the wild to kill someone. Even when he added Temporal and Environmental scores after 2001, when a confirmed case of it being exploited in the wild occurred, it only scored 7.8. Thousands of human deaths may trigger a war and a geopolitical shift, but they don't change CVSS scores.

My point is not that CVSS is not useful, but that it was not intended to measure safety. There are, however, other instruments already used in the safety engineering field, and even in IT security, that may be useful alternatives.

As a general purpose tool, the FIPS 199 "potential impact" levels may be useful. They include the possibility of human harm or loss of life when rating the impact of loss of an information system. They are very broad and organization-specific, however, and intend to judge entire information systems rather than specific vulnerabilities.

For automobiles, there is the ASIL standard (ISO-26262), which attempts to measure the risks of faults in cars. A more generalized version is the Safety Integrity Level (SIL - IEC-61508), which can be applied to vehicles, power plants, and many other things. These, however, measure safety in a vacuum and generally assume failures or natural disasters rather than intentional adversarial action.

There are a number of other standards, but most are specific to a given industry, as the ASIL standard is. However, I am unaware of a current method for measuring the severity of vulnerabilities in IoT devices that manages both

  • the technical detail necessary to prioritize remediation or mitigation
  • the additional impacts and attack vectors that exist when vulnerabilities affect the real world

For now, we at the CERT/CC will continue to include CVSS scores in our Vulnerability Notes for IoT devices. We will publish another blog post describing our methodology for scoring them once CVSS 3.0 has been more widely adopted and debated in an IoT environment.

We are also conducting research on better ways to measure cyber-physical vulnerabilities. It might be possible to expand CVSS to cover Safety better, or it might be necessary to create a new metric. Safety is more a measure of downstream impact of the vulnerability than the impact on the vulnerable device, so it may not fit well in CVSS. If you have any suggestions or favorite tools, please reach out to us or @certcc on Twitter.
