
Like Nailing Jelly to the Wall: Difficulties in Defining "Zero-Day Exploit"

Allen Householder

During the Watergate hearings, Senator Howard Baker asked John Dean a now-famous question: "My primary thesis is still: What did the president know, and when did he know it?" If you understand why that question was important, you have some sense as to why I am very concerned that "zero-day exploit capability" appears as an operative phrase in the Department of Commerce Bureau of Industry and Security (BIS) proposed rules to implement the Wassenaar Arrangement 2013 Plenary Agreements regarding Intrusion and Surveillance Items.

Background: BIS, Wassenaar, and "Zero-Day Exploit Capability"

The United States Department of Commerce Bureau of Industry and Security recently proposed a set of rules to implement the agreements by the Wassenaar Arrangement (WA) at the Plenary meeting in December 2013 regarding tools and technologies surrounding "intrusion software."

One particular comment in that proposal is relevant to this blog post: "Note that there is a policy of presumptive denial for items that have or support rootkit or zero-day exploit capabilities." This policy is implemented in the following line proposed for inclusion in section 742.6(b)(5) [excerpted for clarity]:

Applications for exports, reexports and transfers of cybersecurity items [list of specifics removed] will be reviewed favorably if [a number of caveats for regional stability], except that there is a policy of presumptive denial for items that have or support rootkit or zero-day exploit capabilities.

Further, Supplement No. 2 to Part 748--Unique Application and Submission Requirements--notes the following:

(iii) If the cybersecurity item has not been previously classified or included in a license application, then:

[Other requirements removed for clarity]

(C) For items related to "intrusion software," describe how rootkit or zero-day exploit functionality is precluded from the item. Otherwise, for items that incorporate or otherwise support rootkit or zero-day exploit functionality, this must be explicitly stated in the application.

The answer to question 1 of the BIS FAQ on Intrusion and Surveillance Items reads in part:

Transferring or exporting exploit samples, exploit proof of concepts, or other forms of malware would not be included in the new control list entries and would not require a license under the proposed rule.

Later, the answer to question 15 includes this statement:

The only regulatory distinction involving zero-day exploits in the proposed rule regards the possibility that a delivery tool could either have (e.g., incorporate) or support (e.g., be 'specially designed' or modified to operate, deliver or communicate with) zero-day exploits. If the system, equipment component or software at issue has or supports zero-day or rootkit capabilities, then BIS could request the part of the software or source code that implements that capability. BIS does not anticipate receiving many, or any, export license applications for products having or supporting zero-day capabilities.

In writing this post, I attempted to draw a flowchart mapping the decision process that would discriminate between tools having zero-day exploit capabilities and tools that merely include exploit samples or exploit proofs of concept. I also tried to enumerate the possible combinations of the existence of, and vendor and public awareness of, vulnerabilities, exploits, and patches, and to map those combinations onto whether or not they qualified as "zero-day exploits" or "zero-day vulnerabilities." I failed in both cases.

The reasons I failed are twofold:

  1. There are many definitions of zero-day exploit available. These definitions are not merely diverse wordings that map onto the same concepts; they refer to distinct (albeit related) concepts. In other words, given the same state of affairs in the world, they yield different answers as to whether or not that state meets the definition.
  2. Common to all the definitions is a sense of history, summarized as "Who knew what, and when did they know it?" Note its resemblance to Senator Baker's query. The problem is that some information relevant to the definition only becomes available after certain decisions have been acted upon, and thus that information cannot have a causal relationship to the decision in the first place.
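
To give a sense of why the mapping attempt got out of hand, here is a rough sketch in Python that enumerates the knowledge-and-artifact states one would have to classify, before even accounting for the order in which they arise. The axis names are my own illustrative choices; they do not come from the proposed rule or from any of the definitions quoted below.

    from itertools import product

    # Simplified boolean axes describing the state of a single vulnerability.
    # These names are illustrative only.
    AXES = [
        "vulnerability_known_to_vendor",
        "vulnerability_known_to_public",
        "exploit_exists",
        "exploit_publicly_known",
        "patch_exists",
        "patch_available_to_users",
    ]

    def is_consistent(state):
        """Drop combinations that are logically impossible."""
        s = dict(zip(AXES, state))
        if s["exploit_publicly_known"] and not s["exploit_exists"]:
            return False
        if s["patch_available_to_users"] and not s["patch_exists"]:
            return False
        return True

    states = [s for s in product([False, True], repeat=len(AXES))
              if is_consistent(s)]
    print(len(states))  # 36 logically possible states remain

Even this toy version leaves 36 states to sort, and as the definitions below show, they do not all divide those states into "zero-day" and "not zero-day" the same way.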

I cover both points in more detail below following a brief introduction to why this topic is so relevant now.

You Keep Using That Word; I Don't Think It Means What You Think It Means

Many discussions that touch on vulnerability disclosure involve phrases like "zero-day vulnerability," "zero-day exploit," or simply "zero day" or "0day." However, I've noticed that there is a good deal of confusion as to the meaning of these terms. Security professionals have used these terms inconsistently, or at least in ways that leave it unclear which meaning they intend. Furthermore, inconsistent use of terms in media reports exacerbates confusion and concern among individuals, network defenders, and decision and policy makers. Finally, in the context of laws and regulations, inconsistent definitions of terminology can become the distinguishing factor as to whether or not one has committed a crime.

Normally I wouldn't write an entire blog post on the definition of terms since for most conversational purposes, loose definitions will suffice. However, when those loosely defined terms become the basis for decisions, policies, and regulations, it's important to get it right.

Like Nailing Jelly to the Wall

The BIS proposed rules specifically refer to zero-day exploit capability. Setting aside what it means for something to have X capability, I'd like to demonstrate the difficulty in defining this particular X: What's a zero-day exploit? (For if we can't define X, then X capability must also remain undefined.) I went looking for definitions, and found a few:

1. "A zero-day exploit is one that takes advantage of a security vulnerability on the same day that the vulnerability becomes generally known. There are zero days between the time the vulnerability is discovered and the first attack." --SearchSecurity

The first definition is fairly specific, even if it doesn't really explain what "generally known" means. (Known to whom? What subset of the population must know about it for it to count as "generally known"?) But the rest of it is pretty clear: if the exploit is used on the same day that the vulnerability became "generally known," then it's a zero-day exploit.

Oh, but wait, does same day mean the same calendar day? In what time zone? Like the song says, "It's Five O'Clock Somewhere." So if the vulnerability is reported at 11:59 p.m. in your time zone and an exploit is reported five minutes later, is it still a zero-day exploit? Maybe?

What if we replace same day with within 24 hours? At least then we can say for certain that if the vulnerability is made public at 8:00 a.m. UTC on day 0 and the exploit is reported at 8:01 a.m. UTC on day 1, it's not a zero-day exploit. I don't know about you, but that strikes me as arbitrary and unsatisfying.

By the way, nothing in this definition talks about patch availability. We'll come back to that in a moment.

2. "A zero day exploit attack occurs on the same day a weakness is discovered in software. At that point, it's exploited before a fix becomes available from its creator." --Kaspersky

There's that same day again. I'll grant that weakness here is equivalent to vulnerability in definition 1. But this definition goes beyond just talking about a vulnerability and its exploit; it mentions a fix that becomes available.

To state it explicitly: if (a) a vendor announces a vulnerability, (b) a patch is provided along with the announcement, and (c) the vulnerability is exploited on the same day (whatever you decide that means, just be consistent), then definition 1 says it's a zero-day exploit while definition 2 says it isn't.
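
To make the conflict concrete, here is a minimal sketch in Python that encodes definitions 1 and 2 as predicates over the scenario above. The event names and dates are invented for illustration and are not part of either source's wording.

    from datetime import date

    # Hypothetical timeline: the vendor announces the vulnerability, ships a
    # patch with the announcement, and exploitation is observed the same day.
    events = {
        "vulnerability_generally_known": date(2015, 7, 20),
        "patch_available": date(2015, 7, 20),
        "first_exploitation": date(2015, 7, 20),
    }

    def zero_day_per_definition_1(e):
        # Definition 1: exploited on the same day the vulnerability becomes
        # generally known; patch availability is never mentioned.
        return e["first_exploitation"] == e["vulnerability_generally_known"]

    def zero_day_per_definition_2(e):
        # Definition 2: exploited before a fix becomes available.
        return e["first_exploitation"] < e["patch_available"]

    print(zero_day_per_definition_1(events))  # True
    print(zero_day_per_definition_2(events))  # False

The same facts yield opposite answers depending on which definition you pick.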

3. "An attack on a software flaw that occurs before the software's developers have had time to develop a patch for the flaw is often known as a zero-day exploit. The term "zero-day" denotes that developers have had zero days to fix the vulnerability. It can also refer to attacks that occur on the same day (day zero) a vulnerability is disclosed. In fact, some zero-day exploits are the first indication that the associated vulnerability exists at all." --Tom's Guide

There are two distinct definitions here: one is in the first sentence, and one is in the third. The third sentence equates to definition 1 above, so let's focus on the one in the first sentence.

Here we find that the definition hinges on the existence of a patch. A strict interpretation of this definition would permit someone to apply the zero-day exploit label even if the vulnerability is known to the vendor and the public long before the first attack. The vulnerability may have been known to the vendor for months, and a patch is in development but does not yet exist. Thus definition 3 directly conflicts with both definitions 1 and 2 above. Definition 1 says nothing of patches. Definition 2 talks about patch availability, not existence.

4. "Zero-day attacks...software or hardware vulnerabilities that have been exploited by an attacker where there is no prior knowledge of the flaw in the general information security community, and, therefore, no vendor fix or software patch available for it." --FireEye

Granted, this definition is for a zero-day attack, but since it mentions exploitation, I think we are justified in including it here. FireEye adds hardware to our growing list of definitions. Further, they discriminate based on the state of knowledge of the general information security community, with the implication that if that community is unaware of the vulnerability, there must not be a patch available. From context, this general information security community appears to be larger than the affected vendor(s) yet smaller than the public. So while it shares some degree of overlap with the other definitions discussed above, it remains distinct in its referents.

"But," you say, "these are informal definitions that aren't meant to be interpreted as strictly as you're doing so here." Criticism acknowledged. Using colloquial definitions in a technically focused context may be inappropriate when there are important yet subtle distinctions at play. So let's review the academic literature.

5. "A zero-day attack is a cyber attack exploiting a vulnerability that has not been disclosed publicly. There is almost no defense against a zero-day attack: while the vulnerability remains unknown, the software affected cannot be patched and anti-virus products cannot detect the attack through signature-based scanning." --Leyla Bilge and Tudor Dumitras, Before we knew it: an empirical study of zero-day attacks in the real world

Again, we make the bridge from attack to exploit. Interestingly, this definition treats not disclosed publicly as equivalent to unknown. Yet we know that vendors are continuously made aware of vulnerabilities in their products that the public does not know about: coordinated disclosures happen all the time (and we here at CERT/CC are often involved in facilitating them).

In this case, cannot be patched is not an assertion about the creation of a patch; rather it refers to the application of that patch to deployed vulnerable systems. Also, that point is presented as an implication of the definition rather than a part of the definition.

Interpreting definition 5 strictly, neither of the scenarios presented under definitions 2 or 3 above would qualify as zero-day attacks. Definition 4 differs from definition 5 in that it refers to the general information security community while definition 5 refers to public disclosure.

6. "A zero-day exploit is a new attack that an organization is not prepared for and can't stop. But there are conflicting definitions of zero-day, and different understandings regarding dates and times when an exploit becomes and/or ceases to be a zero-day exploit. The most practical definition of a zero-day exploit: An exploit that has no corresponding patch to counteract it. Technically, if the exploit code exists before the vulnerability is made public, it's a zero-day exploit -- regardless of how long the software vendor may have been aware of the vulnerability." --Brian T. Contos, Enemy at the water cooler: True stories of insider threats and enterprise security management countermeasures

Here we have a definition that at least acknowledges that other definitions exist, then hews fairly closely to definition 3 above.

7. "Zero-day exploit: An attack that exploits a zero-day vulnerability." --David A. Mundie and David M. McIntire, The MAL: A Malware Analysis Lexicon

Hmm. Is this definition talking about different things than those presented in definitions 1-4? I can't tell. I suppose we'll have to define zero-day vulnerability to figure that out. Conveniently, the MAL defines it for us:

8. "Zero-day vulnerability: A vulnerability that has not been disclosed to the general public and so can be exploited before patches are available."

Exploited prior to public disclosure. Easy enough. Everybody can agree to that, right? Wrong. Keep reading.

9. "A zero-day vulnerability is one that is unpublished. By definition, all vulnerabilities are zero-day before they are disclosed to the world, but practitioners in the art commonly use the term to refer to unpublished vulnerabilities that are actively exploited in the wild. We further distinguish zero-day vulnerabilities from published vulnerabilities as those for which no patch, upgrade, or solution is yet available from the responsible vendor, although some fail to make this distinction. " --Elias Levy, Approaching zero

Three different definitions appear here: (a) unpublished, (b) unpublished and exploited, (c) no patch available (regardless of exploitation status). Ugh. One more try?

10. "For the purposes of this paper, we formally define a 0Day vulnerability as any vulnerability, in deployed software, that has been discovered by at least one person but has not yet been publicly announced or patched." --Miles A. McQueen and colleagues, Empirical estimates and observations of 0day vulnerabilities

Given this definition we can describe the number of people who know about the vulnerability as greater than or equal to 1 but (significantly) less than the public. Also, patch status matters.

Lucky for us, this paper actually prefixes the above definition with the following caveat:

There is no generally accepted formal definition for "0Day (also known as zero-day) vulnerability." The term has been used to refer to flaws in software that no one knows about except the attacker. Sometimes the term is used to mean a vulnerability for which no patch is yet available.

I'm going to take the hint here and stop trying to pin this down further. You can probably see why I failed in my attempt to map this out in a concise flowchart.

Who Knew What, When?

The thing that is most clear to me from the above is that all definitions of zero-day exploit and zero-day vulnerability hinge on the state of knowledge of some subset of humanity at some point in time. Once discovered, there is always at least one person who is aware of the existence of the vulnerability. Beyond that the definitions largely vary based on who knows what, and when. This is the connection to Baker's question.

So far, we've established that all the definitions of zero-day exploit and zero-day vulnerability are time dependent. Moreover, they all incorporate the notion of surprise: in order for a vulnerability or its exploit to meet any of the definitions above, its existence must be surprising to someone. Furthermore, the definitions don't simply state that someone has to be surprised; they indicate specific subsets of humans that must experience that surprise: vendors, the security community, or the public, depending on which definition you prefer.

Now, think about that for a moment: What observable property intrinsic to a vulnerability could you point to that tells you this? Nothing. Why? Because surprise arises inside human skulls, not in the software, nor in the vulnerability report, nor in the exploit code, nor in any of the tools that support the discovery or development of these things. The adjective phrase zero-day is an assertion about human ignorance at a particular moment in time. It isn't an assertion about an intrinsic attribute of software, a vulnerability in that software, or an exploit for that vulnerability.

Complicating things further, not every vulnerability has exactly one vendor responsible for providing a patch. In the CERT/CC, our vulnerability disclosure coordination efforts often require us to work with multiple vendors as we try to synchronize the publication of vulnerability information with the release of patches. In situations where a vulnerability affects multiple vendors' products, public disclosure of one product's vulnerability can lead directly to the users of other products being put at risk because they are exploitable without recourse until a patch for their software is provided.

Even this scenario is too simple though. Some vulnerabilities affect multiple products from multiple vendors. This is a common occurrence for vulnerabilities that arise above the code level (e.g., protocol vulnerabilities) or when code is shared across products (e.g., third party libraries, example code that everybody copied and pasted, even a single developer who recreated the same error in multiple projects). So now we have a number of vendors and potentially distinct user groups that could be surprised by the existence of a vulnerability or its exploit. Should a vulnerability that affects 100 vendors' products be considered a zero-day if 99 vendors announce patches while one doesn't? What if 50 vendors patch and 50 don't? What if one vendor provides patches but 99 don't? What if that one vendor accounts for 90% of the users? 80%? 50%? 20%? 2%?

Most extant definitions of zero-day exploits and zero-day vulnerabilities completely fail to acknowledge this sort of multiparty process, and assume (naively) that a vulnerability report is between one vendor and one finder.

Conclusion

If you discover a vulnerability in a product and you want that vulnerability to get fixed, there's really no way around telling the vendor about it. At the point you make that decision though, you don't (and can't) actually know whether this particular vulnerability is new to them or not. If you find a vulnerability and for whatever reason you don't want it to get fixed, you still don't (and can't) know whether it is unknown to the vendor. You might have some degree of belief about that proposition, but the available facts are limited.

Likewise, the decisions you make to defend your network may be different given your knowledge (or lack thereof) of vulnerabilities, their exploits, and patches. If you have been exploited, you have work to do regardless of the availability of a patch. Whether the vulnerability or exploit deserved the zero-day prefix does nothing to help you clean up (although it might help you save face when you get called on the carpet to explain the attack). Similarly, the availability of a patch gives you a clear course of action regardless of whether you have been exploited or not.

However, from the perspective of someone creating a tool to perform penetration testing, vulnerability scanning, or vulnerability discovery, there is nothing intrinsic to what the tool does or the knowledge that it embodies that lets you distinguish between its zero-day exploit capability and its exploit capability. As I've shown, all the relevant definitions that could be brought to bear depend on extrinsic factors involving the state of knowledge of others.

In technical contexts, we eschew the use of zero-day anything not because it is colloquial but because it is imprecise. Imprecision leads to confusion in technical discussions, and in the current situation, laws and regulations count as technical discussions. Confusion increases costs by creating a drag on decision making. Confusion also leads to a chilling effect as would-be security researchers will avoid performing research that leads to ambiguous legal outcomes and risk of prosecution.

At best, the phrase zero-day exploit serves as an attention grabber since it implies that you should pay attention and take some sort of action in response. Using that phrase as a discriminating term as in "a policy of presumptive denial for items that have or support rootkit or zero-day exploit capabilities" puts individuals and businesses at risk of noncompliance due not to their malicious intent but rather to the incomprehensible wording of the regulation.

It is my conclusion that no definition of zero-day exploit is possible that refers only to concepts intrinsic to vulnerabilities, their exploits, and their patches. Thus any tool that supports or provides exploit capability must also be presumed to include zero-day exploit capability since the differentiating factors occur in the world, not in the tool.
