CVD Series: Principles of Coordinated Vulnerability Disclosure (Part 2 of 9)
This is the second post in a series about Coordinated Vulnerability Disclosure (CVD).
The material in this series represents a collective effort within the CERT/CC Vulnerability Analysis team. As such, it's difficult even for us to pin down who wrote which parts. However, to give credit where it's due, we'd like to acknowledge the content contributed by the following individuals over the past few years (listed alphabetically): Jared Allar, Allen Householder, Chris King, Joel Land, Todd Lewellen, Art Manion, Michael Orlando, and Garret Wassermann.
Placing CVD in Context
Coordinated Vulnerability Response (CVR) is the process of securing systems that starts with developing and releasing more secure code (e.g., as part of a Secure Development Lifecycle) and ends with patching systems (e.g., Vulnerability Management and general system administration tasks). Coordinated Vulnerability Disclosure (CVD) is the process of gathering vulnerability information from security researchers, coordinating that information among relevant stakeholders, and disclosing the existence of software vulnerabilities and their mitigations. CVD is an important subtask of any successful CVR process.
A successful CVD program feeds vulnerability information back into the SDL and CVR process. This information can result in more secure development processes, helping to prevent the introduction of vulnerabilities in the first place.
Yet the reality of today's software development is that much software is based on legacy code that was not originally produced within such a secure development process. A 2006 study of OpenBSD attempted to determine how much legacy code affects the security of modern software and how large code base changes might introduce vulnerabilities [OZMENT2006]. The positive news is that foundational vulnerabilities - ones that existed in the very first release and persisted through to the most recent version of the software - decay over time. We can find them, fix them, and make the code base stronger overall. The negative news is that as the "low-hanging fruit" foundational vulnerabilities are fixed, the remaining ones tend to be more subtle or complex, making them increasingly difficult to discover.
Furthermore, continual code changes from ongoing development can introduce new vulnerabilities, making it unlikely that the security process will ever be "finished". Even with modern architecture development and secure coding practices, software bugs (and in particular security vulnerabilities) will be introduced at some point as new features are added or code is refactored. This can happen for many reasons. A recent article highlighted the difficulty of getting teams of people to work together, with bad software architecture as the consequence [MATSUDAIRA2016]. While the authors were primarily concerned with maintainability and performance, bugs (and particularly security vulnerabilities) are an important side effect of poor architecture and teamwork processes. Another possibility is that, even with good internal processes and teamwork, no software model or specification can correctly account for all the environments in which the software may operate [WING1998]. If we cannot predict the environment, we cannot predict all the ways that things may go wrong. In fact, research suggests that it may be impossible to model or predict the number of vulnerabilities that may be found through tools like fuzzing - and, by extension, the number of vulnerabilities that exist in a product [BOBUKH2014]. The best advice seems to be to assume that vulnerabilities will continue to be found indefinitely and to work to ensure that any remaining vulnerabilities cause minimal harm to users and systems.
A successful CVD program helps encourage the search for and reporting of vulnerabilities while minimizing harm to users. Developers supporting a successful CVD program can expect to see the overall security of their code base improve over time as vulnerabilities are found and removed. CVD also enables the crowdsourcing of research toward discovering remaining vulnerabilities in a code base.
Principles of Successful CVD Programs
Over the years, the CERT/CC has identified a number of principles that guide our vulnerability coordination efforts and that appear to be necessary for a successful CVD program. These principles include:
- Reduce Harm
- Presume Benevolence
- Avoid Surprise
- Incentivize Good Behavior
- Improve Processes
We cover each of these in more detail below.
Reduce Harm
Harm reduction is a term borrowed from the public health community. In that context, it describes efforts that focus on reducing the harm caused by drug use and unsafe health practices rather than on eradicating the problem itself. For example, one of the tenets of harm reduction is that there will never be a drug-free society, so preparations must be made to reduce the harm caused by the drugs that exist.
This tenet also applies to software vulnerabilities: research has shown that all software is vulnerable and will continue to be vulnerable, especially as code complexity continues to increase [ALHAZMI2006]. Software vulnerabilities will never go away. What can change are the practices organizations use to reduce the harm those vulnerabilities cause. Because vulnerabilities will continue to appear and persist despite our best efforts, CVD works best when it focuses on reducing the harm they can cause.
Some approaches to reducing the harm caused by vulnerable software and systems include:
- Publishing vulnerability information. Providing better, more timely, more targeted, automated dissemination of vulnerability information so defenders can make informed decisions and take action sooner.
- Balancing system defenders' ability to take action against the risk of increasing attackers' advantage.
- Reducing days of risk. [ARORA2006]
- Encouraging the widespread use of exploit mitigation techniques on all platforms.
- Shortening the time between vulnerability disclosure and patch deployment.
- Reducing time-to-patch by automating patch distribution using secure update mechanisms that rely on cryptographically signed updates or other technologies (a brief signature-verification sketch follows this list).
- Releasing high-quality patches that have been thoroughly tested to increase the defenders' trust. Increasing defenders' trust that patches won't break things or have undesirable side effects reduces lag in patch deployment by reducing the defenders' testing burden.
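As an illustration of the signed-update approach mentioned in the list above, here is a minimal sketch of how a client might verify a vendor's signature before installing an update. It uses Ed25519 signatures from the third-party Python cryptography package; the key handling, function names, and update format shown here are illustrative assumptions, not a prescribed mechanism.

```python
# Minimal sketch: verify a cryptographically signed update before applying it.
# Requires the third-party "cryptography" package (pip install cryptography).
# Names and the update format are illustrative only.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def verify_update(update_bytes, signature, vendor_public_key):
    """Return True only if the update bytes were signed with the vendor's private key."""
    try:
        vendor_public_key.verify(signature, update_bytes)
        return True
    except InvalidSignature:
        return False

if __name__ == "__main__":
    # In practice the vendor signs at release time and ships only the public key
    # with the client; a key pair is generated here just to make the sketch runnable.
    vendor_private_key = Ed25519PrivateKey.generate()
    vendor_public_key = vendor_private_key.public_key()

    update = b"contents of the patched binary"
    signature = vendor_private_key.sign(update)

    print(verify_update(update, signature, vendor_public_key))                # True
    print(verify_update(update + b"tampering", signature, vendor_public_key)) # False
```

A real update mechanism would also pin or rotate the vendor's public key, guard against downgrade attacks, and verify metadata such as version numbers, but the core idea of rejecting any payload whose signature does not verify stays the same.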
Presume Benevolence
According to [BEAUCHAMP2013], the principle of beneficence "refers to a normative statement of a moral obligation to act for the others' benefit, helping them to further their important and legitimate interests, often by preventing or removing possible harms". Benevolence refers to the morally valuable character trait, or virtue, of being inclined to act to benefit others. In short, benevolent people are inclined to help others and to follow the principle of beneficence. This principle applies to software security research as much as to any other field.
The list below describes some common researcher motivations, including benevolence (adapted from a conference talk by Joshua Corman of iamthecavalry):
- Protect - wants to make the world a safer place (benevolence)
- Puzzle - curiosity, enjoys a challenge; typically tinkerers and hobbyists
- Prestige/Pride - seeking recognition, making a name, and/or building a career
- Profit/Professional - seeking monetary reward and/or making a living from it
- Politics/Protest - pro/anti-* for ideological or principled reasons
In terms of a CVD program, we have found that it is usually best to assume that any individual who has taken the time and effort to reach out to developers or a coordinator to report an issue is likely benevolent, has a strong Protect motivation, and sincerely wishes to reduce the harm of the vulnerability. Each researcher may also have secondary personal goals (some are listed above) and may even be difficult to work with at times, but prematurely attributing any of these other motivations to a particular researcher can negatively color your language and discussions with them.
This isn't to say you should maintain your belief that a researcher is acting in good faith when presented with evidence to the contrary. Rather, keep in mind that participants are working toward a common goal: reducing the harm caused by deployed insecure systems.
Avoid Surprise
"People are part of the system. The design should match the user's experience, expectations, and mental models." [SALTZER2009]
Saltzer's observation has two applications here. First, at the individual product level, software needs to match users' expectations as much as possible; when software allows a user to easily make a mistake with security consequences, the software's behavior is "surprising" from the user's point of view. We generally consider this a user interface security flaw that must be remedied. Participating in the CVD process means learning about these issues before they cause significant harm.
Second, at the global coordination level, if we expect cooperation among all parties and stakeholders, we should match their expectation of being "in the loop" and minimize surprise. Disclosing a vulnerability without coordinating first can result in panic and an aversion to future cooperation; CVD is the best way to ensure continued cooperation and that future vulnerabilities will also be addressed and remedied.
A classic example of a coordination failure is the disclosure of the Heartbleed vulnerability. Two organizations, Codenomicon and Google, discovered the vulnerability at about the same time. When the vulnerability was reported to the OpenSSL team a second time, the team assumed a possible leak and quickly disclosed the vulnerability publicly. A more coordinated response might have allowed further remediation to be available at the time of disclosure.
Incentivize Good Behavior
Not everyone shares the same perspectives, concerns, or even ethical foundations, so it's not reasonable to expect everyone to play by your rules. Keeping that in mind, we've found that it's usually better to reward good behavior than to try to punish bad behavior. Part of any CVD process therefore includes an appropriate level of community outreach. Incentives are important because they increase the likelihood of future cooperation between researchers and organizations.
Incentives can take many forms:
- Recognition - Public recognition is often used as a reward for "playing by the rules" in CVD.
- Gifts - Small gifts (or "swag") such as T-shirts, stickers, etc., give researchers a good feeling about the organization.
- Money - Bug bounty programs offer financial rewards for reports, in effect turning CVD into piece work.
- Employment - We have observed cases where organizations choose to hire the researchers who report vulnerabilities to them, either on a temporary (contract) or full-time basis. This is of course neither required nor expected, but having a reputation of doing so can be an effective way for a vendor to encourage positive interactions.
Improve Processes
After gathering data from a successful (or even unsuccessful) CVD effort, the organization should capture what worked well and note what failed. Two processes in particular should benefit from the feedback generated during CVD:
- The Secure Development Lifecycle
- The CVD process itself
The CVD process can create a pipeline that supports regular patching cycles and may reveal blocking issues that prevent more efficient software patch deployment. As stated previously, a CVD program allows some crowdsourcing of the security research and testing of your products; note, however, that CVD complements your own internal research and testing as part of the Secure Development Lifecycle and is not meant as a wholesale replacement.
CVD may also provide benefits beyond harm reduction practices such as software testing and patch deployment, for example by helping to:
- Reduce the creation of new vulnerabilities
- Increase pre-release testing to find vulnerabilities
For example, participation in CVD may lead to discussions between your developers and security researchers about new tools or methods, such as static analysis or fuzzing, that your organization was previously unaware of. These tools and methods can then be evaluated for inclusion in the standard development process if they prove successful at discovering bugs and vulnerabilities in your product. Essentially, CVD facilitates "field testing" of new analysis methods for finding bugs before the software is even released publicly. A minimal fuzzing sketch follows as an illustration.
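As a concrete illustration of the kind of technique such discussions might surface, here is a minimal sketch of a mutation-based fuzz loop in Python. The parse_record function is a hypothetical stand-in for any input-handling routine in your product, and the mutation strategy is deliberately simple; production fuzzing would more likely use coverage-guided tools, but the feedback loop of generating malformed inputs and triaging unexpected crashes is the same.

```python
# Minimal sketch of a mutation-based fuzz loop. parse_record is a toy parser
# standing in for real input-handling code; it is not part of any real API.
import random

def parse_record(data: bytes) -> None:
    """Toy parser with a deliberate bug: it assumes at least four header bytes."""
    length = data[3]                  # raises IndexError on inputs shorter than 4 bytes
    payload = data[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")  # an expected, handled error

def mutate(seed: bytes) -> bytes:
    """Flip, drop, or append a few random bytes in a known-good input."""
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):
        roll = random.random()
        if roll < 0.4 and data:
            data[random.randrange(len(data))] ^= 1 << random.randrange(8)  # bit flip
        elif roll < 0.7 and data:
            del data[random.randrange(len(data))]                          # delete a byte
        else:
            data.append(random.randrange(256))                             # append a byte
    return bytes(data)

if __name__ == "__main__":
    seed = bytes([0, 0, 0, 3]) + b"abc"   # well-formed input: header plus 3-byte payload
    for i in range(10_000):
        sample = mutate(seed)
        try:
            parse_record(sample)
        except ValueError:
            pass                           # expected rejection of malformed input
        except Exception as exc:           # anything else is a crash worth triaging
            print(f"crash on iteration {i}: {exc!r} input={sample!r}")
            break
```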
These five principles of CVD are just a starting point - we will explore the phases of the coordinated vulnerability disclosure process and tackle numerous other issues in future blog posts. We invite you to join the conversation on CVD with us by sending comments to cert@cert.org or on Twitter @certcc.
References
[ALHAZMI2006] O. H. Alhazmi, et al. Measuring, analyzing and predicting security vulnerabilities in software systems, Computers & Security (2006), <doi:10.1016/j.cose.2006.10.002>.
[ARORA2006] Arora, A., Nandkumar, A., & Telang, R. (2006). Does information security attack frequency increase with vulnerability disclosure? An empirical analysis. Information Systems Frontiers, 8(5), 350-362.
[BEAUCHAMP2013] Beauchamp, Tom, "The Principle of Beneficence in Applied Ethics", The Stanford Encyclopedia of Philosophy (Winter 2013 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2013/entries/principle-beneficence/>.
[BOBUKH2014] Eugene Bobukh. Equation of a Fuzzing Curve. Eugene Bobukh's blog, Microsoft Developer Network. 18 December 2014 (Part 1) and 6 January 2015 (Part 2). Accessed 19 July 2016. <https://www.microsoft.com/en-us/research/blog/12-18-14-equation-of-a-fuzzing-curve-part-1-2/>.
[MATSUDAIRA2016] Kate Matsudaira. Bad Software Architecture Is a People Problem. Communications of the ACM, Vol. 59, No. 9, September 2016. <https://cacm.acm.org/magazines/2016/9/206258-bad-software-architecture-is-a-people-problem/fulltext>.
[OZMENT2006] Andy Ozment and Stuart E. Schechter. Milk or Wine: Does Software Security Improve with Age? In Proceedings of the 15th USENIX Security Symposium, 2006. <https://www.usenix.org/legacy/event/sec06/tech/full_papers/ozment/ozment.pdf>.
[SALTZER2009] J. H. Saltzer and Frans Kaashoek. Principles of computer system design: an introduction. Morgan Kaufmann, 2009, p. 85. ISBN 978-0-12-374957-4. <https://books.google.com/books?id=I-NOcVMGWSUC&pg=PA85&hl=en#v=onepage&q&f=false>.
[WING1998] Wing, Jeannette M. A Symbiotic Relationship Between Formal Methods and Security. In Proceedings of Workshops on Computer Security, Dependability and Assurance: From Needs to Solutions (CSDA 1998). IEEE, 1998. <https://ieeexplore.ieee.org/Xplore/cookiedetectresponse.jsp>.