Heartbleed: Q&A
The Heartbleed bug, a serious vulnerability in the OpenSSL cryptographic software library, enables attackers to steal information that, under normal conditions, is protected by the Secure Sockets Layer/Transport Layer Security (SSL/TLS) encryption used to secure the internet. Heartbleed left many questions in its wake:
- Would the vulnerability have been detected by static analysis tools?
- If the vulnerability has been in the wild for two years, why did it take so long to come to public attention?
- Who is ultimately responsible for open-source code reviews and testing?
- Is there anything we can do to work around Heartbleed to provide security for banking and email web browser applications?
In late April 2014, researchers from the Carnegie Mellon University Software Engineering Institute and Codenomicon, one of the cybersecurity organizations that discovered the Heartbleed vulnerability, participated in a panel to discuss Heartbleed and strategies for preventing future vulnerabilities. During the panel discussion, we did not have enough time to address all of the questions from our audience, so we transcribed the questions and panel members wrote responses. This blog posting presents questions asked by audience members during the Heartbleed webinar and the answers developed by our researchers. (This webinar is no longer available.)
I have been a software vulnerability analyst with the CERT Coordination Center (CERT/CC) since 2004 with a focus on web browser technologies, ActiveX, and fuzzing. In addition to my own responses, answers to audience questions from the Heartbleed panel discussion are provided by:
- Brent Kennedy is a member of the CERT Cyber Security Assurance team focusing on penetration testing operations and research. Kennedy leads an effort that partners with the Department of Homeland Security's National Cybersecurity Assessments and Technical Services (NCATS) team to develop and execute a program that offers risk and vulnerability assessments to federal, state, and local entities.
- Jason McCormick has been with SEI Information Technology Services since 2004 and is currently the manager of network and infrastructure engineering. He oversees datacenter, network, storage, and virtualization services and plays a key role in information security policy, practices, and technologies for the SEI.
- William Nichols joined the SEI in 2006 as a senior member of the technical staff and serves as a Personal Software Process (PSP) instructor and Team Software Process (TSP) mentor coach in the Software Solutions Division at the SEI.
- Robert Seacord is a senior vulnerability analyst in the CERT Division at the SEI, where he leads the Secure Coding Initiative. Seacord is the author of The CERT C Secure Coding Standard (Addison-Wesley, 2014) and Secure Coding in C and C++ (Addison-Wesley, 2013) as well as co-author of two other books. He is also an adjunct professor at Carnegie Mellon University.
Attendee: Did anyone in the information security industry have suspicions about the security of OpenSSL before the Heartbleed story broke in the media?
Will Dormann: Both Google and Codenomicon had investigated OpenSSL and discovered the Heartbleed vulnerability before its public release. Whether OpenSSL was specifically targeted is unclear. It is likely that a number of SSL/TLS libraries were tested, and OpenSSL happened to behave unexpectedly because of the vulnerability.
Attendee: It seems as though the attacks did not start, as far as we know, until after the vulnerability was publicly announced. Has there been any effort to create avenues to distribute patches to vulnerabilities like this prior to publicly announcing the vulnerability?
Will Dormann: We do not know when the attacks started. Before Heartbleed was publicly disclosed, we did not even know what to look for in an attack. Therefore, it is possible that the vulnerability was being attacked as early as two years ago, when the vulnerability was first introduced. The CERT Coordination Center (CERT/CC) offers support in coordinating vulnerabilities among affected vendors before public release. This minimizes the rushed efforts required by software vendors to produce updates after public disclosure. In this case, the CERT/CC was not involved in the pre-disclosure coordination of the OpenSSL vulnerability.
Attendee: I'm curious; they said this vulnerability has been in the wild for two years. Why did it take so long to bring this to public knowledge? It looks fishy.
Will Dormann: With any vulnerability that is discovered, there is some delay between its introduction and its discovery. It is quite common for vulnerabilities to go unnoticed for years.
Attendee: Is the code part of the browser or part of the server-side platform? If the browser, is there anything we can do to work around Heartbleed to provide security for banking and email web browser applications?
Brent Kennedy: Heartbleed mostly affects server-side applications, but some client applications reportedly are affected (and most likely more will be found). For vulnerable servers, the vulnerability exists on the server itself, not the client accessing it. No major browsers use OpenSSL, so whether you are safe when accessing email or online banking depends on your provider. While most banks reported no issues, it is worthwhile to check their status.
Attendee: What is involved in "fully" patching this vulnerability at each impacted company? This seems to be a gray area at present.
Jason McCormick: There is a black-and-white answer for the concept of being fully patched from the perspective of eliminating the acute vulnerability. That is simply to upgrade to a version of OpenSSL that is not vulnerable to Heartbleed, or to upgrade the software system that uses OpenSSL as a component to a version that addresses the Heartbleed vulnerability. That OpenSSL upgrade "fully patches" the issue.
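As a quick sanity check, an application can report both the OpenSSL headers it was built against and the library it actually loads at run time; after an in-place upgrade the two can differ, and the runtime version is what matters. The following minimal sketch uses the version APIs from the OpenSSL 1.0.x era (Heartbleed affected OpenSSL 1.0.1 through 1.0.1f and was fixed in 1.0.1g):

```c
#include <stdio.h>
#include <openssl/crypto.h>
#include <openssl/opensslv.h>

/* Build with: cc check_version.c -lcrypto */
int main(void)
{
    /* Version string baked into the headers at compile time. */
    printf("Built against: %s\n", OPENSSL_VERSION_TEXT);

    /* Version string of the library actually linked at run time;
     * this is the one that determines whether you are vulnerable. */
    printf("Running with:  %s\n", SSLeay_version(SSLEAY_VERSION));
    return 0;
}
```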
The big gray area is what to do next. There is no one-size-fits-all solution here; unfortunately, organizations will have to make decisions based on their own risk tolerances and costs.
Every organization should be re-issuing certificates on Internet-facing systems that were vulnerable to Heartbleed as quickly as possible. The potential compromise of private key material that could be used for decryption of captured data or the impersonation of sites makes this an important consideration.
Organizations should consider their risk/cost trade-offs for their insider risk as well. For example, a university-style situation, where you have a large, heterogeneous user base with a culture of limited controls for open access, will have a much different risk/cost analysis for changing internal certificates than a corporation with a well-controlled, well-known user population with strict internal controls.
Finally, you have to conduct a risk assessment of the other contents of the server or service that was affected by Heartbleed:
- Are passwords in play for a web application?
- What is the risk of compromised account information?
- What other information was the server/service working on such that fragments of it may have been in memory?
Only by a thoughtful analysis can each organization determine what its next steps, if any, should be. That calculation will be different for every organization.
Attendee: For our personal home computers, how do we get the updates to eliminate the Heartbleed vulnerability?
Jason McCormick: Unless you are a home hobbyist running a server affected by Heartbleed, there is nothing that home users need to do with their computers other than always keeping their software updated.
The most important step that individual users can take is changing their passwords on the services they consume such as webmail, social media, banks, etc. Most large companies have announced publicly (to some degree) whether they were affected and issued recommendations for their services.
It is always a good idea to perform a regular change of your passwords regardless, so now might be a good time to do it. Additionally, even if a service you use was not affected by Heartbleed, if you used the same password for multiple sites and services, one of them may have been compromised, which means they are all compromised.
Yes, there is value in knowing whether or not an organization has re-issued its certificates, but that is too complicated for the average home user to verify; in practice, there is no reasonable way to know whether an organization has re-issued its certificate. By this time, we hope most organizations have done the right thing for their users.
Finally, many larger online services are offering two-factor authentication (TFA), which requires a PIN-like code in addition to a password. This is accomplished using an authenticator client on a computer or smartphone or an SMS text-message-based system. As part of your login process, you enter your username and password and then, on the following screen, the numeric code displayed by the authenticator app or received by text message. Anywhere and everywhere this service is offered, users should take advantage of it. The use of TFA for logins greatly mitigates the risk of compromised and weak passwords.
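To make the mechanism concrete, the sketch below computes a time-based one-time password (TOTP, RFC 6238), which is what most authenticator apps display. It is an illustration only: the secret handling is simplified (real apps provision and Base32-decode a shared key), and it borrows HMAC from OpenSSL's libcrypto.

```c
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* RFC 6238 TOTP: HMAC-SHA-1 over the 30-second time-step counter,
 * dynamically truncated to a six-digit code. */
unsigned totp(const unsigned char *secret, size_t secret_len, time_t now)
{
    uint64_t counter = (uint64_t)now / 30;   /* 30-second time step */
    unsigned char msg[8], mac[20];
    unsigned int mac_len = 0;

    for (int i = 7; i >= 0; i--) {           /* counter as big-endian bytes */
        msg[i] = counter & 0xff;
        counter >>= 8;
    }
    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof msg, mac, &mac_len);

    unsigned off = mac[19] & 0x0f;           /* dynamic truncation (RFC 4226) */
    uint32_t bin = ((uint32_t)(mac[off] & 0x7f) << 24) |
                   ((uint32_t)mac[off + 1] << 16) |
                   ((uint32_t)mac[off + 2] << 8) |
                    (uint32_t)mac[off + 3];
    return bin % 1000000;                    /* six decimal digits */
}

int main(void)
{
    /* Test key from RFC 6238: the ASCII bytes "12345678901234567890". */
    const unsigned char secret[] = "12345678901234567890";
    printf("%06u\n", totp(secret, 20, time(NULL)));
    return 0;
}
```

Because the code changes every 30 seconds, a password stolen through a bug like Heartbleed is not sufficient by itself to log in.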
Attendee: Shouldn't organizations also check their applications that act as Secure Sockets Layer/Transport Layer Security clients -- whether those are desktop or web applications, developed in-house or externally -- if they use outdated versions of OpenSSL? This vulnerability can also be exploited against clients, not just servers. Couldn't the memory of those client applications also contain sensitive information that could be stolen if they connect to a malicious or compromised server?
Brent Kennedy: The short answer is "yes," although the server-side vulnerability carries a greater risk. Exploiting Heartbleed via a client-side application would be a multi-step attack: the attacker would have to stand up a malicious SSL/TLS server and trick the user into visiting that server using a vulnerable client. In the event that it does happen, memory would be dumped from the user's host machine. This could contain anything actively being processed on the computer, not just data related to the specific client application.
Jason McCormick: It is always good practice to keep all systems updated, including clients. While it is theoretically possible to attack a pure client using Heartbleed, most attacks are not practical en masse, both because browsers do not use OpenSSL (Internet Explorer uses the Microsoft crypto libraries; Firefox and Chrome use Network Security Services [NSS]) and because attacking clients with Heartbleed would require an initial compromise, such as phishing a person into connecting to a malicious site.
Attendee: You mentioned that website owners might want to get new SSL certificates and revoke the old ones. But how should they mitigate the fact that browsers and most TLS clients have broken certificate revocation checking: they soft-fail when they don't get an Online Certificate Status Protocol (OCSP) response and accept the connection anyway. This means a man-in-the-middle attacker who obtains a certificate through Heartbleed can impersonate the site to users, possibly indefinitely, even if the old certificate is revoked, by also blocking OCSP responses to those users.
Will Dormann: It is true that a certificate revocation may not be honored by a client application. However, that is no reason to skip the revocation in the first place. For more details, see https://news.netcraft.com/archives/2014/04/24/certificate-revocation-why-browsers-remain-affected-by-heartbleed.html.
Jason McCormick: The short answer here is you can't, at least not practically, without both increasing the industry-wide robustness of OCSP services and implementing different default behaviors in the browsers. OCSP soft-fail is a deliberate behavior choice by browser makers because hard fail would cause a serious disruption to many user experiences (which is a debate for another time).
OCSP stapling (or TLS Certificate Status Request Extension) is a great step forward here, but unfortunately it is not widely implemented yet. Thought leaders in the IT world need to be pushing concepts and technologies like OCSP stapling forward at every opportunity.
Attendee: Obviously, Heartbleed exposes a number of flaws in our security infrastructure (e.g., OpenSSL is being maintained by a very small number of people). I'd like to hear some about how the panelists view the resiliency of certificate authentication when stressed by something like Heartbleed.
Jason McCormick: I can't say that I agree with the opening statement of this question, that Heartbleed exposes any fundamental flaw in the architecture of our varying security infrastructures. While the Heartbleed vulnerability is serious and pervasive, it is fundamentally a coding mistake. Heartbleed is not revealing a protocol weakness such as BEAST, CRIME, or the renegotiation attacks.
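To make that mistake concrete, below is a self-contained model of the flawed pattern (hypothetical function and variable names; the actual code is in OpenSSL's tls1_process_heartbeat() and dtls1_process_heartbeat()). The entire vulnerability comes down to trusting a peer-supplied length field without checking it against the number of bytes actually received:

```c
#include <stdlib.h>
#include <string.h>

/* Sketch: build a heartbeat response from a received record.
 * rec points to the raw record; rec_len is how many bytes arrived. */
unsigned char *build_heartbeat_response(const unsigned char *rec,
                                        size_t rec_len, size_t *out_len)
{
    const size_t padding = 16;            /* RFC 6520 minimum padding */

    if (rec_len < 1 + 2 + padding)        /* too short to be well formed */
        return NULL;

    /* Bytes 1-2 carry the peer-claimed payload length:
     * attacker-controlled, up to 65535. */
    size_t payload = ((size_t)rec[1] << 8) | rec[2];

    /* THE missing check. Without this line, the memcpy below copies
     * `payload` bytes even when the peer sent far fewer, returning up
     * to ~64 KB of adjacent heap memory to the attacker. */
    if (1 + 2 + payload + padding > rec_len)
        return NULL;                      /* silently discard, per RFC 6520 */

    unsigned char *resp = malloc(1 + 2 + payload + padding);
    if (resp == NULL)
        return NULL;

    resp[0] = 2;                          /* heartbeat response type */
    resp[1] = (unsigned char)(payload >> 8);
    resp[2] = (unsigned char)(payload & 0xff);
    memcpy(resp + 3, rec + 3, payload);   /* now bounded by rec_len */
    /* (The real code also fills the padding bytes with random data.) */

    *out_len = 1 + 2 + payload + padding;
    return resp;
}
```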
As for OpenSSL being maintained by a small number of people, that is very true, but it sounds as if plans are in the works to change that through The Linux Foundation. This is great news and hopefully will lead to faster and better evolution of the OpenSSL system. The interesting issue that persists though is how you find quality cryptographers who are also good programmers and who can do work on OpenSSL. This is a hard combination to come by and I hope the financing that is expected to flow to OpenSSL can overcome some of these challenges. This is a very unfortunate bug, and I am sure more bugs will be found in this software just as all other software has bugs. I do not think, however, that OpenSSL is fundamentally broken such that it should be abandoned or considered fundamentally flawed.
Additionally, Heartbleed has nothing to do with certificate authentication. It is an important point that the Heartbleed vulnerability itself, while damaging, is limited to a particular function of the TLS protocol used for "keepalive" checks and for path MTU discovery for DTLS connections. X.509 certificate authentication and authorization is an entirely different function within the TLS protocol specification; it is used during the handshake phase of TLS session establishment to check identity and establish the encrypted transport session. Certificate authorities and the related constellation of technologies and protocols, some of which do have some interesting challenges, are entirely separate from TLS and OpenSSL.
Attendee: A mature process through the entire software development cycle is essential to reducing this type of vulnerability. Can a mature process that will catch defects like Heartbleed dovetail with an Agile software development approach?
Bill Nichols: The short answer is "yes." There is no wide agreement on the specific practices in Agile, but it is generally agreed that Agile involves delivering value to the user. Vulnerabilities deliver negative value and therefore are anti-Agile by definition.
Attendee: In my 25+ years of consulting in programming circles, MANY large and well-known corporations are merely "maintaining" code (adding Band-Aids and enhancements), not building from scratch. That aside, developers should not be the main source of evaluation of the resulting code -- for adherence to standards, injection of vulnerabilities, etc.
Bill Nichols: If developers are maintaining or enhancing code, they should take responsibility for any changes they introduce. On many old code bases, this is hard work indeed. Excessive change is dangerous because changes often introduce new problems. The developers should not be the last line of defense, but they should be among the first and take personal responsibility not to allow vulnerabilities to escape. The tools checking code after development are absolutely essential, but very imperfect. The only way to get clean code out is to put clean code in. Moreover, someone must review each and every find from those tools.
Attendee: Given this issue has existed in practice for many years, what would the panelists suggest are lessons for the various positions in the ecosystem who all missed this: (1) open-source code reviewers, (2) component integrators, (3) testers, (4) auditors?
Bill Nichols: Reviewing code is hard. The following practices have been known to work:
- Review only 200 to 300 lines in a single sitting.
- Use a checklist of items.
- Review the entire section of code for a single item from your checklist before moving on.
- Write your own checklist for review so that you recognize the problem in the code immediately when you see it.
- You must inspect the code, not merely read it. If you take less than an hour for 200 lines of code, you have probably gone too fast.
Many separate studies have found that review rates of more than 200 lines per hour are not very effective. For many code bases, it is likely that the inspector will, on average, find fewer than one issue per hour. This low rate contributes to making inspection difficult to perform because it feels slow and unrewarding. Nonetheless, that find rate is many times faster than integration or system test.
For integrators and auditors, I recommend running compilers with all checks turned on. Follow up with a low-cost static check tool such as SonarQube, then run two or more proprietary static analysis tools, for example, Coverity and CAST. Our experience has been that using multiple analysis tools is better than relying on a single product because analysis tools tend to have limited overlap in their results. Resolve each and every issue. There will be a large portion of false positives. If you cannot afford to resolve all the issues, you should recognize that 10 to 15 percent of the findings are likely to be real defects. Oddly enough, some people have found that the lower the total number of findings (true and false positives combined), the lower the ratio of true positives to false positives.
You should definitely consider dynamic checking tools such as fuzzers. You should re-inspect any module in which you discover a defect in test.
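For C code, one accessible way to apply that advice is an in-process fuzzing harness run under AddressSanitizer; the sanitizer matters because a Heartbleed-style out-of-bounds read does not normally crash on its own, so plain fuzzing can miss it. This is a sketch only (parse_heartbeat is a hypothetical stand-in for whatever parser you want to exercise, and libFuzzer itself postdates Heartbleed's discovery):

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical parser under test: substitute your own record parser. */
extern void parse_heartbeat(const uint8_t *data, size_t size);

/* Minimal libFuzzer-style harness. The fuzzing engine calls this entry
 * point repeatedly with mutated inputs. Build with:
 *   clang -g -fsanitize=fuzzer,address harness.c parser.c
 * AddressSanitizer converts silent over-reads into detectable faults. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
{
    if (size < 3)               /* too short to be a heartbeat record */
        return 0;
    parse_heartbeat(data, size);
    return 0;
}
```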
Robert Seacord: There are many lessons here. One is that developers, reviewers, and auditors should make sure they have an up-to-date and comprehensive knowledge of secure coding. The SEI provides secure coding training and numerous books have been written on the subject.
Attendee: Robert, is it realistic to burden software developers with an ever-increasing set of things they need to worry about rather than building the solutions to these problems into their languages and tools?
Robert Seacord: I am not sure if it is realistic, but because developers are the last line of defense between the languages and tools they are using and deploying vulnerable products, they need to shoulder the burden. We are heavily involved in language standards committee work to try to help improve the inherent security of these languages. You can look at David Keaton's blog post to see some of the specific improvements we have made in C11.
Bill Nichols: Mistake-proofing the development environment is desirable, but Robert described some barriers. Moreover, a seemingly safer environment can lead to compensating behavior (i.e., the Peltzman effect) that can undermine the improvements. Regardless of improvements to the environment, developers must hold themselves to a high standard. Our experience suggests that a large portion of vulnerabilities result from mistakes programmers make routinely but that can be found and removed with some discipline. Developers need to learn and apply the meta-tools required to do good work. They need to understand and use sound design principles, sound coding practices, and effective review, inspection, and test. If these techniques are applied diligently, the exposure can be reduced by a factor of between 5 and 20 without adding overall cost to development.
Attendee: Shouldn't SSL_malloc()'s return be checked to be a valid pointer prior to using it as destination for memcpy? What if SSL_malloc() returned a NULL on memory exhaustion?
Robert Seacord: Yes. This is almost certainly a violation of ERR33-C. Detect and handle standard library errors.
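A minimal sketch of the conforming pattern (illustrative names, not the actual OpenSSL patch): check the allocation before using it as the memcpy destination, since passing a null pointer to memcpy is undefined behavior.

```c
#include <stdlib.h>
#include <string.h>

/* ERR33-C sketch: detect and handle an allocation failure rather than
 * letting a NULL destination reach memcpy. */
int copy_payload(const unsigned char *pl, size_t payload,
                 size_t padding, unsigned char **out)
{
    unsigned char *buffer = malloc(1 + 2 + payload + padding);
    if (buffer == NULL)
        return -1;                      /* memory exhausted: report it */
    memcpy(buffer + 3, pl, payload);    /* safe: buffer is known valid */
    *out = buffer;
    return 0;
}
```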
Attendee: Is C++ a suitable language for writing security-critical code or are there more secure languages available that would avoid problems such as this?
Robert Seacord: There are languages that are less susceptible than C and C++ to reading or writing memory outside of the bounds of an object, as occurred in Heartbleed. Whether or not these languages are appropriate for a particular system depends on a number of factors including the type of application, existing code, and the knowledge and skills of the developers. Networking applications are frequently written in C because of the need to optimize performance and because of the bit level manipulations frequently required.
Attendee: Had the OpenSSL coders used something like a SafeC implementation, do you think the Heartbleed bug would not have occurred?
Robert Seacord: Probably not, but OpenSSL might not have been widely adopted had it not been written in a language for which compilers and tools were widely available. Language is part of the issue, but there is no such thing as a secure language.
Attendee: Has anyone determined if the vulnerability would have been detected by static analysis tools?
Robert Seacord: The vulnerability was not detected by static analysis, either because the analysis was not performed or because many analysis tools, such as Coverity, could not have detected the problem given the number of levels of indirection between inputting the tainted value and using it. Many static analysis tools (including Coverity) can now detect this vulnerability, particularly if the code is annotated so that the tool is aware that certain macros and functions return tainted inputs.
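To illustrate what those levels look like, here is a sketch with hypothetical names (n2s imitates OpenSSL's byte-swapping macro): the tainted length crosses a macro expansion and a function boundary before reaching the memcpy sink, which is exactly the kind of indirection that defeats shallow taint tracking.

```c
#include <string.h>

/* Imitation of OpenSSL's n2s macro: read a 16-bit big-endian length
 * and advance the cursor past it. */
#define n2s(p, len) ((len) = ((size_t)(p)[0] << 8) | (p)[1], (p) += 2)

/* Level 1: taint enters via the macro and escapes in a return value. */
size_t read_length(const unsigned char **p)
{
    size_t len;
    n2s(*p, len);
    return len;
}

/* Level 2: by the time memcpy sees the length, it has crossed a macro
 * and a call boundary; an analyzer must track taint through both. */
void process_record(unsigned char *dst, const unsigned char *rec)
{
    const unsigned char *p = rec + 1;   /* skip the message-type byte */
    size_t payload = read_length(&p);
    memcpy(dst, p, payload);            /* sink: unchecked tainted length */
}
```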
Attendee: A majority of U.S. federal, civilian, DoD, and Intel websites have unknown anomalies that are potentially very similar to the OpenSSL issue. The June deadline for FedRAMP, which mandates certification prior to Authority to Operate (ATO), is quickly approaching. What actions should the U.S. federal government (including the intelligence community) be taking to prevent similar unknown vulnerabilities and anomalies? What differentiated this particular proprietary fuzzing tool (rather than the open-source tools that did not find the anomaly) such that it found Heartbleed? Should US-CERT be advising the U.S. federal government to redirect resources toward finding anomalies, using a proprietary tool, to prevent hacking of their networks and to address zero-day vulnerabilities prior to ATO?
Robert Seacord: In general, there is no one tool that is likely to catch all possible problems. The best solution is to use a collection of dynamic and static analysis tools (and to develop the code securely to begin with). SEI/CERT's approach is to use the Source Code Analysis Laboratory (SCALe) to provide conformance testing of source code against secure coding standards using a variety of static analysis tools. This is consistent with the requirements of Section 933 of the FY13 National Defense Authorization Act.
Attendee: Could you recommend some static program analysis tools?
Robert Seacord: As an FFRDC, we cannot endorse any tools. In general, static analysis tools tend to have non-overlapping capabilities, so you may need to use more than one. In SCALe, we use Coverity, FindBugs, Fortify, and Eclipse to analyze Java code and Coverity, Fortify, LDRA, Microsoft Visual C++, PC-lint, GCC, and Compass/ROSE for C language systems.
Attendee: What are the implications of the general use of fuzzing tools for the U.S. federal government? Should a minimum standard be established for what a fuzzing tool should accomplish to ensure no zero-day vulnerabilities?
Will Dormann: CERT works with vendors to encourage them to use fuzzing tools. If they do not fuzz on their own, somebody else will, and they may discover vulnerabilities using that technique. It would be nice if there were some sort of standard for fuzzing robustness. However, enforcing a requirement can be difficult. How can one objectively measure the amount of fuzzing that an application or library has endured? There are so many variables at play that it is not as simple as saying that "Application X withstood Y number of fuzzing iterations without crashing."
The vision of ensuring no zero-day vulnerabilities is a worthy goal, but perhaps never achievable. Also consider that some vulnerabilities are discovered without the use of fuzzing at all. The idea of fuzzing metrics has been batted around in the past (http://dankaminsky.com/2011/03/11/fuzzmark/); however, it appears that not much progress has been made in this area.
Attendee: There's been a lot of chatter about open source being the problem. Do the panelists think a closed-source solution would've fared any better? There seem to be plenty of vulnerabilities regardless of open/closed status, so shouldn't the lesson learned be that regardless of open/closed status, we need to do a better job of coding our software securely and continuing to test it even after it's released?
Will Dormann: Open-source software is not inherently less secure or more secure than closed-source software. Vulnerabilities can be discovered without the availability of source code. The quality of the code being written is what affects the quality of the applications and libraries that we use.
Looking Ahead
As the answers above demonstrate, Heartbleed is fundamentally a coding mistake and one that could have been prevented. Through open exchanges like this, we hope to prevent future vulnerabilities. We welcome your feedback in the comments section.