Category: Vulnerability Discovery

The CERT/CC Vulnerability Analysis team has been engaged in a number of community-based efforts surrounding Coordinated Vulnerability Disclosure lately. I've written previously about our involvement in the NTIA Multistakeholder Process for Cybersecurity Vulnerabilities. Today I'll highlight our ongoing work in the Forum of Incident Response and Security Teams (FIRST). We are currently active in two vulnerability-related working groups within the FIRST organization: the Vulnerability Coordination SIG (recently merged with the NTIA Multiparty Disclosure working group), and the Vulnerability Reporting and Data eXchange SIG (VRDX-SIG). At the CERT Vendor Meeting on February 29, I presented some of our current work within the VRDX-SIG. Given a number of developments in the intervening week, I'm introducing that work to a broader audience in this post.

The CERT Coordination Center (CERT/CC) has been receiving an increasing number of vulnerability reports regarding Internet of Things devices and other embedded systems. We've also been focusing more of our own vulnerability discovery work in that space. We've discovered that while many of the vulnerabilities are technically the same as in traditional IT software, the coordination process has some substantial differences that will need to be addressed as the Internet of Things grows.

On September 29, Art Manion and I attended the first meeting of the Multistakeholder Process for Cybersecurity Vulnerabilities initiated by the National Telecommunications and Information Administration (NTIA), part of the United States Department of Commerce. There has been ample coverage of the meeting in blogs (e.g., by Dr. Neal Krawetz and by Cris Thomas), mailing lists, and media reports, so I won't attempt to duplicate that information.

During the course of the meeting, I became increasingly aware that no one had actually specified who the "stakeholders" are. The name of the process implies that there is more than one, but nowhere in the accompanying materials was there a comprehensive list of who is perceived to hold a stake in this process. In this post, you'll find my take on who those stakeholders are and how the set of players has changed over the past decade.

Every day, we receive reports from various security professionals, researchers, hobbyists, and even software vendors regarding interesting vulnerabilities that they discovered in software. Vulnerability coordination--where we serve as intermediary between researcher and vendor to share information, get vulnerabilities fixed, and get those fixes out in the public eye--is a free service we provide to the world.

During the Watergate hearings, Senator Howard Baker asked John Dean a now-famous question: "My primary thesis is still: What did the president know, and when did he know it?" If you understand why that question was important, you have some sense as to why I am very concerned that "zero-day exploit capability" appears as an operative phrase in the Department of Commerce Bureau of Industry and Security (BIS) proposed rules to implement the Wassenaar Arrangement 2013 Plenary Agreements regarding Intrusion and Surveillance Items.

Recently, SuperFish and PrivDog have received some attention because of the risks that their implementation flaws introduced to customers. Looking closer into these types of applications with my trusty CERT Tapioca VM at hand, I've come to realize a few things.

In this blog post, I will explain

  1. The capabilities of SSL and TLS are not well understood by many.
  2. SSL inspection is much more widespread than I suspected.
  3. Many applications that perform SSL inspection have flaws that put users at increased risk.
  4. Even if SSL inspection were performed at least as well as the browsers do it, the risk introduced to users is not zero. (A sketch of the baseline validation involved follows this list.)
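
To make point 4 concrete, here is a minimal Python sketch of the baseline that browsers implement and that any competent SSL inspector must replicate: certificate chain validation plus hostname verification. The host name is just a placeholder.

```python
import socket
import ssl

# ssl.create_default_context() enables chain validation and hostname
# checking -- the two checks a flawed SSL inspector typically skips.
ctx = ssl.create_default_context()
with socket.create_connection(("example.com", 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
        print(tls.version(), tls.getpeercert()["subject"])
```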

Hi folks, Allen Householder here. I want to introduce some recent work we're undertaking to look at vulnerability discovery for emerging networked systems (including cyberphysical systems like home automation, networked cars, industrial control systems, and the like). In this post I cover the background and motivation for this work, our approach, and some preliminary findings. In future posts I will cover additional results from this effort.

Hi folks, it's Will. Recently I have been investigating man-in-the-middle (MITM) techniques for analyzing network traffic generated by an application. In particular, I'm looking at web (HTTP and HTTPS) traffic. There are plenty of MITM proxies, such as ZAP, Burp, Fiddler, mitmproxy, and others. But what I wanted was a transparent network-layer proxy, rather than an application-layer one. After a bit of trial-and-error investigation, I found a software combination that works well for this purpose. I'm happy to announce the release of CERT Tapioca (Transparent Proxy Capture Appliance), which is a preconfigured VM appliance for performing MITM analysis of software.
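
As an aside, if you only need scripted inspection rather than a full appliance, mitmproxy (one of the proxies named above) supports small Python addons. The sketch below illustrates that approach under recent mitmproxy releases; it is not part of Tapioca itself.

```python
from mitmproxy import http

class LogHosts:
    """Print every host and path the application under test requests."""
    def request(self, flow: http.HTTPFlow) -> None:
        print(flow.request.pretty_host, flow.request.path)

addons = [LogHosts()]
# Run in transparent mode so the target app needs no proxy settings:
#   mitmproxy --mode transparent -s log_hosts.py
```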

Hey, it's Will. I was recently working on a proof of concept (PoC) exploit using nothing but the CERT BFF on Linux. Most of my experience with writing a PoC has been on Windows, so I figured it would be wise to expand to different platforms. However, once I got to the point of controlling the instruction pointer, I was surprised to discover that there was really nothing standing in the way of achieving code execution.
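
One quick way to see why nothing stood in the way is to check the target binary's mitigations. Here is a sketch that tests for an executable stack; it assumes the pyelftools package, though any ELF parser would do, and "./target" is a placeholder.

```python
from elftools.elf.elffile import ELFFile

def stack_is_executable(path):
    """Return True if the binary's PT_GNU_STACK header requests an
    executable stack (or is absent, which old toolchains treated as
    executable)."""
    with open(path, "rb") as f:
        for seg in ELFFile(f).iter_segments():
            if seg["p_type"] == "PT_GNU_STACK":
                return bool(seg["p_flags"] & 0x1)  # PF_X bit
    return True

print(stack_is_executable("./target"))
```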

Hacking the CERT FOE

Hey folks, it's Will. Every now and then I encounter an app that doesn't play well with FOE. You don't have to throw your hands up in defeat, though. Because FOE (and BFF) are written in Python, it's pretty easy to modify them to do what you like.
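
For a flavor of what such a modification can look like, here is a hypothetical monkey-patch. The module, function, and helper names are invented for illustration and are not actual FOE internals.

```python
# Hypothetical illustration of the pattern: wrap the function that
# launches the target so each iteration gets an app-specific fixup.
import foe_runner                      # hypothetical FOE module name

_original = foe_runner.launch_target   # hypothetical hook point

def launch_with_cleanup(*args, **kwargs):
    dismiss_stray_dialogs()            # hypothetical helper you supply
    return _original(*args, **kwargs)

foe_runner.launch_target = launch_with_cleanup
```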

Hi folks, Allen Householder here. As Will Dormann's earlier post mentioned, we have recently released the CERT Basic Fuzzing Framework (BFF) v2.7 and the CERT Failure Observation Engine (FOE) v2.1. To me, one of the most interesting additions was the crash recycling feature. In this post, I will take a closer look at this feature and explain why I think it's so interesting.
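
Conceptually, crash recycling amounts to feeding crashing test cases back into the seed pool, so later mutations explore the neighborhood of known crashes. A toy sketch of the idea (not BFF's actual implementation; mutate and crashes are callbacks you supply):

```python
import random

def fuzz(seeds, mutate, crashes, iterations=10_000):
    for _ in range(iterations):
        mutant = mutate(random.choice(seeds))
        if crashes(mutant):
            seeds.append(mutant)  # the recycling step: crashers become seeds
    return seeds
```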

Hello, Jonathan Foote here. In this post I'll explain how to use information from databases in stock Ubuntu systems to gather the parameters needed to perform corpus distillation (gathering of seed inputs) and fuzzing against the installed default file type handlers in Ubuntu Desktop 12.04. This technique applies to most modern versions of Ubuntu.
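
For instance, the MIME-type-to-handler mapping on a 12.04 desktop lives in plain text. A short sketch of reading it; the path shown is one common location, so adjust as needed:

```python
def default_handlers(path="/usr/share/applications/defaults.list"):
    """Map MIME types to the .desktop files registered as defaults."""
    handlers = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line and "=" in line and not line.startswith("["):
                mime, desktop = line.split("=", 1)
                handlers[mime] = desktop
    return handlers

# e.g., default_handlers().get("application/pdf") -> "evince.desktop"
```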

Hello, this is Jonathan Spring. I've been investigating the usage of domains that are typos of other domains. For example, foogle.com is a typo of google.com, and it's a common one since 'f' is next to 'g' on the standard keyboard. The existing hypothesis has been that typo domains would be used for malicious purposes. Users would commonly mistype the domain they are going to, and some of the less scrupulous domain owners could take advantage of this to trick them or infect their computers.
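
Adjacent-key typos like this are easy to enumerate. Here is a minimal sketch that substitutes each letter with its horizontal QWERTY neighbors; real typo models also cover insertions, omissions, and transpositions.

```python
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbors(c):
    """Return the characters to the left and right of c on its row."""
    for row in ROWS:
        i = row.find(c)
        if i != -1:
            return row[max(i - 1, 0):i] + row[i + 1:i + 2]
    return ""

def typo_domains(domain):
    name, dot, tld = domain.partition(".")
    return {name[:i] + n + name[i + 1:] + dot + tld
            for i, c in enumerate(name) for n in neighbors(c)}

print("foogle.com" in typo_domains("google.com"))  # True: 'f' adjoins 'g'
```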

The WebReady and Data Loss Prevention (DLP) features in Microsoft Exchange greatly increase the attack surface of an Exchange server. In particular, Exchange running on Windows Server 2003 is especially easy to exploit.

It's public knowledge that Microsoft Exchange uses Oracle Outside In. WebReady, which was introduced with Exchange 2007, provides document previews through the use of the Oracle Outside In library. Outside In can decode over 500 different file formats and has a history of multiple vulnerabilities. See CERT vulnerability notes VU#520721, VU#103425, VU#738961, and VU#118913.

In May 2010, CERT released the Basic Fuzzing Framework, a Linux-based file fuzzer. We released BFF with the intent to increase awareness and adoption of automated, negative software testing. An often-requested feature is that BFF support the Microsoft Windows platform. To this end, we have worked to create a Windows analog to the BFF: the Failure Observation Engine (FOE). Through our internal testing, we've been able to help identify, coordinate, and fix exploitable vulnerabilities in Adobe, Microsoft, Google, Oracle, Autonomy, and Apple software, as well as many others. Our office shootout post is a good example of this testing.

Version 2.0 of the CERT Basic Fuzzing Framework (BFF) made its debut on Valentine's Day at the 2011 CERT Vendor Meeting in San Francisco. This new edition has a lot of cool features that we'll be describing in more detail in future posts, but we wanted to let you know that it's available so that you can download and try it.

Hi folks. I've been involved in a fuzzing effort at CERT. One of the ways that I've been able to discover vulnerabilities is through "dumb" or mutational fuzzing. We have developed a framework for performing automated dumb fuzzing. Today we are releasing a simplified version of that framework, called the Basic Fuzzing Framework (BFF).
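
In its simplest form, dumb fuzzing is just random byte mutation plus crash detection. The sketch below shows that core loop; BFF adds the automation around it, such as crash triage and deduplication. The seed file and target names are placeholders.

```python
import random
import subprocess

def mutate(data, rate=0.001):
    """Flip a small fraction of bytes in the seed at random."""
    buf = bytearray(data)
    for i in range(len(buf)):
        if random.random() < rate:
            buf[i] = random.randrange(256)
    return bytes(buf)

seed = open("seed.pdf", "rb").read()          # placeholder seed file
for i in range(1000):
    with open("fuzzed.pdf", "wb") as f:
        f.write(mutate(seed))
    result = subprocess.run(["./target", "fuzzed.pdf"])
    if result.returncode < 0:                 # terminated by a signal
        print("crash at iteration", i, "signal", -result.returncode)
```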

The Kill-Bit (or "killbit") is a Microsoft Windows registry value that prevents an ActiveX control from being used by Internet Explorer. More information is available in Microsoft KB article 240797. If a vulnerability is discovered in an ActiveX control or COM object, a common mitigation is to set the killbit for the control, which will cause Internet Explorer to block use of the control. Or will it?
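
Per KB 240797, the kill bit is bit 0x400 of the "Compatibility Flags" DWORD stored under the control's CLSID. A minimal sketch for checking it with Python's standard winreg module:

```python
import winreg

KILLBIT = 0x00000400  # documented in Microsoft KB 240797

def killbit_set(clsid):
    """Return True if Internet Explorer's kill bit is set for a CLSID,
    e.g. "{00000000-0000-0000-0000-000000000000}"."""
    path = (r"SOFTWARE\Microsoft\Internet Explorer"
            r"\ActiveX Compatibility" + "\\" + clsid)
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            flags, _ = winreg.QueryValueEx(key, "Compatibility Flags")
            return bool(flags & KILLBIT)
    except OSError:
        return False  # no registry entry means no kill bit
```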

Hi, it's Will. As previously mentioned, we have been investigating and discovering ActiveX vulnerabilities over the past few years. Today we released the Dranzer tool that we have developed to test ActiveX controls.

We've been using the Dranzer ActiveX fuzz testing tool for over three years, and we've found a large number of vulnerabilities with it. I've tagged a few of the US-CERT Vulnerability notes with the "Dranzer" keyword to show the sort of vulnerabilities we've been discovering with the tool.

Hi, this is Chad Dougherty of the Vulnerability Analysis team. One of the important roles that our team plays is coordinating vulnerability information among a broad range of vendors. Over the years, we have gained a considerable amount of experience communicating with vendors of all shapes and sizes. Based on this experience, we can offer some guidance to vendors about communicating product security issues.

Hello, it's Ryan. We've noticed a misconception about IPv6 that is popular on the internet: that IPv6 addresses are hard to ping sweep because there are so many possible addresses. Ping sweeping can lead to port scanning, so this misconception is viewed as a security feature. In this post, I'll prove that, while it won't work across the internet, ping sweeping on the local network is easier in IPv6 than in IPv4.
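
The trick is the all-nodes link-local multicast address, ff02::1: a single ping elicits a reply from every IPv6 host on the segment, no address-space enumeration required. A sketch using the Linux iputils ping6; the interface name is a placeholder.

```python
import subprocess

def ipv6_neighbors(interface="eth0"):
    """Ping ff02::1 and collect the link-local addresses that answer."""
    out = subprocess.run(
        ["ping6", "-c", "2", "-I", interface, "ff02::1"],
        capture_output=True, text=True).stdout
    # Reply lines look like: "64 bytes from fe80::...%eth0: icmp_seq=1 ..."
    return {line.split()[3].rstrip(":")
            for line in out.splitlines() if "bytes from" in line}

print(ipv6_neighbors())
```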