Finding Android SSL Vulnerabilities with CERT Tapioca
Hey, it's Will. In my last blog post, I mentioned the release of CERT Tapioca, an MITM testing appliance. CERT Tapioca has a range of uses. In this post, I describe one specific use for it: automated discovery of SSL vulnerabilities in Android applications.
As mentioned in my previous blog post, one of the uses of CERT Tapioca is discovering applications that fail to properly validate SSL certificates for HTTPS connections. As a proof-of-concept experiment, I took an Android phone and loaded some apps onto it. By bridging the "inside" network adapter of Tapioca to a wireless access point, I was able to create a WiFi hotspot that would automatically attempt to perform MITM attacks on any associated client.
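How the bridge is built depends on the hardware in play. If the access point is a hostapd-driven wireless adapter on the Tapioca machine itself, one plausible setup looks like the following sketch; the interface names, bridge name, and hostapd configuration path are assumptions for illustration, not necessarily the exact Tapioca configuration:

# Bridge Tapioca's "inside" interface (eth1 here) with a wireless AP
# interface (wlan0). hostapd attaches wlan0 to the bridge itself when
# hostapd.conf contains:  interface=wlan0  and  bridge=br0
brctl addbr br0
brctl addif br0 eth1
ip link set br0 up
hostapd /etc/hostapd/hostapd.conf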
Using a physical phone worked fine, and I was able to easily and quickly test a handful of apps. The problem is that this sort of testing environment doesn't scale. The Google Play store currently has about 1.3 million applications, of which about 1 million are free. If it takes me 60 seconds to test each application, and if I've done my math correctly, it would take me a bit over 8 years to test each free Android application, assuming that I put in a 40-hour week for 52 weeks a year. While it was fun to test a handful of applications, I'm pretty sure that I would get bored before I made it through all of them. And I'd like to think that there are more valuable uses of my time.
Automation to the Rescue
Computers are great at performing tedious, boring work, so why not let them do it? How can we automate the testing of Android applications?
First, I started with the Android Emulator that comes with the Android SDK. I installed it in a Linux virtual machine and created an Android virtual device. Because ARM Android is emulated rather than virtualized, it's very slow. So after the Android Virtual Device (AVD) completely powered up, I took a snapshot of the powered-on Linux virtual machine that it was running in.
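For reference, the one-time setup was roughly along these lines. The SDK target ID, VMX path, and snapshot name are placeholders rather than the exact values from my environment; the AVD name ssltest is the one referenced later in this post:

# Inside the Linux VM: create the Android virtual device and boot it
android create avd --name ssltest --target android-19
emulator -avd ssltest &

# On the virtualization host, once the AVD has completely powered up,
# snapshot the running Linux VM so it can be reverted before each test
vmrun -T ws snapshot /vms/android-emulator/android-emulator.vmx avd-running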
I also had an instance of CERT Tapioca providing network connectivity to the Android Emulator VM. The inside network adapter of Tapioca was connected to the same virtual network as the adapter for the Android Emulator VM.
With that done, I wanted to control the AVD as well as the Linux OS running it. I ended up using vmrun, which is based on the VIX API, since I'm using VMware for virtualization. With vmrun, I can control a lot. In particular, I'm using:
- vmrun stop
- vmrun revertToSnapshot
- vmrun start
- vmrun runProgramInGuest
- vmrun copyFileFromHostToGuest
- vmrun copyFileFromGuestToHost
If you wanted to abstract the virtualization automation layer, I'm sure you could use something like libvirt. Using the above commands in conjunction with the Android adb command and helper scripts, I can pretty easily do the following (a rough sketch of the glue script appears after this list):
- Restart the Tapioca capture using ~/restartcap.sh. (Launch it in an xterm if you want to see the terminal.)
- Revert the Linux VM to the point where the pristine instance of the AVD is running.
- Copy an APK to the Linux VM.
- Install the APK into the AVD using adb.
- Launch the installed app, using aapt to extract the package and launchable activity from the newly installed APK and adb to invoke that activity.
- Grab the pcap and flows.log files from Tapioca.
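Tying those steps together, the glue script on the virtualization host might look roughly like the following. This is a sketch: the vmrun subcommands, restartcap.sh, and the aapt/adb invocations come from the steps above, but the VMX path, snapshot name, guest credentials, result locations, pcap file name, and the use of ssh/scp to reach Tapioca are all assumptions made for illustration.

#!/bin/bash
# Sketch of the per-APK test loop, run on the virtualization host.
VMX="/vms/android-emulator/android-emulator.vmx"
SNAPSHOT="avd-running"
GUEST_CRED=(-gu tester -gp testerpassword)

for APK in apks/*.apk; do
    # 1. Restart the Tapioca capture
    ssh tapioca '~/restartcap.sh' &

    # 2. Revert the Linux VM to the pristine, powered-on AVD
    vmrun -T ws stop "$VMX" hard 2>/dev/null || true
    vmrun -T ws revertToSnapshot "$VMX" "$SNAPSHOT"
    vmrun -T ws start "$VMX"

    # 3. Copy the APK into the Linux VM
    vmrun -T ws "${GUEST_CRED[@]}" copyFileFromHostToGuest "$VMX" \
        "$APK" /home/tester/app.apk

    # 4. Install the APK into the AVD with adb
    vmrun -T ws "${GUEST_CRED[@]}" runProgramInGuest "$VMX" \
        /usr/bin/adb install /home/tester/app.apk

    # 5. Use aapt (against the host's copy of the APK) to extract the
    #    package and launchable activity, then launch it with adb
    PKG=$(aapt dump badging "$APK" | awk -F"'" '/^package: name=/ {print $2}')
    ACT=$(aapt dump badging "$APK" | awk -F"'" '/^launchable-activity:/ {print $2}')
    vmrun -T ws "${GUEST_CRED[@]}" runProgramInGuest "$VMX" \
        /usr/bin/adb shell am start -n "$PKG/$ACT"

    # Give the app time to generate network traffic
    sleep 60

    # 6. Grab the pcap and flows.log files from Tapioca
    OUT="results/$(basename "$APK" .apk)"
    mkdir -p "$OUT"
    scp tapioca:flows.log tapioca:ssltest.pcap "$OUT/"
done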
Lather, rinse, repeat. I can do this procedure for every APK file that I have. But this approach is still boring. I will find only applications that chat insecurely over HTTPS with no user interaction. We can do better.
The Android SDK has two tools that can help with UI automation:
- MonkeyRunner - This tool runs a scripted set of UI operations.
- Monkey - This tool runs random UI operations.
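For example, Monkey can be pointed at the package under test to generate pseudo-random input; the package name and event count below are purely illustrative:

# Send 500 pseudo-random UI events to the app, pausing 250 ms between events
adb shell monkey -p com.example.targetapp --throttle 250 -v 500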
To help MonkeyRunner tease out SSL bugs, I modified my AVD to include a directional pad. To make this modification, I edited my ~/.android/avd/ssltest.avd/config.ini file to include the following line:
hw.dPad=yes
With a directional pad enabled, I can write a MonkeyRunner Python script with instructions like the following:
from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

# Connect to the first available device or emulator
device = MonkeyRunner.waitForConnection()

# Move through the form fields with the d-pad and type test credentials
device.press('KEYCODE_DPAD_CENTER', MonkeyDevice.DOWN_AND_UP)
device.press('KEYCODE_DPAD_DOWN', MonkeyDevice.DOWN_AND_UP)
device.type('asdf1234-user')
device.press('KEYCODE_DPAD_CENTER', MonkeyDevice.DOWN_AND_UP)
device.press('KEYCODE_DPAD_DOWN', MonkeyDevice.DOWN_AND_UP)
device.type('qwer5678-pass')
device.press('KEYCODE_ENTER', MonkeyDevice.DOWN_AND_UP)
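The script is run against the emulator with the monkeyrunner tool from the SDK's tools directory; the script file name below is just an example:

monkeyrunner ssltest_ui.py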
The idea here is that if we run a simple app that has username and password fields, our automation script might get lucky and cause the application to submit data over the network, possibly using an HTTPS connection without validating SSL certificate chains. Here's a time-compressed video of our automated AVD in action:
And here's a time-compressed video of the connected CERT Tapioca machine. (View fullscreen HD to see the full details.)
As you can see in the last capture, the app under test transmitted data over an HTTPS connection without a valid SSL certificate chain. This application is flagged as vulnerable.
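The flagging step itself can be simple: check whether any HTTPS request made it into the saved flow log despite the invalid certificate chain presented by the proxy. Here is a minimal sketch, assuming the flow log can be replayed with mitmdump; the https:// grep is a crude heuristic for illustration, not necessarily the exact check used:

# Replay the saved flows and look for any HTTPS request that succeeded
# despite the invalid certificate chain presented by the MITM proxy
if mitmdump -n -r flows.log 2>/dev/null | grep -q 'https://'; then
    echo "FLAGGED: app sent data over HTTPS without validating the certificate chain"
fi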
I've been performing this automated testing for a couple of weeks. It's currently testing only one application at a time, so it's moving relatively slowly. But because it doesn't require user interaction, it's not taking up any of my time. Automated testing is only catching the low-hanging fruit, but I've already discovered several hundred vulnerable applications.
Cleaning up the Mess
Failure of Android applications to validate SSL certificate chains is nothing new. Approximately two years ago, a paper called Why Eve and Mallory Love Android: An Analysis of Android SSL (In)Security was published. However, it appears that the paper's authors did not notify the authors of the vulnerable applications. If you don't know that the paper exists, you don't know that this problem exists.
Recently, FireEye published a blog post about SSL vulnerabilities in Android applications. It indicates "We notified the developers, who acknowledged the reported vulnerabilities and addressed them in subsequent versions of their applications." This statement makes it seem that the problems have already been fixed. Aside from a couple of case studies, it's not clear which applications were affected, which authors have been notified, and which application versions contain fixes. I applaud FireEye for their efforts, but I feel that we can take things a bit further.
Here is where CERT's work is providing value:
- We are performing a wide-scale automated dynamic test. Static analysis goes only so far. Just because an application contains code that looks like it may fail to check a certificate doesn't necessarily mean that the application ever exercises that code. The false-positive rate for such analysis is likely high.
- We are notifying the author of every application that fails the dynamic testing described above. In our reports to these authors we include mallodroid static analysis output, a list of the URIs visited by the application while under test, and the mitmproxy log file produced by CERT Tapioca. We also include references to Google and CERT guidance for how to handle certificates in Android applications.
- We will list affected Android applications in the CERT vulnerability note for this issue, VU#582497.
Listing affected applications without necessarily giving the vendors the 45 days called for by our usual disclosure guideline may seem a bit odd. But if you consider the characteristics and attack vector of this class of vulnerability, it should make more sense:
- If an attacker is interested in performing MITM attacks, they're already doing it. That cat is already out of the bag. They've likely set up a rogue access point and are already capturing all of the traffic that passes through it. Further supporting this suspicion is the fact that the FTC has already filed charges against the authors of two mobile applications that fail to validate SSL certificates. Knowing which specific applications are affected does not give any advantage to an attacker.
- If end users have vulnerable applications on their phones, knowing which applications are affected does give an advantage to the defenders. They can choose to uninstall vulnerable applications until fixes are available, or if they must, they can choose to use said applications only on trusted networks.
Deciding which details to release and when to release them is a concern with any vulnerability that the CERT Division handles. However, in this case, it's clear that the disclosure of affected applications benefits the defenders and not the attackers.
We plan to update VU#582497 and any resources that the document references as our testing and communication with application authors continue.