Hi folks, Allen Householder here. In my previous post, I introduced our recent work in surveying vulnerability discovery for emerging networked systems (ENS). In this post, I continue with our findings from this effort and look at the differences between ENS and traditional computing in the context of vulnerability discovery, analysis, and disclosure.
What's Different About Emerging Networked Systems?
In our review of recent security research that focused on vulnerability discovery in ENS, we identified a number of key differences between ENS and traditional computing and mobile platforms, including:
Limited instrumentation: The vulnerability analyst's ability to instrument the system in order to test its security can be limited. Many of the systems comprise embedded devices that are effectively black boxes at the network level. On the surface, this limitation might appear to be beneficial to the security of the system; if it's hard to create an analysis environment, it might be difficult to find vulnerabilities in the system. However, the problem is that while a determined and/or well-resourced attacker can overcome such obstacles and get on with finding vulnerabilities, a lack of instrumentation can make it difficult even for the vendor to adequately test the security of its own products.
Less familiar system architectures: ENS architectures are often different from those most often encountered by the typical vulnerability analyst. In short, ARM is neither x86 nor IA64, and some embedded systems are none of the above. Although this limitation is trivially obvious at a technical level, many vulnerability researchers and analysts will have to overcome this skill gap if they are to remain effective at finding and remediating vulnerabilities in ENS.
Limited user interfaces: User interfaces on the devices themselves are extremely limited -- a few LEDs, maybe some switches or buttons, and that's about it. Thus, significant effort can be required just to provide input or get the feedback needed to perform security analysis work.
Proprietary protocols: The network protocols used above the transport layer are often proprietary. Although the spread of HTTP/HTTPS continues in this space as it has in the traditional and mobile spaces, there are many extant protocols that are poorly documented or wholly undocumented. The effort required to identify and understand higher level protocols, given sometimes scant information about them, can be daunting. Techniques and tools for network protocol inference and reverse engineering can be effective tactics. However, if vendors were more open with their protocol specifications much of the need for that effort would be obviated.
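One common protocol-inference tactic is to align captured messages byte by byte and separate the positions that never change (likely framing, magic numbers, or version fields) from those that vary (likely payload). The sketch below illustrates the idea; the message bytes are invented for illustration and do not correspond to any real device protocol.

```python
# Hypothetical captures of an undocumented device protocol
# (byte values invented for illustration).
msgs = [
    bytes([0xA5, 0x01, 0x10, 0x33, 0x07]),
    bytes([0xA5, 0x01, 0x22, 0x91, 0x07]),
    bytes([0xA5, 0x01, 0x45, 0x0C, 0x07]),
]

def infer_fixed_offsets(messages):
    """Return byte offsets whose value is identical across all captures --
    a first guess at fixed protocol fields (magic bytes, version, framing)."""
    length = min(len(m) for m in messages)
    return [i for i in range(length)
            if len({m[i] for m in messages}) == 1]

print(infer_fixed_offsets(msgs))  # offsets 0, 1, and 4 look like fixed fields
```

Real protocol reverse engineering layers many such heuristics (field boundary detection, sequence-number tracking, entropy analysis), but even this simple alignment quickly narrows down where to focus fuzzing or manual analysis.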
Lack of updatability: Unlike most other devices (laptops, PCs, smartphones, tablets), many ENS are either non-updateable or require significant effort to update. Systems that cannot be updated become less secure over time as new vulnerabilities are found and novel attack techniques emerge. Because vulnerabilities are often discovered long after a system has been delivered, systems that lack facilities for secure updates once deployed present a long-term risk to the networks in which they reside. This design flaw is perhaps the most significant one already found in many ENS, and if not corrected across the board, could lead to years if not decades of increasingly insecure devices acting as reservoirs of infection or as platforms for lateral movement by attackers of all types.
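A secure update facility need not be elaborate: at minimum, the device should verify the authenticity of an image before applying it. The sketch below uses an HMAC purely to keep the example self-contained; the key and firmware bytes are invented, and a real deployment would verify an asymmetric signature (e.g., Ed25519) so that no signing secret ever resides on the device.

```python
import hmac
import hashlib

# Invented shared key and firmware image, for illustration only.
# Real devices should verify an asymmetric signature instead, so that
# compromising one device does not yield the signing key.
KEY = b"example-update-key"
firmware = b"\x7fELF...new firmware image..."

def sign(image: bytes) -> bytes:
    """Compute the authentication tag the update server would ship."""
    return hmac.new(KEY, image, hashlib.sha256).digest()

def verify_and_apply(image: bytes, tag: bytes) -> bool:
    """Apply the update only if the tag checks out (constant-time compare)."""
    if not hmac.compare_digest(sign(image), tag):
        return False  # reject tampered or corrupted image
    # ... write image to flash here ...
    return True

tag = sign(firmware)
assert verify_and_apply(firmware, tag)
assert not verify_and_apply(firmware + b"\x00", tag)  # tampered image rejected
```

The hard parts in practice are not the cryptography but the logistics: key management, rollback protection, and recovering a device whose update was interrupted mid-flash.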
Lack of security tools: Security tools used for prevention, detection, analysis, and remediation in traditional computing systems have evolved and matured significantly over a period of decades. And while in many cases similar concepts apply to ENS, the practitioner will observe a distinct gap in available tools when attempting to secure or even observe such a system in detail. Packet capture and decoding, traffic analysis, reverse engineering and binary analysis, and the like are all transferable as concepts if not directly as tools, yet the tooling is far weaker when you get outside of the realm of Windows and Unix-based (including OSX) operating systems running on x86/IA64 architectures.
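The concepts do transfer even where the tools do not: when no dissector exists for a device's traffic, an analyst can fall back on manual decoding against the protocol specification. As a minimal sketch, the fixed 20-byte IPv4 header defined in RFC 791 can be decoded with nothing but the standard library; the packet bytes below are synthetic.

```python
import struct

def decode_ipv4_header(buf: bytes) -> dict:
    """Decode the fixed 20-byte IPv4 header per RFC 791."""
    ver_ihl, tos, total_len, ident, flags_frag, ttl, proto, cksum, src, dst = \
        struct.unpack("!BBHHHBBH4s4s", buf[:20])
    return {
        "version": ver_ihl >> 4,
        "ihl": ver_ihl & 0x0F,          # header length in 32-bit words
        "total_length": total_len,
        "ttl": ttl,
        "protocol": proto,              # 6 = TCP, 17 = UDP
        "src": ".".join(map(str, src)),
        "dst": ".".join(map(str, dst)),
    }

# Synthetic header: IPv4, TTL 64, UDP, 192.168.1.10 -> 192.168.1.1
pkt = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 28, 1, 0, 64, 17, 0,
                  bytes([192, 168, 1, 10]), bytes([192, 168, 1, 1]))
print(decode_ipv4_header(pkt))
```

Hand-rolled decoders like this are exactly the effort that mature tooling normally saves; for proprietary ENS protocols, they are often the only option.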
Vulnerability scanning tool and database bias: Vulnerability scanning tools largely look for known vulnerabilities. They, in turn, depend on vulnerability databases for their source material. However, databases of known vulnerabilities -- CVE, the National Vulnerability Database (NVD), the Open Source Vulnerability Database (OSVDB), Japan Vulnerability Notes (JVN) and the CERT Vulnerability Notes Database to name a few -- are heavily biased by their history of tracking vulnerabilities in traditional computing systems (e.g., Windows, Linux, OSX, Unix and variants). Recent conversations with these and other vulnerability database operators indicate that the need to expand coverage into ENS is either a topic of active investigation and discussion or a work already in progress. However, we can expect the existing gap to remain for some time as these capabilities ramp up.
Inadequate threat models: Overly optimistic threat models are de rigueur among ENS. Many ENS are developed with what can only be described as naive threat models that drastically underestimate the hostility of the environments into which the system will be deployed. (Undocumented threat models are still threat models, even if they only exist in the assumptions made by the developer.) Even in cases where the developer of the main system is security-knowledgeable, he or she often is composing systems out of components or libraries that may not have been developed with the same degree of security consideration. This weakness is especially pernicious in power- or bandwidth-constrained systems where the goal of providing lightweight implementations supersedes the need to provide a minimum level of security. We believe this is a false economy that only defers a much larger cost when the system has been deployed, vulnerabilities are discovered, and remediation is difficult.
Third-party library vulnerabilities: We observe pervasive use of third-party libraries with neither recognition of nor adequate planning for how to fix or mitigate the vulnerabilities they inevitably contain. When a developer embeds a library into their system, that system can inherit vulnerabilities subsequently found in the incorporated code. Although this is true in the traditional computing world, it is even more concerning in contexts where many libraries wind up as binary blobs and are simply included in the firmware as such. Lacking the ability to analyze this black box code either in manual source code reviews or using most code analysis tools, vendors may find it difficult to examine the code's security.
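Even when source is unavailable, one pragmatic first step is to scan a firmware image for the version strings that libraries commonly embed and compare the hits against a vulnerability feed. The sketch below illustrates the idea; the blob contents and the "known vulnerable" pairs are invented for illustration, and a real tool would read the image from a file and pull advisories from a vulnerability database.

```python
import re

# Invented firmware blob; a real image would be read from a file.
blob = b"\x00\x01...zlib 1.2.3...\xff\x00OpenSSL 1.0.1e 11 Feb 2013\x00..."

# Invented (library, vulnerable-version) pairs; a real tool would
# populate this from a vulnerability database feed.
KNOWN_BAD = {("OpenSSL", "1.0.1e"), ("zlib", "1.2.3")}

def find_library_versions(image: bytes):
    """Extract 'Name x.y.z'-style version strings from the raw image."""
    hits = re.findall(rb"([A-Za-z][\w.+-]{2,})[ /](\d+\.\d+(?:\.\d+\w*)?)",
                      image)
    return {(name.decode(), ver.decode()) for name, ver in hits}

found = find_library_versions(blob)
flagged = found & KNOWN_BAD
print(sorted(flagged))
```

String scanning is crude -- it misses statically linked code with stripped strings and cannot confirm that a vulnerable code path is reachable -- but it cheaply surfaces candidates for deeper binary analysis.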
Unprepared vendors: Often we find that ENS vendors are not prepared to receive and handle vulnerability reports from outside parties, such as the security researcher community. Many also lack the ability to perform their own vulnerability discovery within their development lifecycle. These difficulties tend to arise from one of two causes: (a) the vendor is comparatively small or new and has yet to form a product security incident response capability or (b) the vendor has deep engineering experience in their domain but has not fully incorporated the effect of network-enabling their devices into their engineering quality assurance (this is related to the inadequate threat model point above). Vendors in the latter group typically have very strong skills in safety engineering or regulatory compliance, yet their internet security capability is lacking. Our experience is that many ENS vendors are surprised by the vulnerability disclosure process. We frequently find ourselves having conversations that rehash two decades of vulnerability coordination and disclosure debates with vendors who appear to experience something similar to the Kübler-Ross stages of grief (denial, anger, bargaining, depression, and acceptance) during the process.
Unresolved vulnerability disclosure debates: If we have learned anything in 26 years of vulnerability coordination at the CERT Coordination Center, it is that there is no single right answer to vulnerability disclosure discussions. However, in the traditional computing arena, most vendors and researchers have settled into a reasonable rhythm of allowing the vendor some time to fix vulnerabilities prior to publishing a vulnerability report more widely. Software as a service (SaaS) and software distributed through app stores can often fix and deploy patches to most customers quickly. On the opposite end of the spectrum, we find many ENS and embedded device vendors for whom fixing a vulnerability might require a firmware upgrade or even physical replacement of affected devices. This diversity of requirements forces vendors and researchers alike to reconsider their expectations with respect to the timing and level of detail provided in vulnerability reports based on the systems affected. Coupled with the proliferation of ENS vendors who are relative novices at internet-enabled devices and just becoming exposed to the world of vulnerability research and disclosure, the shift toward ENS can be expected to reinvigorate numerous disclosure debates as the various stakeholders work out their newfound positions.
Although vulnerability discovery for ENS has much in common with security research in traditional computing and mobile environments, there are a number of important distinctions outlined in this post. The threats posed by these systems given their current proliferation trajectory are concerning.
Even as they become more common, it can be difficult to identify the threats posed to a network by ENS either alone or in aggregate. In the simplest sense one might think of it as a "hidden Linux" problem: How many devices can you find in your immediate vicinity containing some form of Linux? Do you know what their patch status is? Do you know how you'd deal with a critical vulnerability affecting them? Furthermore, while the hidden Linux problem isn't going away any time soon, we believe the third-party library problem will long outlast it. How many vulnerable image parsers with a network-accessible attack vector share your home with you? How would you patch them?
[A]n advanced persistent threat, one that is difficult to discover, difficult to remove, and difficult to attribute, is easier in a low-end monoculture, easier in an environment where much of the computing is done by devices that are deaf and mute once installed or where those devices operate at the very bottom of the software stack, where those devices bring no relevant societal risk by their onesies and twosies, but do bring relevant societal risk at today's extant scales much less the scales coming soon.
We anticipate that many of the current gaps in security analysis tools and knowledge will begin to close over the next few years. However, it may be some time before we can fully understand how the systems already available today, let alone tomorrow, will impact the security of the networks onto which they are placed. The scope of the problem shows no sign of contracting any time soon.
We already live in a world where mobile devices outnumber traditional computers, and ENS stand to dwarf mobile computing in terms of the sheer number of devices within the next few years. As vulnerability discovery tools and techniques evolve into this space, so must our tools and processes for coordination and disclosure. Assumptions built into many vulnerability handling processes about disclosure timing, coordination channels, development cycles, scanning, patching, etc., will need to be reevaluated in the light of hardware-based systems that are likely to dominate the future internet.
The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.