
How Do You Trust AI Cybersecurity Devices?

Grant Deffenbaugh and Shing-hon Lau

The artificial intelligence (AI) and machine learning (ML) cybersecurity market, estimated at $8.8 billion in 2019, is expected to grow to more than $38 billion by 2026. Vendors assert that AI devices, which augment traditional rules-based cybersecurity defenses with AI or ML techniques, better protect an organization’s network from a wide array of threats. They even claim to defend against advanced persistent threats, such as the SolarWinds attack that exposed data from major companies and government agencies.

But AI cybersecurity devices are relatively new and untested. Given the dynamic, sometimes opaque nature of AI, how can we know such devices are working? This blog post describes how we seek to test AI cybersecurity devices against realistic attacks in a controlled network environment.


The New Kid

AI cybersecurity devices often promise to guard against many common and advanced threats, such as malware, ransomware, data exfiltration, and insider threats. Many of these products also claim not only to detect malicious behavior automatically, but also to respond automatically to detected threats. Offerings include systems that operate on network switches, systems that operate on domain controllers, and even systems that combine network and endpoint information.

The rise in popularity of these devices has two major causes. First, there is a significant deficit of trained cybersecurity personnel in the United States and across the globe. Organizations bereft of the necessary staff to handle the plethora of cyber threats are looking to AI or ML cybersecurity devices as force multipliers that can permit a small team of qualified staff to defend a large network. AI or ML-enabled systems can perform large volumes of tedious, repetitive labor at speeds not possible with a human workforce, freeing up cybersecurity staff to handle more complicated and consequential tasks.

Second, the speed of cyber attacks has increased in recent years. Automated attacks unfold at machine speed, rendering human defenders ineffective on their own. Organizations hope that the automatic responses of AI cybersecurity devices will be swift enough to defend against these ever-faster attacks.

The natural question is, “How effective are AI and ML devices?” The size and complexity of many modern networks make this a hard question to answer even for traditional cybersecurity defenses that employ a static set of rules. The inclusion of AI and ML techniques, whose behavior changes as they learn, makes it harder still to assess whether a device is behaving correctly over time.

The first step to determining the efficacy of AI or ML cybersecurity devices is to understand how they detect malicious behavior and how attackers might exploit the way they learn.

How AI and ML Devices Work

AI or ML network behavior devices take one of two primary approaches to identifying malicious behavior.

Pattern Identification

The device is given pre-identified patterns of malicious behavior to detect and match against the system’s traffic, and it tunes the threshold levels of its benign and malicious traffic identification rules. Any behavior that exceeds those thresholds generates an alert. For example, the device might alert if the volume of disk traffic exceeds a certain threshold in a 24-hour period. These devices act similarly to antivirus systems: they are told what to look for rather than learning it from the systems they protect, though some also incorporate machine learning.
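As a toy illustration of such a rule, the following sketch flags a host whose disk traffic exceeds a fixed threshold within a rolling 24-hour window. The `ThresholdRule` class, the 500 GB threshold, and the window size are hypothetical choices for illustration, not details of any particular product.

```python
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(hours=24)
DISK_BYTES_THRESHOLD = 500 * 1024**3  # 500 GB in a rolling 24-hour window (illustrative tuning)

class ThresholdRule:
    """Alert when a host's disk traffic exceeds a fixed threshold in a rolling window."""
    def __init__(self):
        self.events = defaultdict(list)  # host -> [(timestamp, num_bytes), ...]

    def observe(self, host, timestamp, num_bytes):
        # Keep only observations inside the rolling window, then record the new one.
        cutoff = timestamp - WINDOW
        self.events[host] = [(t, b) for t, b in self.events[host] if t >= cutoff]
        self.events[host].append((timestamp, num_bytes))
        total = sum(b for _, b in self.events[host])
        if total > DISK_BYTES_THRESHOLD:
            return f"ALERT: {host} moved {total} bytes of disk traffic in 24 hours"
        return None

rule = ThresholdRule()
print(rule.observe("host-17", datetime.now(), 600 * 1024**3))  # exceeds the threshold, so an alert fires
```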

Anomaly Detection

These devices continually learn the system’s traffic and attempt to identify behavior that deviates from the patterns observed over a predetermined past time period. Such anomaly detection systems can easily detect, for example, the sudden appearance of a new IP address or a user logging in after-hours for the first time. For the most part, the device learns unsupervised and does not require labeled data, reducing the amount of labor for the operator.
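A minimal sketch of this idea, assuming hypothetical log records with only a source IP, user, and hour of day (real devices model far richer features), might look like the following:

```python
class BaselineAnomalyDetector:
    """Learn which IPs and login hours are 'normal', then flag deviations."""
    def __init__(self):
        self.known_ips = set()
        self.login_hours = {}  # user -> set of hours seen during training

    def train(self, records):
        # Unsupervised: no labels; everything seen during training is treated as normal.
        for rec in records:
            self.known_ips.add(rec["src_ip"])
            self.login_hours.setdefault(rec["user"], set()).add(rec["hour"])

    def score(self, rec):
        alerts = []
        if rec["src_ip"] not in self.known_ips:
            alerts.append("new IP address")
        if rec["hour"] not in self.login_hours.get(rec["user"], set()):
            alerts.append("login at unusual hour")
        return alerts

detector = BaselineAnomalyDetector()
detector.train([{"src_ip": "10.0.0.5", "user": "alice", "hour": 9}])
print(detector.score({"src_ip": "203.0.113.7", "user": "alice", "hour": 2}))
# -> ['new IP address', 'login at unusual hour']
```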

The downside to these devices is that if a malicious actor has been active the entire time the system has been learning, then the device will classify the actor’s traffic as normal.

A Common Vulnerability

Both pattern identification and anomaly detection are vulnerable to data poisoning: adversarial injection of traffic into the learning process. On its own, an AI or ML device cannot detect data poisoning, which affects the device’s ability to accurately set threshold levels and determine normal behavior.

A clever adversary could use data poisoning to attempt to move the decision boundary of the ML techniques inside the AI device. This could allow the adversary to evade detection by causing the device to identify malicious behavior as normal. Moving the decision boundary in the other direction could cause the device to classify normal behavior as malicious, triggering a denial of service.
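To make the idea concrete, here is a toy sketch, assuming a hypothetical detector that alerts on traffic volume more than three standard deviations above the mean seen during training; the numbers are illustrative only.

```python
import statistics

def learned_threshold(samples):
    # Flag anything more than 3 standard deviations above the training mean.
    return statistics.mean(samples) + 3 * statistics.pstdev(samples)

clean_training = [100, 110, 95, 105, 102, 98]   # benign traffic volume (MB/hour)
poison = [400, 450, 500, 550, 600]              # adversary slowly ramps up injected volume

print(learned_threshold(clean_training))            # ~116: a 700 MB/hour exfiltration would alert
print(learned_threshold(clean_training + poison))   # ~895: the same exfiltration now slips through
```

Because the poisoned samples inflate both the learned mean and its spread, an attack that would have crossed the clean threshold falls comfortably below the poisoned one.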

An adversary could also attempt to add backdoors to the device by injecting specific, benign noise patterns into the background traffic on the network, then including that noise pattern in subsequent malicious activity. The ML techniques may also have inherent blind spots that an adversary can identify and exploit.
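The backdoor idea can be sketched in the same toy style, assuming a hypothetical detector that simply memorizes which (port, packet size) flow shapes are normal:

```python
# Flow shapes the detector would have seen anyway during training.
normal_flows = {(443, 1500), (80, 1200), (53, 512)}

# The adversary seeds an unusual but harmless flow shape into the background
# traffic while the device is learning, then reuses it as a "key" later.
trigger = (443, 1337)
training_traffic = normal_flows | {trigger}

def is_anomalous(flow, learned):
    return flow not in learned

malicious_flow = trigger  # exfiltration later shaped to match the planted trigger
print(is_anomalous(malicious_flow, training_traffic))  # False: the backdoor goes unnoticed
```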

Testing Efficacy

How can we determine the effectiveness of AI or ML cybersecurity devices? Our approach is to directly test the efficacy of the device against actual cyber attacks in a controlled network environment. The controlled environment ensures that we do not risk any actual losses. It also permits a great deal of control over every element of the background traffic, to better understand the conditions under which the device can detect the attack.

It is well known that ML systems can fail by learning, doing, or revealing the wrong thing. While executing our cyber attacks, we can attempt to find blind spots in the AI or ML device, adjust its decision boundary to evade detection, or even poison its training data with noise patterns so that it fails to detect our malicious network traffic.

We seek to address multiple issues, including the following.

  • How quickly can an adversary move a decision boundary? The speed of this movement will dictate the rate at which the AI or ML device must be retested to verify that it is still able to complete its mission objective.
  • Is it possible to create backdoor keys even when remediations, such as adding noise to the training data or filtering the training data to specific data fields, are in place? With these countermeasures applied, can the device still detect attempts to create backdoors?
  • How thoroughly does one need to test all the possible attack vectors of a system to assure that (1) the system is working properly and (2) there are no blind spots that can be successfully exploited?

Our Artificial Intelligence Defense Evaluation (AIDE) project, funded by the Department of Homeland Security’s Cybersecurity and Infrastructure Security Agency, is developing a methodology for testing AI defenses. In early work, we developed a virtual environment representing a typical corporate network and used the SEI-developed GHOSTS framework to simulate user behaviors and generate realistic network traffic. We tested two AI network behavior analysis products and were able to hide malicious activity by using obfuscation and data poisoning techniques.

Our ultimate objective is to develop a broad suite of tests, consisting of a spectrum of cyber attacks, network environments, and adversarial techniques. Users of the test suite could determine the conditions under which a given device is successful and where it may fail. The test results could help users decide whether the devices are appropriate for protecting their networks, inform discussions of the shortcomings of a given device, and help determine areas where the AI and ML techniques can be improved.

To accomplish this goal, we are creating a test lab where we can evaluate these devices using network traffic that is both realistic and repeatable, generated by simulating the humans behind the traffic rather than the traffic itself. In this environment, we will play both the attackers (the red team) and the defenders (the blue team) and measure the effects on the learned model of the AI or ML devices.
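As a hypothetical illustration of that repeatability (this is not the GHOSTS API), seeding the random choices of a simulated user yields traffic that is realistic in shape yet identical from one test run to the next:

```python
import random

class SimulatedUser:
    """A persona-driven traffic generator: the user, not the packets, is simulated."""
    def __init__(self, name, seed):
        self.name = name
        self.rng = random.Random(seed)  # fixed seed -> identical behavior on every run
        self.actions = ["browse_web", "send_email", "open_file_share", "idle"]

    def next_action(self):
        return self.rng.choice(self.actions)

alice = SimulatedUser("alice", seed=42)
print([alice.next_action() for _ in range(5)])  # same action sequence on every test run
```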

If you are interested in this work or would like to suggest specific network configurations to simulate and evaluate, we are open to collaboration. Write us at info@sei.cmu.edu.
