What Ant Colonies Can Teach Us About Securing the Internet
In cyber systems, the identities of devices can easily be spoofed and are frequent targets of cyber attacks. Once an identity is fabricated, stolen, or spoofed, it may be used as a point of entry into systems, forming the basis of a Sybil attack. To address these and other problems associated with identity deception, researchers at the Carnegie Mellon University Software Engineering Institute, New York University's Tandon School of Engineering and Courant Institute of Mathematical Sciences, and the University of Göttingen (Germany) collaborated to develop a deception-resistant identity management system inspired by biological systems, namely ant colonies. This blog post highlights our research contributions.
Inspired by Biology
Within an ant colony, ants must be able to recognize other colony members, as well as their functional roles, to coordinate the many tasks they jointly perform. These joint tasks require both coordination and cooperation, so it is important for ants to detect invaders and non-cooperative ants. Ants solve coordination problems with a rich chemical signaling language, best known from the way they search for, establish, and reinforce pheromone trails to food sources. Less well known, but important to our project, is the chemical signaling that ants employ to verify cooperative colony members: each ant wears cuticular hydrocarbons (CHCs) that act as a contact pheromone. This biophysical property, which enables members of a superorganism to interact effectively through chemical signaling, has inspired us to explore new ways to secure cyber-physical systems, particularly wireless ad hoc networks (WANETs), against identity deception attacks. To apply this natural concept to WANETs, we designed a technology called M-Coin, a hard-to-counterfeit crypto-coin, to mitigate the damaging effects of deception in cyber-physical systems.
Technology Setting
WANETs are the primary communication system for mobile sensor systems, robots, and vehicles. They allow devices, for example two smartphones, to connect directly with each other without the centralized access points or predefined routes common to many networks. WANETs are a type of ad hoc communication system and are increasingly being used in ad hoc coalition networks, which are designed to share information among a set of organizations that, out of necessity, must establish trust relations quickly or under fluid conditions (e.g., in emergency response, security for major events, or humanitarian relief efforts). While the benefits of sharing information are obvious, these communication systems are vulnerable before a steady state of reputation and trust is reached, and identity management and security are their major challenges. For example, emergency vehicles could use WANETs to obtain information from responders, such as photos and geolocations, to prepare and prioritize response in the wake of a natural disaster. First responders, soldiers, or emergency personnel can also use WANETs when deployed to environments where critical infrastructure is fragile or non-existent. In such situations, these systems must assure that nodes in the network behave in a trustworthy manner.
Furthermore, because identity deception attacks are cheap and easy to deploy, it is possible to overwhelm WANETs with Sybil attacks, which use low-overhead computation to steal, counterfeit, or even fabricate identities. By undermining identity, Sybil nodes enable untrustworthy actions, such as introducing malicious logic and data into the system, thereby degrading system quality.
To protect WANETs against Sybil attacks, our team of researchers focused on creating an identity management system capable of dynamically checking the trustworthiness of devices within the system. I, along with my CERT colleague Jose Andre Morales, partnered with Dr. Bud Mishra of New York University's Courant Institute of Mathematical Sciences and Parisa Memarmoshrefi and Ansgar Kellner of the University of Göttingen. Together, our team integrates extensive knowledge of game theory, bio-inspired security, WANETs, model checking, and machine learning in one common framework.
Proper WANET functioning and security, as with ant colonies, relies on identity management: recognizing an organism as a member of a superorganism (the colony), and verifying that organism's role, is essential for cooperation. This kind of identity signaling is critical to the strategic interactions among partially informed players, as in a classical non-cooperative game in which one usually assumes rationality.
Our collaborative research yielded a solution to the problem of identity deception in WANETs, derived from biologically inspired algorithms that focus on the biophysical constraints of the ants' mechanism for determining colony membership. These algorithms appear well suited to the problem and have already been applied successfully in simulated networks, where a set of mechanisms has been shown to provide a strong deterrent against identity deception.
Foundations in Game Theory
As detailed in my previous blog post on related research, Deterrence for Malware, many of the ideas in our approach can be traced back to John von Neumann, a Princeton University mathematician. Von Neumann, along with his colleague Oskar Morgenstern, created the basic foundations of modern game theory, which studies how rational agents make strategic choices as they interact. An example of one such strategic choice is the concept of mutual assured destruction (MAD), the doctrine describing a war scenario in which two sides would annihilate each other, thereby removing the incentive for either side to start a war. Once the two sides have come to such a mutually self-enforcing strategy, neither party will deviate.
Such a state of affairs is described in game theory by the concept of Nash equilibrium. Our approach aims to cast the cybersecurity problem in a game-theoretic setting so that all players will choose to be honest, will check that they are honest (and not an unwitting host for malicious logic), can prove to others that they are honest, and will accept the proofs that others are honest. As applied mathematicians, we devise ways to re-engineer these cyber systems so that all agents have common knowledge that the greatest reward, and the least risk, comes from interacting honestly rather than deceptively.
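To make the equilibrium idea concrete, here is a minimal sketch in Python with illustrative payoff numbers of our own choosing (not the project's actual model), showing how one checks that mutual honesty is a self-enforcing strategy profile:

```python
# A minimal sketch (not the authors' actual model): a 2x2 symmetric game with
# illustrative payoffs chosen so that (Honest, Honest) is a Nash equilibrium.
# Rows/columns: 0 = Honest, 1 = Deceive. PAYOFF[i][j] is the row player's payoff.
PAYOFF = [
    [3, 1],   # I am honest:  (other honest, other deceives)
    [2, 0],   # I deceive:    (other honest, other deceives)
]

def best_response(opponent_strategy: int) -> int:
    """Return the strategy (0 or 1) that maximizes my payoff against the opponent."""
    return max((0, 1), key=lambda s: PAYOFF[s][opponent_strategy])

def is_nash_equilibrium(profile: tuple[int, int]) -> bool:
    """A profile is a Nash equilibrium if each player is best-responding to the other."""
    a, b = profile
    return best_response(b) == a and best_response(a) == b

print(is_nash_equilibrium((0, 0)))  # True: mutual honesty is self-enforcing
print(is_nash_equilibrium((1, 1)))  # False under these illustrative payoffs
```

Under these hypothetical payoffs, no player can gain by unilaterally deviating from honesty, which is the property our protocol design aims to engineer into the system.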
Our Approach: CHC Profiles and Diffusion
This project is not the first time the self-organizing nature of ant colonies has been applied to network-related problems. Ant colony properties have also been applied to data routing and distribution, most famously in the ant colony optimization (ACO) algorithm, a general explore/exploit optimization method. Parisa Memarmoshrefi and Ansgar Kellner have done extensive research in this field. For a deeper dive into their research, I recommend the 2014 paper Social Insect-based Sybil Attack Detection in Mobile Ad-hoc Networks.
To set the stage, we first describe the game played by the ants using the CHC profile. One can think of the CHC profile as a fingerprint for the ant. Using this pheromone, a queen ant tags members of the colony to mark them as belonging to it. As Memarmoshrefi and Kellner detailed in their paper, members of the same colony (or superorganism) have highly similar CHC profiles established by the queen; diet and nesting materials also play a part in this tagging. Ants that lack the proper chemical ID may be met with aggression and are deterred from entering the colony.
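As a rough illustration of this membership check (our own simplification, not the colony's actual chemistry or the paper's model), one can treat a CHC profile as a vector of hydrocarbon concentrations and admit a visitor only when its profile is close enough to the colony template:

```python
# A hedged illustration: treat a CHC profile as a vector of hydrocarbon
# concentrations and admit a visitor only if its profile is similar enough to
# the colony template. The threshold and profiles below are made up.
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def admit(colony_profile: list[float], visitor_profile: list[float],
          threshold: float = 0.9) -> bool:
    """Admit the visitor if its CHC 'fingerprint' matches the colony closely enough."""
    return cosine_similarity(colony_profile, visitor_profile) >= threshold

colony = [0.6, 0.3, 0.1, 0.0]       # template established by the queen (illustrative)
nestmate = [0.58, 0.32, 0.10, 0.0]  # small drift from diet and nest materials
intruder = [0.1, 0.1, 0.2, 0.6]

print(admit(colony, nestmate))  # True  -> tolerated
print(admit(colony, intruder))  # False -> met with aggression
```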
Likewise, our engineering approach outlines a new protocol built around costly signaling and the risk it carries: if a member invests significant resources to obtain an identity, that member is less likely to use that identity deceptively, particularly when the protocol places that investment at risk. In ant colonies, ants obtain a chemical ID from a queen ant and from other diffusion processes. This identity diffusion process is very interesting to us because, unlike relatively static identity management systems such as public key infrastructure (PKI), the identity of a colony is dynamic and depends on trust and on how that trust has been reinforced by previous interactions.
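A back-of-the-envelope sketch, with made-up numbers rather than the protocol's actual parameters, shows why a costly identity that can be revoked discourages deceptive use:

```python
# Illustrative only: when an identity is costly to acquire and the protocol can
# revoke it upon detected deception, the expected value of deceiving falls
# below that of honest cooperation.
IDENTITY_COST = 10.0        # resources invested to obtain the identity
HONEST_PAYOFF = 3.0         # per-interaction gain from cooperating
DECEPTION_PAYOFF = 5.0      # per-interaction gain from a successful deception
DETECTION_PROB = 0.4        # chance the protocol detects deception and revokes

def expected_value(deceive: bool, interactions: int = 10) -> float:
    if not deceive:
        return interactions * HONEST_PAYOFF - IDENTITY_COST
    # If caught, the identity (and its sunk cost) is lost and future payoffs stop;
    # approximate with the expected number of interactions before detection.
    expected_rounds = min(interactions, 1.0 / DETECTION_PROB)
    return expected_rounds * DECEPTION_PAYOFF - IDENTITY_COST

print(expected_value(deceive=False))  # 20.0
print(expected_value(deceive=True))   # 2.5 -> deception doesn't pay
```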
Our protocol supports this critical feedback link and, moreover, allows the agents to affect the diffusion locally. In a sense, the local risk-reward tradeoffs made by individual agents create a dynamic, global view of the network that balances the risks and rewards of sharing information. It works even when identities are only partially verifiable.
Over the last century, the computational sciences have advanced numerical methods to elucidate the role of diffusion in complex domains. Ants can be thought of as moving surfaces, with chemical diffusion amplified every time they come into contact with one another. The protocol we modeled constrains communication in a way that mimics such diffusion for the transfer of keying materials along with the messages nodes pass. Specifically, we developed an agent-based diffusion model in which the flow of keying material (the conductance, in diffusion terms) depends on the outcomes of authentication signaling games. When this diffusion process underlies peer-to-peer (P2P) message passing in a network, the result is a vastly complex dynamical system for flows of keying material. This model and its dynamics, we hope, will allow us to study which complex identity-signaling strategies may emerge and how. While our game captures the constraints of the ant colony system, it does introduce a slight burden of information synchronization concerning identities and cooperative strategies. To address this burden, we devised M-Coin, a cryptocurrency similar to Bitcoin, which can provide a distributed database of interactions that forms a reputation while increasing the learning rates for cooperative strategies.
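The following toy formulation (our own simplification, not the project's simulation code) captures the flavor of that coupling: keying material flows along peer-to-peer edges with a conductance that is reinforced by successful authentication signaling games and dampened by failures:

```python
# A simplified sketch of the diffusion dynamic described above. Node names,
# rates, and the reinforcement rule are all illustrative assumptions.
from collections import defaultdict

keying = defaultdict(float)              # node -> amount of keying material held
conductance = defaultdict(lambda: 0.1)   # (sender, receiver) -> flow rate

def exchange(sender: str, receiver: str, auth_succeeded: bool,
             rate_step: float = 0.05) -> None:
    """One P2P message: keying material diffuses along the edge, and the edge's
    conductance adapts to the outcome of the authentication signaling game."""
    edge = (sender, receiver)
    if auth_succeeded:
        flow = conductance[edge] * keying[sender]
        keying[sender] -= flow
        keying[receiver] += flow
        conductance[edge] = min(1.0, conductance[edge] + rate_step)   # reinforce
    else:
        conductance[edge] = max(0.0, conductance[edge] - rate_step)   # dampen

keying["queen"] = 100.0
exchange("queen", "node_a", auth_succeeded=True)
exchange("node_a", "node_b", auth_succeeded=True)
exchange("node_a", "mallory", auth_succeeded=False)  # failed signaling game
print(dict(keying), dict(conductance))
```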
By modeling such an agent-based game system, studied empirically through a series of simulation experiments, we seek to create an incentive package, in the form of a shared database of cooperative strategies, that shapes system behavior. In these experiments, nodes optimized their strategies, including the possibility of developing deceptive uses of identities, in a virtual, agent-based model system where our protocol was tested. To explore the dynamics and effects of these signaling mechanisms, we considered a variety of similarity functions for colony membership and how they affected the utility optimization of agents. We considered similarity metrics based on inclusion (carrying a given keying material), on loyalty (lacking the keying material of any other colony), and on aversion (avoiding a given keying substance). In the latter case, a colony would pre-agree upon a convention that imputes a negative reputation to any device carrying that substance, marking it as non-cooperative.
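The sketch below gives set-based approximations of these three membership notions; the actual metrics used in the study may differ:

```python
# Illustrative set-based versions of the three membership notions described
# above. Key names and the keying-material representation are assumptions.
def inclusion(device_keys: set[str], colony_key: str) -> bool:
    """Member if it carries the colony's designated keying material."""
    return colony_key in device_keys

def loyalty(device_keys: set[str], colony_key: str, other_colony_keys: set[str]) -> bool:
    """Member only if it carries our key and none of any other colony's keys."""
    return colony_key in device_keys and device_keys.isdisjoint(other_colony_keys)

def aversion(device_keys: set[str], taboo_key: str) -> bool:
    """The colony pre-agrees that carrying the taboo substance marks a device
    as non-cooperative; membership requires avoiding it."""
    return taboo_key not in device_keys

device = {"colony_A", "route_key_7"}
print(inclusion(device, "colony_A"))                          # True
print(loyalty(device, "colony_A", {"colony_B", "colony_C"}))  # True
print(aversion(device, "blacklist_marker"))                   # True
```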
While the various notions of similarity formed our minor control, the presence or absence of a distributed database for scoring strategic responses formed the major control. This major control can be thought of in two ways: first, as a natural means to share strategic information among agents, and second, as a group benefit package, a strategic-score database intended to improve the learning rates of cooperative devices despite repeated deceptive attacks. Moreover, the cooperative devices operate in the open, so any deceptive or non-cooperative strategies they observe can be recorded in the shared database, but not vice versa.
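A minimal sketch of such a shared record, with an assumed structure that is not M-Coin itself, might look like the following:

```python
# Assumed structure for the "group benefit package": cooperative devices append
# observations of deceptive behavior to a shared record, which other
# cooperators consult before interacting.
from collections import Counter

shared_ledger: list[tuple[str, str]] = []   # (observed_identity, observed_strategy)

def report(identity: str, strategy: str) -> None:
    """Cooperative devices operate in the open and log what they observe."""
    shared_ledger.append((identity, strategy))

def deception_count(identity: str) -> int:
    """How many deceptive observations the group has recorded for an identity."""
    return Counter(s for i, s in shared_ledger if i == identity)["deceive"]

report("node_42", "cooperate")
report("node_99", "deceive")
report("node_99", "deceive")
print(deception_count("node_99"))  # 2 -> faster collective learning against it
```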
Wrapping Up and Looking Ahead
Our experiments indicated that what strongly deters deception (measured as the utility gained or lost by mutant deceptive strategies when compared to the bulk average of the cooperative types) is the existence of a cooperative group benefit package. Within that package, we use M-Coin to form a shared database of strategic information. Another important ingredient of control is the lifecycle (measured from mutation to abandonment) of a deceptive strategy. Next, we attempted to identify just how much strategic information needs to be shared, and how to share it in a distributed, ad hoc system. These results buoy our hope that this approach will soon lead to a safer Internet, especially as wireless ad hoc networking becomes more common in a wide range of applications, including transportation, sensor networks, and robotics.
Our experiments demonstrated the difficulty of sustaining malicious identities in environments where agents can contest identity claims using partial, similarity-based verification techniques. Our future work will focus on building an identity system that captures and deploys these mechanisms as robustly as ant colonies do. This work will include further study to identify the regulatory modes of the system and to consider the impact of physically and biologically inspired methods that may support joint decision-making and improve the trustworthiness of systems.
We welcome your feedback on this research in the comments section below.
Additional Resources
Read the paper Identity Deception and Game Deterrence via Signaling Games, which won best paper at the 9th EAI International Conference on Bio-inspired Information and Communications Technologies (formerly BIONETICS).
Watch a video describing our work.