
Context-Aware Computing in the DoD

Jeff Boleng

In their current state, wearable computing devices, such as glasses, watches, or sensors embedded into your clothing, are obtrusive. Jason Hong, associate professor of computer science at Carnegie Mellon University, wrote in a 2014 co-authored article in Pervasive Computing that while wearables gather input from sensors placed optimally on our bodies, they can also be "harder to accommodate due to our social context and requirements to keep them small and lightweight."

For soldiers in battle or emergency workers responding to contingencies, seamless interaction with wearable devices is critical. No matter how much hardware soldiers wear or carry, it will be of no benefit if they have to stop what they are doing to interact with it while responding to enemy fire or another emergency. This blog post describes our joint research with CMU's Human-Computer Interaction Institute (HCII) to understand the mission, role, and task of individual dismounted soldiers, using context derived from sensors on their mobile devices and bodies, to ensure they have the information and support they need.

A Model for Context-Aware Computing

In beginning this research, we partnered with Dr. Anind Dey, an early pioneer in context-aware computing and director of HCII. Dey wrote one of the seminal papers on contextual computing, "Understanding and Using Context," while completing his doctoral work at Georgia Institute of Technology. At HCII, Dey researches sensors and mobile technology to develop tools and techniques for understanding and modeling human behavior, primarily within the domains of health, automobiles, sustainability, and education.

Our collaboration with Dey and other HCII researchers aims to use burgeoning computing capabilities that are worn on users' bodies or tied to users' smartphones through cloud infrastructure. We want to ensure this technology works with the user in an unobtrusive way and anticipates the user's informational and other needs.

Helping Soldiers on the Battlefield

Through our joint research effort, we developed a framework and a data model that codify a soldier's role and task within the context of a larger group mission. We then examined the data streams available from sensors on the smartphones that soldiers use in the field and augmented our experimentation with other body-worn sensors.
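
To make the data model concrete, the sketch below shows one way such a codification might look in Python. The class and field names (Mission, Role, Task, SoldierContext) are illustrative assumptions, not the framework's actual schema.

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional


    class Role(Enum):
        """Illustrative roles within a small unit (not the framework's actual taxonomy)."""
        COMMANDER = "commander"
        RADIO_OPERATOR = "radio_operator"
        GUNNER = "gunner"
        DISPOSAL_TECHNICIAN = "disposal_technician"


    @dataclass
    class Task:
        """A single task a soldier must complete as part of the mission."""
        name: str
        completed: bool = False


    @dataclass
    class SoldierContext:
        """Context for one dismounted soldier, derived from role, tasks, and sensor data."""
        soldier_id: str
        role: Role
        tasks: List[Task] = field(default_factory=list)
        current_activity: Optional[str] = None  # e.g., "running" or "prone", inferred from sensors


    @dataclass
    class Mission:
        """Group mission that ties individual roles and tasks together."""
        name: str
        phase: str  # e.g., "movement", "on site", "exfil"
        team: List[SoldierContext] = field(default_factory=list)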

Take the example of an explosives disposal technician sent to investigate unexploded ordnance. As the technician approaches it, his or her smartphone or wearable device, sensing the location, would automatically disable all wireless communications, both the technician's and those of nearby soldiers, that could trigger the ordnance. Wearable devices or smartphones that remain a safe distance away could then notify other soldiers in the unit to retreat to a standoff distance.
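
As a rough illustration of the kind of context rule this scenario implies, the sketch below disables a device's transmitters once its GPS fix falls inside a hazard radius around the marked ordnance. The radii, the device object, and the disable_radios/notify hooks are all hypothetical; they are not part of our prototype.

    import math

    # Hypothetical standoff distances in meters; illustrative values only.
    RF_SILENCE_RADIUS_M = 100.0
    EVACUATION_RADIUS_M = 300.0


    def distance_m(lat1, lon1, lat2, lon2):
        """Approximate great-circle distance in meters between two GPS fixes (haversine)."""
        r = 6_371_000.0
        phi1, phi2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
        return 2 * r * math.asin(math.sqrt(a))


    def apply_ordnance_policy(device, ordnance_lat, ordnance_lon):
        """Apply a location-triggered rule based on distance from the marked ordnance."""
        d = distance_m(device.lat, device.lon, ordnance_lat, ordnance_lon)
        if d < RF_SILENCE_RADIUS_M:
            device.disable_radios()  # hypothetical hook: cut transmitters that could trigger the device
        elif d < EVACUATION_RADIUS_M:
            device.notify("Move back to standoff distance")  # hypothetical notification hook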

Another scenario might involve a wearable camera. As the technician approaches an explosive device, a wearable camera, such as Google Glass, could conduct object detection and recognition. The camera would then, ideally, provide information, such as type of device, amount of yield, type of fuse, or whether the device is similar to one that had been previously defused. The camera may even provide common disabling techniques without the soldier having to scroll through options or issue commands.

Knowing a soldier's mission, role, and task, our framework incorporates all the sensor data (including audio, video, and motion data) and then delivers the information that, ideally, is most appropriate for a soldier in a given scenario.

We then extended the framework to a group context because soldiers and first responders almost always work in teams. A group of soldiers or emergency responders has a mission, and, based on that mission, each member has a role to fulfill (for instance, radio operator, gunner, or commander). Based on those roles and that mission, they all have a number of tasks they must complete.

We developed a framework and mobile device prototype that would share context among all the handheld devices used by a team working on a mission to help the team coordinate tasks most effectively.
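
This post does not detail the prototype's fusion logic, but as a simple sketch of the idea, each device could publish its soldier's inferred activity and any device could reduce those reports to a coarse group context. The majority vote below is an assumption made for illustration, not the method our prototype uses.

    from collections import Counter
    from typing import Dict


    def infer_group_context(reports: Dict[str, str]) -> str:
        """Reduce per-soldier activity reports (soldier_id -> activity) to a coarse group context."""
        if not reports:
            return "unknown"
        activity, count = Counter(reports.values()).most_common(1)[0]
        # If most of the team reports the same activity, treat it as the group context.
        return activity if count >= len(reports) / 2 else "mixed"


    # Example: three devices sharing their inferred activities over the team network.
    print(infer_group_context({"s1": "taking_cover", "s2": "taking_cover", "s3": "returning_fire"}))
    # -> "taking_cover"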

Testing Our Framework in the (Paintball) Field

Each of the smartphones or wearable devices that we experimented with had, on average, between eight and 12 raw sensor streams tracking the following (a minimal record sketch follows the list):

  • temperature
  • barometric pressure
  • location data from GPS
  • received signal strength from various local wireless networks
  • inertial measurement unit (IMU) readings from six-axis devices that not only detail whether a user is moving up, down, left, or right, but also provide acceleration rates for each axis
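
As a minimal sketch, a single time-stamped record like the one below can represent readings from any of these streams; the stream names and field layout are illustrative assumptions, not our actual data format.

    from dataclasses import dataclass
    from typing import Tuple


    @dataclass
    class SensorSample:
        """One time-stamped reading from a single raw sensor stream."""
        timestamp_ms: int          # device uptime or GPS time
        stream: str                # e.g., "imu_head", "imu_leg", "gps", "baro", "temp", "wifi_rssi"
        values: Tuple[float, ...]  # e.g., (ax, ay, az, gx, gy, gz) for a six-axis IMU


    # Example readings from two of the streams listed above (values are made up).
    imu = SensorSample(timestamp_ms=123456, stream="imu_leg",
                       values=(0.02, -0.15, 9.81, 0.001, 0.02, -0.003))
    gps = SensorSample(timestamp_ms=123500, stream="gps", values=(40.4443, -79.9436, 280.0))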

Next, we designed several scenarios that were representative of small-unit tactics, everything from an ambush to a patrol to a coordinated retreat. We decided to test our scenarios in paintball sessions because the feedback mechanism (the sting of being struck by a paintball) provided enough incentive for the volunteers (who were drawn from the 911th Airlift Wing) to react realistically. At an outdoor paintball course, we attached sensors to the bodies of our volunteers. We then filmed the volunteers in scripted scenarios and recorded the sensory data streams.

Relying on activity recognition research, we then took the data from a dozen high-bandwidth (high-sampling-rate) streams for each volunteer and determined, based on those sensor streams, each volunteer's activity as well as the larger activity the group was performing. This work is another area in which we are collaborating with Dey.
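
Activity recognition over high-sampling-rate streams typically slides a short window over the raw samples, computes summary features per window, and feeds those features to a classifier. The sketch below shows that common pattern; the window length, step size, and feature choices are assumptions, not the parameters we used.

    import numpy as np


    def window_features(samples: np.ndarray, window: int = 128, step: int = 64) -> np.ndarray:
        """Slide a fixed-length window over raw IMU samples (rows = time, cols = axes)
        and compute simple per-axis summary features for each window."""
        feats = []
        for start in range(0, len(samples) - window + 1, step):
            w = samples[start:start + window]
            feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0), np.abs(w).max(axis=0)]))
        return np.array(feats)


    # Example: roughly 10 seconds of six-axis IMU data sampled at ~100 Hz (random stand-in data).
    raw = np.random.randn(1000, 6)
    X = window_features(raw)  # one feature row per window, ready for a classifier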

We tested two approaches (both sketched in code after this list):

  • Taking the individual sensor streams and recognizing each individual activity (leveraging machine learning), then looking at the activities performed by each volunteer to try to infer a group activity.
  • Taking all the raw sensor feeds from all of the volunteers directly into the machine-learning algorithms. This approach allowed us to jump from raw data to the understanding that, for example, a group of soldiers is under attack or retreating without first exploring the individual context.
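
The sketch below contrasts the two approaches using scikit-learn-style classifiers; the random forest, the majority-vote fusion, and the placeholder data are assumptions made for illustration, not the algorithms we ultimately settled on.

    from collections import Counter

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(0)

    # Placeholder training data: 200 windows x 18 features per soldier, 3 individual activity classes.
    X_ind = rng.normal(size=(200, 18))
    y_ind = rng.integers(0, 3, size=200)    # per-soldier activity labels
    X_grp = rng.normal(size=(200, 18 * 4))  # features of all four soldiers concatenated per window
    y_grp = rng.integers(0, 2, size=200)    # group activity labels (e.g., ambushed or not)

    # Approach 1: classify each soldier's activity, then fuse the labels into a group activity.
    individual_clf = RandomForestClassifier().fit(X_ind, y_ind)


    def group_from_individuals(per_soldier_features):
        """Majority vote over per-soldier predicted activities (a stand-in fusion rule)."""
        labels = [individual_clf.predict(f.reshape(1, -1))[0] for f in per_soldier_features]
        return Counter(labels).most_common(1)[0][0]


    # Approach 2: feed every soldier's features to one classifier that predicts the group activity directly.
    group_clf = RandomForestClassifier().fit(X_grp, y_grp)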

Currently, we are examining the raw sensory data captured during the paintball exercises and labeling it. By so doing, we are trying to determine, for example, whether an individual was running, retreating, or part of an entire group that was being ambushed and returning fire. With the video capture providing ground truth for their activity, we run the raw sensory data, in every combination we can think of, through the machine-learning algorithms and try to determine which sensor streams (and combinations of sensor streams) most accurately predict an individual's activity.

For example, in some instances we needed to combine the inertial measurement unit data from the leg and the head to accurately predict the individual's position. The benefit of using machine-learning classifiers is that they are largely agnostic to what the data represents. We can feed any combination of sensor streams to the classifiers and compare the results to the labeled data for accuracy. This comparison allows us to determine which sensor streams are most applicable for recognizing each type of individual and group activity.
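
The sketch below illustrates that kind of exhaustive comparison: every combination of sensor streams is fed to a classifier and scored against the video-derived labels. The stream names, placeholder data, and cross-validated random forest are assumptions made for illustration.

    from itertools import combinations

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)

    # Placeholder per-stream feature blocks (rows = windows) and video-derived activity labels.
    streams = {
        "imu_head": rng.normal(size=(300, 18)),
        "imu_leg": rng.normal(size=(300, 18)),
        "gps": rng.normal(size=(300, 3)),
        "baro": rng.normal(size=(300, 1)),
    }
    labels = rng.integers(0, 4, size=300)  # e.g., running, prone, taking cover, returning fire

    best = ("", 0.0)
    for k in range(1, len(streams) + 1):
        for combo in combinations(streams, k):
            X = np.hstack([streams[s] for s in combo])
            acc = cross_val_score(RandomForestClassifier(), X, labels, cv=3).mean()
            if acc > best[1]:
                best = ("+".join(combo), acc)

    print("Most predictive stream combination:", best)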

We conducted three paintball exercises in 2014 and, using the vast amount of data we recorded, developed trained models. We then applied these trained models to help recognize individual and group activity.

Our objective was to use all those data streams to infer an individual's activity or physical position and then determine how those inferences relate to the individual's predefined set of roles and tasks. For example, it is not enough to know that someone's role is "disposal technician"; it is also beneficial to know the particular phase of the mission. Is our disposal technician getting suited up? If so, it would be of value to provide background information on the layout of the site the technician is traveling to.

Once a technician arrives at the ordnance, he or she would require different information (see the sketch after this list):

  • What type of device am I looking at?
  • How big should my safety cordon be?
  • How do I dismantle the device?
  • How do I warn people?
  • Am I able to warn people?
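
A minimal sketch of how role and mission phase could drive information delivery is shown below. The phases, roles, and information items are illustrative assumptions, not the framework's actual content.

    # Map (role, mission phase) to the information the framework would push; contents are illustrative.
    INFO_NEEDS = {
        ("disposal_technician", "suiting_up"): ["site layout", "route to the device"],
        ("disposal_technician", "on_site"): [
            "device type and likely yield",
            "recommended safety cordon",
            "common disabling techniques",
            "who is inside the cordon and how to warn them",
        ],
        ("commander", "on_site"): ["team positions", "cordon status", "support ETA"],
    }


    def information_for(role: str, phase: str):
        """Return the information items to push for a given role and mission phase."""
        return INFO_NEEDS.get((role, phase), [])


    print(information_for("disposal_technician", "on_site"))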

Results and Challenges

We conducted three rounds of paintball exercises, and our results improved with each exercise. In our first two exercises, with certain combinations of the sensors, we were able to identify exaggerated behaviors for an individual (e.g., running or falling) with 90 percent accuracy.

While other researchers have conducted similar work in this field, our objective was to recognize group behavior in addition to individual behavior. Our aim is to determine whether a squad is under attack before a member of the squad must spend precious seconds to radio that information back to a command center or supporting forces. Knowing immediately when soldiers in a squad have come under fire will allow the forward operating base to deploy support seconds or even minutes earlier.

One challenge we currently face is that models trained on a general population do not perform as accurately for specific individuals. Once we apply a general model to an individual, a future challenge is to learn the quirks of that particular person, for example, a gait that differs slightly from everyone else's. Our objective is onboard, continuous learning, so that the model becomes personalized to the individual. This personalization will enable highly accurate activity recognition and information delivery.
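
One common way to personalize a general model on the device is incremental (online) learning, where a deployed classifier keeps updating on the individual's own labeled feature windows. The sketch below uses scikit-learn's SGDClassifier and partial_fit as a stand-in for whatever onboard learning approach is ultimately adopted; the data are placeholders.

    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.default_rng(0)
    classes = np.array([0, 1, 2])  # e.g., walking, running, prone

    # Start from a model trained on general-population data (placeholder data here).
    model = SGDClassifier()
    model.partial_fit(rng.normal(size=(500, 18)), rng.integers(0, 3, size=500), classes=classes)

    # On the device, keep refining the model with the individual's own labeled feature windows.
    for _ in range(10):
        X_new = rng.normal(size=(20, 18))    # new windows from this individual
        y_new = rng.integers(0, 3, size=20)  # their (assumed) labels
        model.partial_fit(X_new, y_new)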

Another challenge that we faced (and we are not the first to face it) was the logistics of working with human volunteers. Simply coordinating volunteers, sensors, wireless data gathering, and cameras proved hard. We learned that it worked best to be very specific in our communications: laying out a strict agenda and keeping volunteers informed of it.

After the exercises, we also faced massive amounts of data (hundreds of gigabytes), including video and raw sensor data. We wrote several utilities that automate distillation of the data. We also wrote apps that would pull the data off the cameras automatically during an experiment and archive it on a local hard drive. In addition, we wrote a remote-control app for the cameras on the Android phones that were distributed around the paintball field.

Looking Ahead

Soldiers today carry much of the same gear their predecessors have carried for the last several decades (a radio, water, personal protective gear, ammunition, and weapons). Currently, soldiers pay a price to have computing on the battlefield: the hardware is heavy, the batteries are heavier still, and battery life is limited. Right now, the benefits of computing have not been great enough for soldiers and first responders to justify carrying the extra weight and dealing with the added complexity.

Our work in this area will continue until wearable computers are less obtrusive, especially for soldiers, and bring benefits and an information advantage that clearly offset the added weight and complexity.

We welcome your feedback on our research in the comments section below.

Additional Resources

The podcast SEI-HCII Collaboration Explores Context-Aware Computing for Soldiers provides additional details on the SEI's research collaboration with Dr. Dey's group.

To read "Understanding and Using Context" by Anind Dey, please visit
https://www.cc.gatech.edu/fce/ctk/pubs/PeTe5-1.pdf.

To learn about the research of the Advanced Mobile Systems Initiative, please visit
http://www.sei.cmu.edu/about/organization/softwaresolutions/mobile-systems.cfm.
