
Improving Disaster Response with the xView2 Challenge

Created July 2020 • Updated March 2025

In 2018, the Department of Defense's (DoD's) Defense Innovation Unit (DIU), together with the Software Engineering Institute (SEI) and several humanitarian assistance and disaster recovery (HADR) organizations, launched the xView2 Challenge to create accurate, efficient machine learning (ML) models that make disaster response faster and safer.

Often first on the scene after a disaster, the DoD is invested in improving the analysis of imagery of disaster areas to assist with HADR efforts. To support this, the xView2 Challenge brought together participants who trained ML models to swiftly and accurately find and classify damaged buildings using high-resolution satellite images from a wide spectrum of natural disasters.

An SEI team worked with the DIU team to create the challenge; the baseline ML models; a building damage scale; and xBD, a database of satellite photos with human-labeled building damage, against which competitors’ ML results were judged.

In 2024, the SEI began developing a full-stack web application that enables mission users to fine-tune custom object detection models trained on satellite imagery. This tool, which builds on the existing xView2 work, will increase the scale and speed of providing HADR for natural disasters and world events, as well as for battle damage assessment efforts.

Assessing Building Damage: Slow, Difficult, and Dangerous

The need for the xView2 Challenge arose because effective disaster response depends on quickly and accurately assessing the situation in the affected area. Before responders can act, they need to know the location, cause, and severity of damage to homes, schools, hospitals, businesses, and other structures.

Effective disaster response requires immediate assessments of building damage. In-person assessments are the most accurate but also the most dangerous because they require people on the ground to directly inspect damaged structures during or immediately after a disaster.

Damage assessments from satellite imagery could supplement or even replace in-person assessments. However, while satellites provide unmatched overhead views of a disaster area, this raw imagery is not enough to guide response and recovery efforts.

High-resolution images are needed to spot damaged and destroyed structures. But because disasters cover large areas, analysts must search through huge numbers of images to localize and classify building damage. This annotated imagery must then be summarized and communicated to the recovery team. The entire process is time-consuming, laborious, and prone to human error. On top of that, analysts can face storage, bandwidth, and memory challenges when processing satellite data and communicating their results. The xView2 Challenge explored how to assess disaster damage by using computer vision algorithms to analyze satellite imagery.


Our Collaborators

  • Defense Innovation Unit (DIU), Department of Defense (DoD)
  • Carnegie Mellon University School of Computer Science
  • Joint Artificial Intelligence Center (JAIC)
  • National Aeronautics and Space Administration (NASA)
  • National Geospatial-Intelligence Agency (NGA)
  • California Department of Forestry and Fire Protection (CAL FIRE)
  • California Air National Guard
  • Federal Emergency Management Agency (FEMA)
  • United States Geological Survey (USGS)
  • California Governor's Office of Emergency Services (CalOES)
  • National Security Innovation Network (NSIN)
  • UC Berkeley AI Research Lab
  • CrowdAI, Inc.
  • Australian Geospatial Intelligence Organisation (AGO)

The xView2 Challenge: Using ML to Assess Building Damage from Satellite Imagery

When expert human analysts examine satellite imagery after a disaster, they apply contextual knowledge about the geography of the area and the weather or disaster conditions. xView2 Challenge participants replicated this process by applying deep learning and other sophisticated computer vision techniques to a labeled dataset of satellite imagery before and after disasters, including wildfires, landslides, dam collapses, volcanic eruptions, earthquakes, tsunamis, storms, and floods. Their goal was to create the fastest and most accurate algorithms to detect and assess building damage.

The competition resulted in xView2, a machine learning system that analyzes satellite imagery to classify damage to structures.
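To make this concrete, below is a minimal, hypothetical PyTorch sketch of the siamese change-detection pattern that many competitors built on: a shared encoder embeds the pre- and post-disaster images, and a decoder fuses the two feature maps to predict a per-pixel damage class. Actual entries used far larger architectures (typically U-Net variants with pretrained backbones); every layer choice here is an illustrative assumption.

```python
import torch
import torch.nn as nn

class SiameseDamageNet(nn.Module):
    """Toy siamese change-detection model: a shared encoder processes
    the pre- and post-disaster images, and a decoder predicts a
    per-pixel damage class from the fused features."""

    NUM_CLASSES = 5  # background + 4 Joint Damage Scale levels

    def __init__(self):
        super().__init__()
        # Shared encoder, applied to both images (weights are reused).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: fuse pre/post features, upsample to full resolution.
        self.decoder = nn.Sequential(
            nn.Conv2d(128, 64, 3, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, self.NUM_CLASSES, 1),
        )

    def forward(self, pre, post):
        fused = torch.cat([self.encoder(pre), self.encoder(post)], dim=1)
        return self.decoder(fused)  # (N, NUM_CLASSES, H, W) logits

# One 1024x1024 pre/post pair, the tile size used in xBD.
model = SiameseDamageNet()
pre, post = torch.randn(1, 3, 1024, 1024), torch.randn(1, 3, 1024, 1024)
damage_map = model(pre, post).argmax(dim=1)  # per-pixel damage class
```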

The xBD Dataset

For the xView2 Challenge, SEI researchers built xBD, a large-scale dataset of labeled satellite images created specifically for the competition. All competitors used it, and it remains one of the largest, most comprehensive, and highest-quality public datasets of annotated, high-resolution satellite imagery.

The xBD dataset contains “before” and “after” satellite images from disasters around the world. The SEI worked with disaster response experts to create the Joint Damage Scale, an annotation scale that accurately represents real-world damage conditions. Every image in the xBD dataset is labeled with building outlines, damage levels, and satellite metadata. The dataset also includes bounding boxes and labels for such environmental factors as fire, water, and smoke.
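For readers exploring xBD, the hedged sketch below reads one post-disaster label file and tallies buildings by damage level. The field names ("features", "xy", "properties", "subtype", "wkt") and the example file name follow the label layout and naming convention described with the public xBD release, but you should verify them against the copy you download.

```python
import json
from collections import Counter

def summarize_damage(label_path):
    """Count buildings per Joint Damage Scale level in one xBD label
    file and collect their footprint polygons (WKT strings)."""
    with open(label_path) as f:
        label = json.load(f)
    counts, polygons = Counter(), []
    # "xy" holds annotations in pixel coordinates; "lng_lat" holds
    # the same annotations in geographic coordinates.
    for feature in label["features"]["xy"]:
        props = feature["properties"]
        if props.get("feature_type") == "building":
            # subtype: no-damage, minor-damage, major-damage, destroyed
            counts[props.get("subtype", "un-classified")] += 1
            polygons.append(feature["wkt"])
    return counts, polygons

counts, _ = summarize_damage(
    "labels/hurricane-harvey_00000000_post_disaster.json")
print(counts)  # e.g., Counter({'no-damage': 61, 'minor-damage': 4})
```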

The xBD dataset is available for public use under a Creative Commons license.


After the Competition

In the years since the competition, the ML models the SEI developed for the xView2 Challenge have been used to assess building damage in the wake of wildfires in Australia and California. Chief Manuel Villalba, a California National Guard intelligence analyst specializing in wildfires who used the xView2 prototype, said, “Generally speaking, an analyst would take an entire day or two to clear a large fire area containing hundreds of structures. According to our testing, it’s possible [xView2] could assess it in less than 10-20 minutes.”

At DIU’s request, the SEI has built on the xView2 effort. In 2024, the SEI began developing a prototype web application that allows mission users to upload existing satellite imagery, add damage labels, and train custom ML model adapters that improve accuracy when running inference on new, unlabeled satellite imagery. This workflow greatly increases predictive power, reduces the time humans spend manually labeling satellite data, and scales to let multiple users work together to provide better HADR and save lives.
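The application's architecture has not been published, so the sketch below only illustrates the general adapter idea in PyTorch: freeze a pretrained backbone and train a small, mission-specific head on the user's newly labeled tiles. The model choice (torchvision's deeplabv3_resnet50, a segmentation model used here for continuity with xView2, although the application targets object detection), the swapped layer, and the five-class output are all assumptions made for illustration.

```python
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 5  # background + 4 damage levels (assumed)

# Load a pretrained segmentation model and freeze every parameter.
model = deeplabv3_resnet50(weights="DEFAULT")
for param in model.parameters():
    param.requires_grad = False

# Replace the final classifier layer with a trainable "adapter" head;
# only this layer is updated during fine-tuning.
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, masks):
    """One fine-tuning step on a batch of user-labeled tiles.
    images: (N, 3, H, W) floats; masks: (N, H, W) class indices."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images)["out"], masks)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because only the small head is trained, each mission's adapter can be fine-tuned quickly on modest hardware and stored separately from the shared backbone, which is one way such a multi-user workflow can scale.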

Learn More

Deep Learning and Satellite Imagery: DIUx xView Challenge

Newsletter

This February 6, 2019, SEI Bulletin discusses the xView 2018 Detection Challenge.
