
Can We Trust Our Cyber-Physical Systems?


August 12, 2020—Kicking up a cloud of dust, a drone takes off to gather overhead video—a typical intelligence, surveillance, and reconnaissance task. The drone flies a wide circle around its operator and disappears over the horizon. Can we trust it to safely complete its mission?

We can ask this question about any cyber-physical system: ground vehicles, robots, weapons, manned aircraft, ships, and, more broadly, any computer-based system that interacts with its physical environment. New cyber-physical systems are constantly being developed and deployed to help the Department of Defense respond and adapt to an ever-evolving array of threats. Being able to enforce safe behavior will speed the deployment and use of new technologies and increase trust in the systems military personnel depend on. How can we validate and verify that these systems do not endanger their users?

The SEI’s Dio de Niz is leading a team of researchers to develop the rapid certifiable trust approach: a lightweight, scalable method to rapidly validate whether cyber-physical systems are behaving safely. Rapid certifiable trust focuses on how systems and components act, not on their internal algorithms. “For a cyber-physical system, safe behavior means safe actions at the correct time, for instance, to avoid a crash,” said de Niz.

Rapid certifiable trust techniques could be incorporated into any type of cyber-physical system to improve its safety: autonomous, semiautonomous, teleoperated, or directly controlled by human beings. Rapid certifiable trust does not require access to the source code of components. Instead, it verifies that their outputs and corresponding behavior are safe. An enforcer monitors these outputs to ensure that they do not violate safety constraints, which are determined by verification models based on physics, logic, and timing.

  • Physics models verify the interaction between software and physical components, including the system’s physical properties such as mass, velocity, and torque.
  • Logical models ensure that the code computes the correct values.
  • Timing models guarantee that the values are produced at the right time, for example, to correct the behavior of the physical components before a crash can occur.
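
To make the enforcement idea concrete, here is a minimal sketch, in C, of an output enforcer for a hypothetical ground vehicle. The state and command structures, the braking limit, and the safety margin are illustrative assumptions, not part of the SEI framework; the point is that the enforcer reasons only about the command the controller emits and the physics of stopping before an obstacle, never about the controller's internal algorithm.

```c
/* Illustrative output enforcer (a sketch, not the SEI implementation).
 * The enforcer never inspects the controller's code; it only checks the
 * command the controller emits against a physics-based safety constraint. */
#include <stdbool.h>

/* Hypothetical state and actuation command for a ground vehicle. */
typedef struct { double velocity_mps; double obstacle_dist_m; } State;
typedef struct { double accel_mps2; } Command;

#define MAX_BRAKE_MPS2  6.0   /* assumed maximum braking deceleration  */
#define SAFETY_MARGIN_M 2.0   /* assumed buffer kept before an obstacle */

/* Physics check: after applying the command for one control period,
 * can the vehicle still brake to a stop before reaching the obstacle? */
static bool command_is_safe(const State *s, const Command *c, double dt_s) {
    double v_next = s->velocity_mps + c->accel_mps2 * dt_s;
    if (v_next < 0.0) v_next = 0.0;
    double dist_next = s->obstacle_dist_m - 0.5 * (s->velocity_mps + v_next) * dt_s;
    double stop_dist = (v_next * v_next) / (2.0 * MAX_BRAKE_MPS2);
    return dist_next - stop_dist > SAFETY_MARGIN_M;
}

/* Enforcer: pass safe commands through; otherwise substitute hard braking. */
Command enforce(const State *s, Command proposed, double dt_s) {
    if (command_is_safe(s, &proposed, dt_s)) {
        return proposed;
    }
    Command safe_fallback = { .accel_mps2 = -MAX_BRAKE_MPS2 };
    return safe_fallback;
}
```

In this pattern, the logical and timing models would add checks of the same shape: reject commands whose values are malformed or that arrive after their deadline, and fall back to a verified safe action.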

The team's published verification framework can be used to model specific problems in cyber-physical systems and, using its accompanying algorithms, verify that they behave safely. While the models provide irrefutable mathematical proof of correctness, they are difficult to scale to large systems. To make them practical for large, real-world systems, rapid certifiable trust applies these methods only to small enforcers that are specifically designed to monitor safety-critical properties. The enforcers operate in a real-time, mixed-trust computing system: verified, trusted components enforce the behavior of unverified, untrusted components. A tamperproof, verified hypervisor protects the enforcers from accidental or malicious modification by the unverified code. The hypervisor is available as open source software.
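
The following sketch illustrates the mixed-trust pattern under simplified assumptions: here the unverified controller is timed cooperatively inside the same loop, whereas the actual framework relies on the verified hypervisor to preempt it. The names, time budget, and fallback value are hypothetical.

```c
/* Illustrative mixed-trust control step (a sketch, not the SEI hypervisor API).
 * The unverified controller gets a time budget each period; if it overruns the
 * budget or its output fails the verified check, the trusted safe default wins. */
#define _POSIX_C_SOURCE 199309L
#include <stdbool.h>
#include <time.h>

typedef struct { double accel_mps2; } Command;

/* Stand-ins for the unverified controller and the verified safety predicate. */
static bool untrusted_controller(Command *out) { out->accel_mps2 = 1.0; return true; }
static bool command_is_safe(const Command *c)  { return c->accel_mps2 <= 2.0; }

static double elapsed_ms(struct timespec a, struct timespec b) {
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

Command mixed_trust_step(double budget_ms) {
    const Command safe_default = { .accel_mps2 = -6.0 };  /* assumed safe action */
    Command proposed;
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    bool produced = untrusted_controller(&proposed);      /* untrusted output */
    clock_gettime(CLOCK_MONOTONIC, &end);

    /* Timing check: the output must arrive within its budget.
     * Logical/physics check: it must satisfy the verified predicate. */
    if (!produced || elapsed_ms(start, end) > budget_ms || !command_is_safe(&proposed)) {
        return safe_default;
    }
    return proposed;
}
```

In the real system, this decision logic would run in the hypervisor-protected, trusted partition, so the untrusted code cannot tamper with the enforcer or its fallback action.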

As a proof of concept, de Niz and his team are currently implementing rapid certifiable trust techniques on a Navy system. Future plans include transitioning the approach to a deployed system and investigating how it interacts with autonomy, supporting the National Defense Strategy's focus on modernizing advanced autonomous systems. Artificial intelligence would be a strong candidate for the rapid certifiable trust approach.

This story first appeared in the 2019 SEI Year in Review. To learn more about this and other topics discussed in the Year in Review, visit 2019 SEI Year in Review Resources.

Photo: U.S. Army