
Six Dimensions of Trust in Autonomous Systems

Paul Nielsen

In January 2022, the Honorable Heidi Shyu, Undersecretary of Defense for Research and Engineering [USD(R&E)] for the U.S. Department of Defense (DoD), told the Potomac Officers Club that Defense Secretary Lloyd J. Austin III had charged her with finding ways to operate within contested regions and to penetrate strongly defended areas.

USD(R&E) has responded by identifying critical technology priority areas, one of which is Trusted Artificial Intelligence (AI) and Autonomy. Undersecretary Shyu has suggested that establishing trust in AI and autonomous systems is essential to their successful application. The effective transition of increased autonomy depends on trust that systems will have appropriate cybersecurity and will perform within ethical boundaries.

Establishing trust for complex systems is hard. Establishing trust for non-deterministic systems and for systems that continuously learn is even harder. Managers, chief engineers, and boards should be aware of these challenges and the strategies to overcome them. In this blog post, I discuss the adoption and growth of autonomous systems and provide six considerations for establishing trust.

Growth and Prevalence of Autonomous Systems

Autonomous systems can operate continuously, accelerate information sharing, process large amounts of data, work where humans can’t safely go, operate with greater persistence and endurance than humans can, and even explore the universe.

Autonomous systems in use today are the product of decades of R&D that yielded capabilities including digitization of sensors, adaptive algorithms, natural user interfaces, machine learning (ML), and machine vision. They are also the result of improved software practices and the convergence of software capabilities, including virtual integration, DevOps, continuous delivery, architecture model-based engineering, and automatic code generation.

Even as those capabilities were being developed and deployed, however, systems with some degree of autonomy were already improving productivity. In manufacturing, for example, robotic arms have become indispensable in assembly lines, advancing from performing a few repetitive tasks to operating along multiple axes and even moving in space. In the future, robotics will feature real-time motion-planning algorithms.

To appreciate the growing ubiquity of autonomous systems in our lives today, we need only look at the automobiles we now drive. According to one analyst’s report, the market for automotive AI hardware, software, and services will reach $26.5 billion by 2025, up from $1.2 billion in 2017. Automobiles today incorporate AI technology in adaptive cruise control, automated parking, and blind-spot detectors, among other functions. The top five automotive AI applications today by revenue are

  • machine/vehicular object detection/identification/avoidance
  • personalized services in cars
  • building of generative models of the real world
  • predictive maintenance
  • localization and mapping

Other applications of autonomous systems in common use include automatic teller machines (ATMs); autopilot in aircraft, marine craft, or spacecraft; automated pharmaceutical manufacturing; and automated building-cleaning systems.

The essential point about systems with autonomy is this: Their use continues to increase because the systems can do things humans do, but better, and do things that humans cannot or should not do.

Challenges and Realities for Building Autonomous Systems

It would be inaccurate to suggest, however, that greater use means that building these systems is easy. It is not, because designing autonomous systems presents some unique challenges. Autonomous systems will operate in environments that cannot be fully planned for or anticipated; as a result, fully precise system requirements are not possible during development. In addition, the boundary between what a human does and what an autonomous system does may shift during a mission. As a result, these systems may need dynamic functional allocation between human and machine, and they may need to learn continuously and take advantage of open designs and open-source components to enhance flexibility and innovation.
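
To make the idea of dynamic functional allocation concrete, here is a minimal sketch in Python. The task names, confidence scores, and workload thresholds are illustrative assumptions, not values from any fielded system.

```python
from dataclasses import dataclass

# Illustrative sketch of dynamic human-machine function allocation.
# Task names, confidence values, and thresholds are assumptions for
# demonstration, not values from any fielded system.

@dataclass
class Task:
    name: str
    machine_confidence: float  # 0.0-1.0, machine's self-assessed competence
    safety_critical: bool

def allocate(task: Task, operator_workload: float) -> str:
    """Decide mid-mission whether the human or the machine performs a task."""
    # Safety-critical tasks stay with the human unless the machine is
    # highly confident in its own competence.
    if task.safety_critical and task.machine_confidence < 0.95:
        return "human"
    # Shift routine work to the machine as operator workload climbs.
    if operator_workload > 0.7 or task.machine_confidence > 0.9:
        return "machine"
    return "human"

if __name__ == "__main__":
    tasks = [
        Task("route re-planning", machine_confidence=0.92, safety_critical=False),
        Task("weapons release", machine_confidence=0.80, safety_critical=True),
    ]
    for t in tasks:
        print(t.name, "->", allocate(t, operator_workload=0.8))
```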

Software complexity poses another system-design challenge because software is increasingly called upon to do things it has never been asked to do before. The nature of autonomous systems is to change continuously and to keep evolving during the time they are fielded. This evolution gives rise to emergent behavior that demands frequent and seamless system modification.

To deliver the behaviors required, software must link systems together in more ways than ever before, a circumstance that challenges effective and safe operation. This increasing hyperconnectivity risks information overload for the human team members who use the systems. Extreme connectivity opens a greater surface for adversaries to create and exploit software vulnerabilities. The hyperconnected nature of these systems means that system boundaries are perpetually changing, and new interfaces are the norm rather than the exception, creating new opportunities for exploitation.

Six Dimensions of Establishing Trust

Through broad collaboration, people are combining advances in technology, modern development practices, and greater understanding of software and system architecture to enable the creation of increasingly autonomous systems. The successful use of these systems in national security and other critical domains depends in no small part on how readily humans will trust them.

Trust in those systems relies heavily on software that powers AI and other complex capabilities. Can software tools, technologies, and practices address challenges for humans trusting systems, systems trusting themselves and other systems, and systems trusting humans?

Manifesting trust in autonomous/AI systems has many dimensions. In this post, I discuss these six dimensions:

  • assurance
  • vulnerability discovery and analysis
  • system evolution
  • human-machine teaming
  • familiarity
  • software quality

Assurance

Humans need to sustain confidence in autonomous systems in an environment characterized by data overload, a need to interpret probabilistic results, and continual system learning, among other concerns.

Autonomous systems have their own concerns: they must interpret the human’s intent. The military operational domain provides a relevant example. Autonomous systems in military operations could learn alongside human team members by being brought into training and exercises. Operational commanders could explore how to work with the systems, and the systems could learn more about possible mission scenarios. Training together would also make the system’s continual learning less likely to overwhelm human operators and would permit both sides to adjust roles more easily. The result would be that human and system understand the mission goals in the same way, a foundation for trust.

Reliable datasets are essential to assurance. Data is the lifeblood of AI, and assurance requires that we emphasize data provenance and quality. We can instrument business and mission processes to produce effective data, and we must create mechanisms to cultivate, label, and share those data. The data must be protected, but not at the expense of maximal sharing with properly vetted researchers and implementers.
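
As a rough illustration of what emphasizing provenance can look like in practice, here is a minimal sketch that attaches provenance metadata and a tamper-evident hash to each labeled example. The schema and field names are assumptions for demonstration; real pipelines would adopt an established metadata standard.

```python
import hashlib
import json
from datetime import datetime, timezone

# Minimal sketch of attaching provenance metadata to a training example.
# The schema and field names are illustrative assumptions, not a standard.

def provenance_record(raw_bytes: bytes, source: str, labeler: str, label: str) -> dict:
    return {
        "sha256": hashlib.sha256(raw_bytes).hexdigest(),  # tamper-evident content hash
        "source": source,       # where the data was collected
        "labeler": labeler,     # who or what assigned the label
        "label": label,
        "collected_at": datetime.now(timezone.utc).isoformat(),
    }

record = provenance_record(b"<sensor frame bytes>", source="test-range-camera-3",
                           labeler="analyst-42", label="vehicle")
print(json.dumps(record, indent=2))
```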

One promising idea is to use the MIT Lincoln Laboratory Sidecar approach, which employs adjunct processors that support development and demonstration of advanced software functions. These processors can access a sensor’s data in real time while not interfering with the operation of previously verified sensor processors and software.
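
The Sidecar approach itself is a hardware-and-software architecture; as a purely software analogy, the sketch below fans sensor frames out to an experimental consumer through a bounded queue so that the verified processing path is never blocked or modified. The queue size and drop policy are assumptions.

```python
import queue
import threading

# Software analogy for the sidecar idea: an experimental consumer taps a
# copy of the live sensor stream without disturbing the verified path.
# Queue size and the drop policy are illustrative assumptions.

def fan_out(sensor_frames, verified_q: queue.Queue, experimental_q: queue.Queue):
    for frame in sensor_frames:
        verified_q.put(frame)                 # verified path always gets the frame
        try:
            experimental_q.put_nowait(frame)  # experimental path gets a copy...
        except queue.Full:
            pass                              # ...but may drop frames; it must never stall the main path

def experimental_worker(q: queue.Queue):
    while True:
        frame = q.get()
        if frame is None:                     # sentinel: shut down cleanly
            break
        # Run unproven analytics here without touching the verified pipeline.
        print("experimental function saw:", frame)

exp_q = queue.Queue(maxsize=8)
ver_q = queue.Queue()
worker = threading.Thread(target=experimental_worker, args=(exp_q,))
worker.start()
fan_out(["frame-1", "frame-2"], ver_q, exp_q)
exp_q.put(None)
worker.join()
```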

Vulnerability Discovery and Analysis

Increased autonomy can boost cybersecurity efforts in volume, speed, and persistence, especially in the areas of detection and mitigation. At the same time, though, autonomy increases the attack surface and thereby increases vulnerability.

In addition to normal software and systems vulnerabilities, autonomous systems are at risk from deliberate mis-training by attackers, spoofing, and hidden modes. Vulnerabilities in autonomous control of cyber-physical systems can have more dire consequences. The increased vulnerability of autonomous systems creates a need for continuous red-teaming; yet according to SAE International, in 2018, 30 percent of automobile makers did not have an established cybersecurity program and 63 percent tested less than half of their software, hardware, and other technologies for vulnerabilities.

Active research currently involves using autonomy in tools for vulnerability detection and response, such as Mayhem, the autonomous vulnerability hunter developed by a Carnegie Mellon University team that won the DARPA Cyber Grand Challenge.
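
Mayhem couples symbolic execution with fuzzing; as a much-simplified illustration of the fuzzing half alone, here is a minimal mutation-based loop against a contrived parser with a planted bug. Nothing here reflects Mayhem’s actual implementation.

```python
import random

# Minimal mutation-based fuzzing loop: a greatly simplified illustration
# of one technique behind autonomous vulnerability discovery. The target
# below is a contrived parser with a planted bug, not real software.

def target_parser(data: bytes) -> None:
    # Planted bug: a specific byte pattern crashes the parser.
    if len(data) > 3 and data[0] == 0x7F and data[3] == 0xFF:
        raise RuntimeError("planted crash: bad section length")

def mutate(seed: bytes) -> bytes:
    data = bytearray(seed)
    for _ in range(random.randint(1, 4)):              # flip a few random bytes
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

seed = b"\x7fMAGIC....header...."
for trial in range(200_000):
    candidate = mutate(seed)
    try:
        target_parser(candidate)
    except RuntimeError as crash:
        print(f"trial {trial}: input {candidate!r} triggered: {crash}")
        break
```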

System Evolution

For autonomous systems, we should move beyond the idea that system development and sustainment are separate phases. These systems continue to learn after delivery. For this reason, there must be a plan that coordinates processes, procedures, people, and data to manage the continual evolution of these systems, accounting for rising costs, changes that affect learning-model performance, recertification, dynamic operating environments, and legacy environments.

Besides eliminating the concept of a maintenance phase in the system lifecycle, continual evolution can also erode trust in the autonomous system. Evolution can be driven by human demands, as when the system is asked to respond to something not covered in its training. For example, a system’s model that was trained using road maps may be asked to predict the best route for travel by helicopter. Without retraining on new data, the system won’t produce a trustworthy result.
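
One common mitigation, sketched below, is to have the system recognize inputs outside its training envelope and defer to a human rather than guess. The feature names and bounds are illustrative assumptions.

```python
# Minimal sketch of an out-of-distribution guard: the system refuses to
# produce a prediction for inputs far outside its training envelope and
# defers to a human instead. Feature names and bounds are illustrative.

TRAINING_ENVELOPE = {
    "altitude_m": (0.0, 10.0),   # assumed: trained on ground-vehicle data only
    "speed_mps": (0.0, 40.0),
}

def predict_route_cost(features: dict) -> str:
    for name, (low, high) in TRAINING_ENVELOPE.items():
        value = features[name]
        if not (low <= value <= high):
            return f"DEFER TO HUMAN: {name}={value} outside training range [{low}, {high}]"
    return "model prediction: cost = ..."  # normal inference path would run here

# A helicopter-like input falls outside the ground-vehicle envelope.
print(predict_route_cost({"altitude_m": 500.0, "speed_mps": 70.0}))
```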

In a similar way, small flaws in the data used to train a system’s model can manifest as larger errors. An infamous example involves the accuracy of Google’s flu-occurrence predictions. In 2008, Google researchers produced an accurate prediction two weeks earlier than the Centers for Disease Control and Prevention. By 2013, however, the Google model’s predictions were found to be off by 140 percent. The model’s poor performance was caused in part by changes in the search terms on which it relied.
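
A minimal sketch of guarding against this failure mode is to monitor live inputs for drift away from the training distribution and flag the model for retraining. The test and threshold below are deliberately simple assumptions; production systems apply richer per-feature statistics.

```python
import statistics

# Minimal drift monitor: compare the mean of a live feature stream to the
# training distribution and flag retraining when it shifts. The z-score
# threshold of 3 is an illustrative assumption; real systems use richer
# tests (e.g., KS or population-stability statistics) per feature.

def drift_alarm(training_values, live_values, threshold=3.0) -> bool:
    mu = statistics.mean(training_values)
    sigma = statistics.stdev(training_values)
    live_mu = statistics.mean(live_values)
    z = abs(live_mu - mu) / (sigma / len(live_values) ** 0.5)  # z-score of the live mean
    return z > threshold

training = [0.10, 0.12, 0.11, 0.09, 0.13, 0.10, 0.12, 0.11]  # e.g., historical search-term rate
live     = [0.25, 0.27, 0.24, 0.26, 0.28]                    # usage pattern has shifted
print("retrain needed:", drift_alarm(training, live))
```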

Current research into the causes of and remedies for technical debt can help control maintenance and evolution costs, which matter all the more because the pace of change is so much faster with autonomous systems. Technical debt is incurred when design and implementation decisions that support rapid delivery push costs into maintenance and evolution. Active research in technical debt includes the development of an integrated, automated workbench of tools to detect and visualize technical debt and the codification of rules for detecting its likely sources.
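
As a small illustration of codifying rules for detecting likely debt sources, the sketch below scans a source tree for two simple indicators. The marker list and size threshold are assumptions; research workbenches combine many richer static and historical analyses.

```python
import re
from pathlib import Path

# Minimal sketch of codified rules for flagging likely technical-debt
# hotspots: debt-marker comments and oversized files. The markers and the
# 500-line threshold are illustrative assumptions.

DEBT_MARKERS = re.compile(r"\b(TODO|FIXME|HACK|XXX)\b")

def scan(path: Path):
    lines = path.read_text(encoding="utf-8", errors="replace").splitlines()
    for lineno, line in enumerate(lines, start=1):
        if DEBT_MARKERS.search(line):
            yield lineno, line.strip()[:60]
    if len(lines) > 500:
        yield len(lines), f"file exceeds 500 lines ({len(lines)})"

for source_file in Path(".").rglob("*.py"):
    for lineno, finding in scan(source_file):
        print(f"{source_file}:{lineno}: {finding}")
```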

Human-Machine Teaming

In real-world scenarios, autonomy is usually granted within some explicit or implicit context, such as the relationship between parents and children or the relationships among military personnel.

It is relatively easy for autonomous systems to follow explicit instructions, but machines may struggle to grasp the implicit meaning in mission orders, or commander’s intent. Though it comes from the realm of science fiction, Isaac Asimov’s I, Robot is instructive here. The Three Laws of Robotics in Asimov’s stories are

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But as the robots learned, they saw flaws in the three seemingly perfect laws, and revolt ensued.

The corollary to machines interpreting human meaning is the need for humans to interpret system results. To make predictions, the models used for machine learning recombine data features in seemingly arbitrary ways, making it difficult for humans to interpret and trust the results. This concern has drawn the attention of the European Union, which puts an onus on organizations that make autonomous systems. Its General Data Protection Regulation (GDPR) states, “Organizations that use ML to make user-impacting decisions must be able to fully explain the data and algorithms that resulted in a particular decision.” Some U.S. states have followed suit.
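
One widely used, model-agnostic way to approach such explanation demands is permutation feature importance: shuffle one input feature at a time and measure how much the model’s accuracy degrades. The sketch below uses a stand-in model and synthetic data purely for illustration.

```python
import random

# Minimal sketch of permutation feature importance, one model-agnostic way
# to explain which inputs drove a model's decisions. The "model" is a
# stand-in rule and the data are synthetic; real use would wrap a trained
# classifier and held-out data.

random.seed(0)

# Synthetic data: the label depends on "speed" only; "noise" is irrelevant.
rows = [{"speed": random.random(), "noise": random.random()} for _ in range(500)]
labels = [1 if r["speed"] > 0.5 else 0 for r in rows]

def model(row: dict) -> int:          # stand-in for a trained classifier
    return 1 if row["speed"] > 0.5 else 0

def accuracy(data: list) -> float:
    return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

baseline = accuracy(rows)
for feature in ["speed", "noise"]:
    values = [r[feature] for r in rows]
    random.shuffle(values)            # break the feature's link to the labels
    shuffled = [{**r, feature: v} for r, v in zip(rows, values)]
    drop = baseline - accuracy(shuffled)
    print(f"{feature}: accuracy drop {drop:.3f}")  # large drop => feature mattered
```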

The field of biometrics, referred to more generally as machine emotional intelligence or more plainly as machines sensing humans, is an area of active research. It could prove useful in battlefield settings, where autonomous systems might work alongside humans at checkpoints or detect live soldiers.

Familiarity

Think about the first person to step into an elevator cabin in the 1850s or take a seat in a passenger airplane in the 1910s. Experience says that humans do acclimate to technological advances, but over years or even decades. Rapid advances in autonomy have collapsed the time available for humans to grow familiar with new technology.

For all the ways in which increasingly autonomous systems are becoming part of everyday life, humans remain largely unacquainted with even the concept of a fully autonomous system that learns. Unfamiliarity makes people uncomfortable, produces frustration, and leads to mistrust.

To break down the trust barrier, we need the systems to become transparent about their reasoning. Active research in robot explainability includes using mathematical algorithms, sensor information, and system state to generate plain-language explanations of actions; and adapting robot behavior during execution to give humans better clues to help them predict what robots will do next.
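
A minimal sketch of the first idea, generating plain-language explanations from sensor readings and internal state, appears below. The state fields and wording templates are invented; research systems derive explanations from the planner’s actual decision variables.

```python
# Minimal sketch of plain-language action explanation generated from
# sensor readings and internal state. The state fields and wording
# templates are illustrative assumptions.

def explain_action(action: str, state: dict) -> str:
    if action == "stop":
        return (f"I stopped because my lidar detected an obstacle "
                f"{state['obstacle_distance_m']:.1f} m ahead, inside my "
                f"{state['safety_margin_m']:.1f} m safety margin.")
    if action == "reroute":
        return (f"I am rerouting because path confidence fell to "
                f"{state['path_confidence']:.0%}, below my "
                f"{state['confidence_floor']:.0%} threshold.")
    return "I am continuing: all monitored values are within normal bounds."

print(explain_action("stop", {"obstacle_distance_m": 1.8, "safety_margin_m": 2.5}))
```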

Software Quality

Software quality and the quest for defect-free software have always been important goals of software engineering. Quality may be even more important for autonomous systems, which rely so heavily on connectivity and complexity, and modern development and testing tools will be critical for establishing trust in the quality of these systems.

An architecture-centric approach can help ensure that the software delivers the behaviors and functionality required of the autonomous system. For example, the U.S. Army Aviation and Missile Research, Development, and Engineering Center (AMRDEC) developed the Architecture-Centric Virtual Integration Practice (ACVIP), which calls for model-then-build rather than the traditional build-then-test approach. The approach was applied to a health-monitoring system upgrade in the CH-47F helicopter through application of the internationally standardized Architecture Analysis & Design Language (AADL). Post-PDR (preliminary design review) investigation of the CH-47F upgrade identified 20 major integration issues that the contractor would otherwise not have discovered until three months before delivery, thereby preventing a 12-month delay in a 36-month project.
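
ACVIP rests on analyzing AADL models before code is written. As a language-neutral illustration of that model-then-build idea, the sketch below declares component interfaces as data and checks connections for mismatches prior to integration; the components, data types, and rates are invented, and real AADL analyses cover far more (timing, bus loading, safety).

```python
from dataclasses import dataclass

# Sketch of the model-then-build idea behind virtual integration: declare
# component interfaces as a model and check connections for mismatches
# before integration code exists. All names and values here are invented.

@dataclass(frozen=True)
class Port:
    component: str
    name: str
    data_type: str
    rate_hz: float

connections = [
    # (source port, destination port)
    (Port("gps", "position_out", "GeoPos64", 10.0),
     Port("health_monitor", "position_in", "GeoPos64", 10.0)),
    (Port("vibration_sensor", "spectrum_out", "SpectrumF32", 100.0),
     Port("health_monitor", "spectrum_in", "SpectrumF64", 50.0)),  # two latent mismatches
]

def check(src: Port, dst: Port) -> list:
    issues = []
    if src.data_type != dst.data_type:
        issues.append(f"type mismatch: {src.data_type} -> {dst.data_type}")
    if src.rate_hz > dst.rate_hz:
        issues.append(f"rate mismatch: producer {src.rate_hz} Hz > consumer {dst.rate_hz} Hz")
    return issues

for src, dst in connections:
    for issue in check(src, dst):
        print(f"{src.component}.{src.name} -> {dst.component}.{dst.name}: {issue}")
```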

Increased Autonomy Is Here; Ethics Must Not Be Overlooked

Autonomy, driven by AI, is present to a large degree in many areas of life today, pervading transportation, finance, manufacturing, and other commercial sectors. Although this trend is sure to continue, widespread, successful adoption depends on solving the trust issues.

Trust is not only a significant challenge in building autonomous systems; it is also the greatest barrier to their adoption. An important reason why is that dimensions of trust intermingle with how people perceive disruption from the increasing use of autonomous systems. Researchers laud the technologies of the first, second, and third industrial revolutions, all disruptive, for increasing wealth, expanding opportunity, and creating new jobs. Now, some foresee in the fourth industrial revolution an era in which humans will compete with autonomous systems for employment. A 2016 report by the U.S. Council of Economic Advisers, for instance, held that increasing autonomy imperiled 47 percent of U.S. jobs over the next decade.

Perhaps more than in the past, we need to push ahead on ethical use of autonomy. We need to understand how to wrap autonomous system uses in an ethical framework and context and to discover the limits of their use in connection with areas such as privacy concerns and civil rights. In the national security domain, a recent inroad concerning AI in autonomous systems is the U.S. Department of Defense report on guidelines for AI. Ultimately, we need to find how software can make it possible for AI-enabled autonomous systems to choose the greater good.

Additional Resources

The 2016 Defense Science Board Summer Study on Autonomy focuses on strategies and tactics to widen the use of autonomy and an approach to accelerate the advancement of the technology for autonomy applications and capabilities. The study highlights the need to build trust and enable autonomy’s use for the defense of the nation.

The 2019 Software Acquisition and Practices Study provides 10 primary recommendations and 16 additional recommendations to address the most critical statutory, regulatory, and cultural hurdles facing the Department of Defense when modernizing software acquisition.

The 2019 Defense Innovation Report, Software is Never Done: Refactoring the Acquisition Code for Competitive Advantage, discusses in detail how DoD can take advantage of the strength of the U.S. commercial software ecosystem.

Read SEI blog posts on AI engineering.
