Adaptive Autonomy as a Means for Implementing Shared Ethics in Human-AI Teams
Conference Paper
Publisher
Software Engineering Institute
Abstract
Rapid advances in artificial intelligence technologies are resulting in closer, more symbiotic interactions between artificially intelligent agents and human beings in a variety of contexts. One such context that has only recently begun receiving significant attention is human-AI teaming, where an AI agent operates as an interdependent teammate rather than a tool. These teams present unique challenges in creating and maintaining shared team understanding, specifically when it comes to a shared team ethical code. Because teams change in composition, goals, and environments, it is imperative that AI teammates be capable of updating their ethical codes in concert with their human teammates. This paper proposes a two-part model for implementing a dynamic ethical code for AI teammates in human-AI teams. The first part of the model proposes that the ethical code, in addition to the agent's team role, be used to inform an adaptive AI agent of when and how to adapt its level of autonomy. The second part explains how that ethical code is continually updated based on the AI agent's iterative observations of team interactions. This model contributes to the human-centered computing community because teams with higher levels of team cognition exhibit greater performance and longevity. More importantly, it proposes a model for more ethical use of AI teammates in human-AI teams that is applicable to a variety of human-AI teaming contexts and leaves room for future innovation.
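The abstract does not include an implementation, but the two-part model it describes can be sketched in code. The Python sketch below is illustrative only: the norm names, the weighted-norm representation of the ethical code, the discrete autonomy levels, and the update rule are all assumptions for this sketch, not the authors' formulation. Part one maps the ethical code plus the agent's team role to an autonomy level; part two adjusts the code based on observed team interactions.

```python
from dataclasses import dataclass, field

# Hypothetical autonomy levels, ordered from least to most autonomous.
AUTONOMY_LEVELS = ["advise_only", "act_with_approval", "act_and_report", "fully_autonomous"]


@dataclass
class EthicalCode:
    """Team ethical code as weighted norms; weights in [0, 1] (an assumption of this sketch)."""
    norm_weights: dict[str, float] = field(default_factory=dict)

    def constraint_strength(self) -> float:
        """Mean weight of the active norms; heavier norms constrain autonomy more."""
        if not self.norm_weights:
            return 0.0
        return sum(self.norm_weights.values()) / len(self.norm_weights)

    def update_norm(self, norm: str, strengthen: bool, rate: float = 0.1) -> None:
        """Part two of the model, as sketched here: nudge a norm's weight toward
        1.0 when teammates reinforce it (e.g., correct a violation) and toward
        0.0 when they signal it is overly restrictive."""
        current = self.norm_weights.get(norm, 0.5)
        target = 1.0 if strengthen else 0.0
        self.norm_weights[norm] = current + rate * (target - current)


@dataclass
class AdaptiveTeammate:
    """An AI teammate whose autonomy level is derived from the shared ethical code."""
    role_criticality: float  # 0.0 (peripheral role) .. 1.0 (safety-critical role)
    code: EthicalCode

    def select_autonomy_level(self) -> str:
        """Part one of the model, as sketched here: weaker constraints and a
        less critical team role permit a higher level of autonomy."""
        score = (1.0 - self.code.constraint_strength()) * (1.0 - self.role_criticality)
        index = min(int(score * len(AUTONOMY_LEVELS)), len(AUTONOMY_LEVELS) - 1)
        return AUTONOMY_LEVELS[index]


# One observe-update-adapt iteration (all norms and values illustrative).
code = EthicalCode({"avoid_harm": 0.9, "defer_on_ambiguity": 0.6})
agent = AdaptiveTeammate(role_criticality=0.7, code=code)
print(agent.select_autonomy_level())                      # autonomy set by code + role
code.update_norm("defer_on_ambiguity", strengthen=False)  # team relaxes a norm
print(agent.select_autonomy_level())                      # autonomy re-derived from updated code
```

The design choice in this sketch is that autonomy is never set directly by the team; it is always re-derived from the current ethical code and role, so updates to the shared code propagate to the agent's behavior on the next iteration.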
Part of a Collection
Proceedings of the AAAI Spring Symposium on AI Engineering, 2022