
Learning by Observing via Inverse Reinforcement Learning

Video
This SEI Cyber Talk episode explains how inverse reinforcement learning can be effective for teaching agents to perform complex tasks with many states and actions.
Publisher

Software Engineering Institute

Abstract

Inverse reinforcement learning (IRL) is a formalization of imitation learning, in which an agent learns a task by observing how it is done. The difference between IRL and simple imitation learning is that, in addition to taking note of the actions and decisions needed to perform a task, IRL also associates those actions with the rewards that motivate them. By recovering those rewards, IRL enables an agent to apply the decisions it learned from demonstrations to states it has not yet observed. Ritwik Gupta and Eric Heim give an overview of how this learning concept works, and they discuss the potential of using IRL to develop technologies such as self-driving cars, as well as some of its limitations.
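The sketch below is not from the talk; it is a minimal illustration of the idea described above, using a simplified feature-matching approach (in the spirit of Abbeel and Ng's apprenticeship learning) on a toy chain world with a linear reward over one-hot state features. The environment, demonstrations, and hyperparameters are all illustrative assumptions.

```python
# Toy sketch of IRL via feature matching on a small deterministic chain world.
# The learner never sees the true reward; it only sees expert trajectories and
# adjusts its reward estimate until its own behavior matches the expert's.
import numpy as np

N_STATES, N_ACTIONS = 5, 2   # chain of 5 states; actions: 0 = left, 1 = right
GAMMA, HORIZON = 0.9, 10

def step(s, a):
    """Deterministic chain dynamics: move left or right, clipped at the ends."""
    return max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)

def feature_expectations(trajectories):
    """Discounted average of one-hot state features along trajectories."""
    mu = np.zeros(N_STATES)
    for traj in trajectories:
        for t, s in enumerate(traj):
            mu[s] += GAMMA ** t
    return mu / len(trajectories)

def optimal_policy(reward):
    """Finite-horizon value iteration under the current reward estimate."""
    V = np.zeros(N_STATES)
    for _ in range(HORIZON):
        Q = np.array([[reward[step(s, a)] + GAMMA * V[step(s, a)]
                       for a in range(N_ACTIONS)] for s in range(N_STATES)])
        V = Q.max(axis=1)
    return Q.argmax(axis=1)  # greedy action per state

def rollout(policy, start=0):
    s, traj = start, []
    for _ in range(HORIZON):
        traj.append(s)
        s = step(s, policy[s])
    return traj

# "Expert" demonstrations: always move right, toward the last state.
expert_trajs = [rollout(np.ones(N_STATES, dtype=int))]
mu_expert = feature_expectations(expert_trajs)

# IRL loop: nudge the reward weights so the learner's discounted feature
# expectations match the expert's, then re-plan under the new reward.
w = np.zeros(N_STATES)
for _ in range(50):
    policy = optimal_policy(w)
    mu_learner = feature_expectations([rollout(policy)])
    w += 0.1 * (mu_expert - mu_learner)   # feature-matching gradient step

print("Recovered reward weights:", np.round(w, 2))
print("Learned policy (0=left, 1=right):", optimal_policy(w))
```

Because the recovered reward is defined over state features rather than over the specific demonstrated trajectories, re-planning with it lets the agent act sensibly from start states the expert never visited, which is the generalization benefit discussed in the episode.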

SEI Cyber Talks are also available on Apple Podcasts (https://podcasts.apple.com/us/podcast/id1455386915), SoundCloud (https://soundcloud.com/cmu-sei-cybertalks), and Spotify.