Misspecification in Inverse Reinforcement Learning
AAAI 2023
Abstract
The aim of Inverse Reinforcement Learning (IRL) is to infer a reward function R
from a policy π. To do this, we need a model of how π relates to
R. In the current literature, the most common models are optimality,
Boltzmann rationality, and causal entropy maximisation. One of the primary
motivations behind IRL is to infer human preferences from human behaviour.
However, the true relationship between human preferences and human behaviour is
much more complex than any of the models currently used in IRL. This means that
they are misspecified, which raises the worry that they might lead to unsound
inferences if applied to real-world data. In this paper, we provide a
mathematical analysis of how robust different IRL models are to
misspecification, and answer precisely how the demonstrator policy may differ
from each of the standard models before that model leads to faulty inferences
about the reward function R. We also introduce a framework for reasoning
about misspecification in IRL, together with formal tools that can be used to
easily derive the misspecification robustness of new IRL models.
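
For reference, the three behavioural models named in the abstract are commonly formalised as follows. This is an illustrative sketch using standard IRL notation rather than this paper's own definitions: β > 0 is an inverse-temperature parameter, α > 0 weights the entropy bonus, and Q*_R and Q^soft_R denote the optimal and soft Q-functions of the reward function R.

\begin{align*}
  &\text{Optimality:} && \pi(a \mid s) > 0 \;\Rightarrow\; a \in \arg\max_{a'} Q^{*}_{R}(s, a'), \\
  &\text{Boltzmann rationality:} && \pi(a \mid s) \propto \exp\!\bigl(\beta\, Q^{*}_{R}(s, a)\bigr), \\
  &\text{Maximal causal entropy:} && \pi(a \mid s) \propto \exp\!\bigl(Q^{\mathrm{soft}}_{R}(s, a)/\alpha\bigr).
\end{align*}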