The Importance of Non-Markovianity in Maximum State Entropy Exploration
ICML · Feb 7, 2022 · Outstanding Paper
In the maximum state entropy exploration framework, an agent interacts with a
reward-free environment to learn a policy that maximizes the entropy of the
expected state visitations it induces. Hazan et al. (2019) noted that the
class of Markovian stochastic policies is sufficient for the maximum state
entropy objective, and exploiting non-Markovianity is generally considered
pointless in this setting. In this paper, we argue that non-Markovianity is
instead paramount for maximum state entropy exploration in a finite-sample
regime. First, we recast the objective to target the expected entropy of the
induced state visitations in a single trial.
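To make the recast concrete, here is a minimal sketch of the two objectives, assuming $d^{\pi}$ denotes the empirical state visitation distribution that a policy $\pi$ induces over a single trajectory (notation adopted here for illustration, not taken verbatim from the paper):

$$\max_{\pi} \; H\big(\mathbb{E}_{\pi}[d^{\pi}]\big) \qquad \text{(infinite-trials objective; Hazan et al., 2019)}$$

$$\max_{\pi} \; \mathbb{E}_{\pi}\big[H(d^{\pi})\big] \qquad \text{(single-trial objective)}$$

Since the entropy $H$ is concave, Jensen's inequality gives $\mathbb{E}_{\pi}[H(d^{\pi})] \le H(\mathbb{E}_{\pi}[d^{\pi}])$, so the two objectives can disagree whenever the realized visitations vary across trials.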
Then, we show that the class of non-Markovian deterministic policies is
sufficient for the introduced objective, while Markovian policies suffer
non-zero regret in general. However,
we prove that the problem of finding an optimal non-Markovian policy is
NP-hard. Despite this negative result, we discuss avenues to address the
problem in a tractable way and how non-Markovian exploration could benefit the
sample efficiency of online reinforcement learning in future work.
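To see why Markovian policies can suffer non-zero regret under the single-trial objective, here is a minimal Python sketch on a toy three-state MDP. The construction is ours for illustration and is not claimed to be the paper's example: the agent starts in s0, its action sends it to s1 or s2, it is then returned to s0, and a horizon-4 trajectory therefore visits s0, X, s0, Y with X and Y chosen at the two visits to s0.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in nats) of a visitation distribution, ignoring zeros."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Single-trial objective: expected entropy of the empirical visitation
# distribution of one trajectory (s0, X, s0, Y), with X, Y in {s1, s2}.

# Non-Markovian deterministic policy: remember the first choice and pick
# the other state on the second visit to s0. Every trajectory visits
# (s0, s1, s0, s2), so its single-trial entropy is the same in every trial.
J_non_markovian = entropy([2/4, 1/4, 1/4])

# Stationary Markovian stochastic policy: it conditions only on the current
# state, so it must reuse the same action distribution (q, 1 - q) on both
# visits to s0 and cannot remember X.
def J_markovian(q):
    """Expected single-trial entropy when P(go to s1 | s0) = q."""
    return (q**2        * entropy([2/4, 2/4, 0])      # X = Y = s1
            + (1-q)**2  * entropy([2/4, 0, 2/4])      # X = Y = s2
            + 2*q*(1-q) * entropy([2/4, 1/4, 1/4]))   # X != Y

best_markovian = max(J_markovian(q) for q in np.linspace(0, 1, 1001))
print(f"non-Markovian deterministic policy: {J_non_markovian:.4f} nats")
print(f"best Markovian stochastic policy:   {best_markovian:.4f} nats")
```

The sketch prints roughly 1.0397 versus 0.8664 nats; the gap is the regret the best stationary Markovian policy suffers in a single trial. Note that q = 1/2 still matches the non-Markovian policy's expected visitations (1/2, 1/4, 1/4), which is precisely the sense in which Markovian stochastic policies remain sufficient for the infinite-trials objective of Hazan et al. (2019).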