Augmenting Reinforcement Learning with Behavior Primitives for Diverse Manipulation Tasks
ICRA 2022
Abstract
Realistic manipulation tasks require a robot to interact with an environment
over a prolonged sequence of motor actions. While deep reinforcement learning
methods have recently emerged as a promising paradigm for automating
manipulation behaviors, they usually fall short in long-horizon tasks due to
the exploration burden. This work introduces Manipulation Primitive-augmented
reinforcement Learning (MAPLE), a learning framework that augments standard
reinforcement learning algorithms with a pre-defined library of behavior
primitives. These behavior primitives are robust functional modules specialized
in achieving manipulation goals, such as grasping and pushing. To use these
heterogeneous primitives, we develop a hierarchical policy that invokes the
primitives and instantiates their executions with input parameters. We
demonstrate that MAPLE outperforms baseline approaches by a significant margin
on a suite of simulated manipulation tasks. We also quantify the compositional
structure of the learned behaviors and highlight our method's ability to
transfer policies to new task variants and to physical hardware. Videos and
code are available at https://ut-austin-rpl.github.io/maple
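The two-level decision described in the abstract can be sketched as follows. This is a minimal illustrative mock, not the authors' implementation: the primitive names, parameter dimensions, and function names (`task_policy`, `parameter_policy`, `act`) are all assumptions standing in for learned networks.

```python
import random

# Hypothetical primitive library: each behavior primitive takes a fixed number
# of continuous input parameters (the dimensions below are assumptions).
PRIMITIVES = {
    "grasp": 4,   # e.g. a grasp pose (x, y, z, yaw)
    "push": 5,    # e.g. start pose plus push direction and distance
    "reach": 3,   # e.g. a target end-effector position (x, y, z)
    "atomic": 2,  # low-level fallback action for fine-grained control
}

def task_policy(observation):
    """Stand-in for the learned high-level policy: select a primitive."""
    return random.choice(list(PRIMITIVES))

def parameter_policy(observation, primitive):
    """Stand-in for the learned low-level policy: instantiate the chosen
    primitive with continuous parameters in a normalized range [-1, 1]."""
    dim = PRIMITIVES[primitive]
    return [random.uniform(-1.0, 1.0) for _ in range(dim)]

def act(observation):
    """One hierarchical decision: pick a primitive, then its parameters."""
    primitive = task_policy(observation)
    params = parameter_policy(observation, primitive)
    return primitive, params

primitive, params = act(observation={"eef_pos": (0.0, 0.0, 0.2)})
print(primitive, len(params))
```

Each call to `act` thus produces one temporally extended action (a parameterized primitive execution), which is what lets the agent explore long-horizon tasks far more efficiently than per-timestep motor commands.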