D-REX: Ranking-Based Reward Extrapolation without Rankings

Abstract: The performance of imitation learning is typically upper-bounded by the performance of the demonstrator. Recent empirical results show that imitation learning via ranked demonstrations allows for better-than-demonstrator performance; however, ranked demonstrations may be difficult to obtain, and little is known theoretically about when such methods can be expected to outperform the demonstrator. To address these issues, we first contribute a sufficient condition for when better-than-demonstrator performance is possible and discuss why ranked demonstrations can contribute to better-than-demonstrator performance. Building on this theory, we then introduce Disturbance-based Reward Extrapolation (D-REX), a ranking-based imitation learning method that injects noise into a policy learned through behavioral cloning to automatically generate ranked demonstrations. By generating rankings automatically, ranking-based imitation learning can be applied in traditional imitation learning settings where only unlabeled demonstrations are available. We empirically validate our approach on standard MuJoCo and Atari benchmarks and show that D-REX can utilize automatic rankings to significantly surpass the performance of the demonstrator and outperform standard imitation learning approaches. D-REX is the first imitation learning approach to achieve significant extrapolation beyond the demonstrator’s performance without additional side-information or supervision, such as rewards or human preferences.

Keywords: Imitation learning, Reward learning, Ranked demonstrations

How this could be applied to autonomous driving (I think):

  1. Tesla (or another company) collects a large set of human driving demonstrations of, say, drivers traversing urban intersections.

  2. Tesla does behavioural cloning/supervised imitation learning with these demonstrations. The Autopilot agent learns a policy (see the behavioural-cloning sketch after this list).

  3. Tesla injects an increasing amount of noise into the policy, producing progressively worse demonstrations in simulation or in structured testing on private roads. It now has an automatically ranked dataset of demonstrations (see the noise-injection sketch after this list).

  4. The Autopilot agent learns a reward function from these ranked demonstrations and can now extrapolate beyond the best human demonstrations (see the reward-learning sketch after this list).

  5. The learned reward can then be used for reinforcement learning, wherever that takes place: in simulation, in structured testing, or fleet-wide (probably off-policy). The learned reward can also be combined with other rewards, like time or distance between human interventions (see the reward-wrapper sketch after this list).
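
A minimal behavioural-cloning sketch for step 2. This is not Tesla's actual stack; the network sizes, tensor shapes, and continuous-control action space are assumptions for illustration.

```python
import torch
import torch.nn as nn

class BCPolicy(nn.Module):
    """Maps an observed state to a continuous action, e.g. steering and throttle."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def train_bc(policy, demo_states, demo_actions, epochs=50, lr=1e-3):
    """Supervised regression of demonstrated actions from states (behavioural cloning)."""
    opt = torch.optim.Adam(policy.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(policy(demo_states), demo_actions)
        loss.backward()
        opt.step()
    return policy
```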
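
Step 3 (noise injection) could look roughly like the sketch below. Assumptions: a Gymnasium-style `env` (reset returns `(obs, info)`, step returns a 5-tuple), and epsilon-greedy mixing between the cloned policy and random actions as the noise model; the noise levels and rollout counts are illustrative.

```python
import torch

def rollout_with_noise(env, policy, epsilon, horizon=1000):
    """Roll out the cloned policy, replacing each action with a random one w.p. epsilon."""
    obs, _ = env.reset()
    trajectory = []
    for _ in range(horizon):
        if torch.rand(()) < epsilon:
            action = env.action_space.sample()           # injected noise
        else:
            obs_t = torch.as_tensor(obs, dtype=torch.float32)
            action = policy(obs_t).detach().numpy()      # cloned behaviour
        trajectory.append(obs)
        obs, _, terminated, truncated, _ = env.step(action)
        if terminated or truncated:
            break
    return trajectory

def generate_ranked_demos(env, policy, noise_levels=(0.0, 0.25, 0.5, 0.75, 1.0), per_level=5):
    """Less noise is assumed to give better behaviour, so the noise level induces a ranking."""
    ranked = []  # ranked[i] is (assumed) better than ranked[j] whenever i < j
    for eps in noise_levels:
        ranked.append([rollout_with_noise(env, policy, eps) for _ in range(per_level)])
    return ranked
```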
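
Step 4 (reward learning from the automatic rankings) can be sketched with a Bradley-Terry / T-REX-style pairwise loss, which is the kind of objective D-REX builds on. Names, shapes, and hyperparameters are illustrative.

```python
import random
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardNet(nn.Module):
    def __init__(self, obs_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        """Per-state reward predictions, shape (T, 1)."""
        return self.net(states)

    def trajectory_return(self, states: torch.Tensor) -> torch.Tensor:
        """Predicted return of a trajectory = sum of its per-state rewards."""
        return self(states).sum()

def pairwise_loss(reward_net, traj_worse, traj_better):
    """Cross-entropy loss that pushes the higher-ranked trajectory's return above the other's."""
    returns = torch.stack([reward_net.trajectory_return(traj_worse),
                           reward_net.trajectory_return(traj_better)])
    return F.cross_entropy(returns.unsqueeze(0), torch.tensor([1]))

def train_reward(reward_net, ranked, steps=5000, lr=1e-4):
    """`ranked` is the output of generate_ranked_demos: lower index = less noise = better."""
    opt = torch.optim.Adam(reward_net.parameters(), lr=lr)
    for _ in range(steps):
        i, j = sorted(random.sample(range(len(ranked)), 2))   # i is the better (less noisy) level
        better = torch.as_tensor(np.asarray(random.choice(ranked[i])), dtype=torch.float32)
        worse = torch.as_tensor(np.asarray(random.choice(ranked[j])), dtype=torch.float32)
        opt.zero_grad()
        pairwise_loss(reward_net, worse, better).backward()
        opt.step()
    return reward_net
```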
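
For step 5, one way to plug the learned reward into downstream RL is an environment wrapper that swaps in (or augments) the reward signal. This again assumes a Gymnasium-style environment; `extra_reward_fn` is a hypothetical hook standing in for hand-designed terms such as time or distance between interventions.

```python
import gymnasium as gym
import torch

class LearnedRewardWrapper(gym.Wrapper):
    """Replace the environment reward with the learned reward, optionally mixed with other terms."""
    def __init__(self, env, reward_net, extra_reward_fn=None, mix=1.0):
        super().__init__(env)
        self.reward_net = reward_net
        self.extra_reward_fn = extra_reward_fn
        self.mix = mix  # weight on the learned reward

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        with torch.no_grad():
            obs_t = torch.as_tensor(obs, dtype=torch.float32).unsqueeze(0)
            reward = self.mix * self.reward_net(obs_t).item()   # learned per-state reward
        if self.extra_reward_fn is not None:
            reward += self.extra_reward_fn(obs, info)           # e.g. intervention-based bonus
        return obs, reward, terminated, truncated, info
```

Any standard RL algorithm, on- or off-policy, can then be trained against the wrapped environment.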