Diversity-Augmented Intrinsic Motivation for Deep Reinforcement Learning

Tianhong Dai*, Yali Du, Meng Fang, Anil Anthony Bharath

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In many real-world problems, the reward signals received by agents are delayed or sparse, which makes it challenging to train a reinforcement learning (RL) agent. An intrinsic reward signal can help an agent explore such environments in search of novel states. In this work, we propose a general, end-to-end diversity-augmented intrinsic motivation method for deep reinforcement learning that encourages the agent to explore new states and automatically provides denser rewards. Specifically, we measure the diversity of adjacent states under a model of state sequences based on a determinantal point process (DPP); this is coupled with a straight-through gradient estimator to enable end-to-end differentiability. The proposed approach is evaluated comprehensively on MuJoCo and the Arcade Learning Environment (Atari and SuperMarioBros). The experiments show that an intrinsic reward based on the diversity measure derived from the DPP model accelerates the early stages of training in Atari games and SuperMarioBros. In MuJoCo, the approach improves on prior techniques in the standard reward setting and achieves state-of-the-art performance on 12 out of 15 tasks with delayed rewards.
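To make the abstract's two ingredients concrete, the following is a minimal sketch rather than the paper's released implementation. It assumes a learned state encoder (not shown), a cosine-similarity kernel, and a fixed window of adjacent state embeddings; the function names `dpp_diversity_bonus` and `straight_through` are illustrative, and the straight-through function below is the standard formulation of that estimator, not necessarily the paper's exact variant.

```python
import torch
import torch.nn.functional as F

def dpp_diversity_bonus(embeddings: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Diversity score for a (T, d) window of consecutive state embeddings,
    usable as an intrinsic reward (illustrative, not the authors' code)."""
    # Cosine-similarity kernel: L2-normalise rows, then form the Gram matrix.
    z = F.normalize(embeddings, dim=-1)
    L = z @ z.t()
    # Small jitter keeps the log-determinant finite when states are near-duplicates.
    L = L + eps * torch.eye(L.shape[0], device=L.device)
    # Under a DPP with kernel L, log det(L) is (up to normalisation) the
    # log-probability of selecting the whole window: more mutually dissimilar
    # states yield a larger determinant, hence a larger bonus.
    return torch.logdet(L)

def straight_through(hard: torch.Tensor, soft: torch.Tensor) -> torch.Tensor:
    """Standard straight-through trick: the forward pass uses `hard`, while
    gradients flow through `soft` as if the operation were the identity."""
    return soft + (hard - soft).detach()

# Usage sketch: score a window of 8 adjacent state embeddings.
window = torch.randn(8, 64)          # stand-in for encoder outputs
r_int = dpp_diversity_bonus(window)  # scalar intrinsic reward
```

Because `torch.logdet` is differentiable, the bonus can be backpropagated through the encoder; the straight-through estimator is the standard device for passing gradients through any non-differentiable step (such as a discrete selection) in such a pipeline.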
Original language: English
Pages (from-to): 396-406
Number of pages: 11
Journal: Neurocomputing
Volume: 468
Early online date: 2 Nov 2021
DOIs
Publication status: Published - 11 Jan 2022

Keywords

  • Deep reinforcement learning
  • Curiosity-driven exploration
  • Determinantal point process
