Analysing Deep Reinforcement Learning Agents Trained with Domain Randomisation

Tianhong Dai*, Kai Arulkumaran*, Tamara Gerbert, Samyakh Tukra, Feryal Behbahani, Anil Anthony Bharath

*Corresponding authors for this work

Research output: Contribution to journal › Article › peer-review


Abstract

Deep reinforcement learning (DRL) has the potential to train robots to perform complex tasks in the real world without requiring accurate models of the robot or its environment. However, agents trained with these algorithms typically lack the explainability of more traditional control methods. In this work, we use a combination of out-of-distribution generalisation tests and post hoc interpretability methods in order to understand what strategies DRL-trained agents use to perform a reaching task. To do so, we train agents under different conditions, using comparison to better interpret both quantitative and qualitative results; this allows us to not only provide local explanations, but also broad categorisations of behaviour. A key aim of our work is to understand how agents trained with visual domain randomisation (DR)—a technique which allows agents to generalise from simulation-based training to the real world—differ from agents trained without. Our results show that the primary outcome of DR is more robust, entangled representations, accompanied by greater spatial structure in convolutional filters. Furthermore, even with an improved saliency method introduced in this work, we show that qualitative studies may not always correspond with quantitative measures, necessitating the combination of inspection tools in order to provide sufficient insights into the behaviour of trained agents. We conclude with recommendations for applying interpretability methods to DRL agents.
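The visual domain randomisation described in the abstract can be illustrated with a minimal sketch: during simulation-based training, the agent's image observations are perturbed with randomly sampled visual parameters (here, per-channel colour gains and pixel noise). The function name, gain ranges, and noise magnitudes below are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def randomise_observation(obs, rng):
    """Illustrative visual domain randomisation: perturb per-channel
    colour gains and add uniform pixel noise to an RGB observation.

    `obs` is an HxWx3 uint8 image; the ranges below are arbitrary
    example choices, not the paper's settings.
    """
    gains = rng.uniform(0.6, 1.4, size=3)          # random per-channel colour gain
    noise = rng.uniform(-10.0, 10.0, size=obs.shape)  # small pixel-level noise
    randomised = obs.astype(np.float32) * gains + noise
    return np.clip(randomised, 0, 255).astype(np.uint8)

# Example: randomise a placeholder 64x64 RGB observation once per step/episode.
rng = np.random.default_rng(0)
obs = np.full((64, 64, 3), 128, dtype=np.uint8)
randomised_obs = randomise_observation(obs, rng)
```

Training on many such randomised renderings encourages the agent to learn features that are robust to visual variation, which is what allows transfer from simulation to the real world.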
Original language: English
Pages (from-to): 143-165
Number of pages: 23
Journal: Neurocomputing
Volume: 493
Early online date: 19 Apr 2022
DOIs
Publication status: Published - 7 Jul 2022

Keywords

  • Deep reinforcement learning
  • Generalisation
  • Interpretability
  • Saliency

