Departmental Papers (CIS)

Date of this Version

11-17-2022

Document Type

Conference Paper

Abstract

Long-horizon robot learning tasks with sparse rewards pose a significant challenge for current reinforcement learning algorithms. A key feature enabling humans to learn challenging control tasks is that they often receive expert intervention, which lets them understand the high-level structure of the task before mastering low-level control actions. We propose a framework for leveraging expert intervention to solve long-horizon reinforcement learning tasks. We consider option templates, which are specifications encoding a potential option that can be trained using reinforcement learning. We formulate expert intervention as allowing the agent to execute option templates before learning an implementation. This enables the agent to use an option before committing costly resources to learning it. We evaluate our approach on three challenging reinforcement learning problems, showing that it outperforms state-of-the-art approaches by two orders of magnitude.
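To make the abstract's central idea concrete, here is a minimal conceptual sketch (not the authors' implementation) of an option template: a specification whose execution falls back to an expert-provided behavior until a learned implementation is available. All names (OptionTemplate, expert_behavior, learned_policy, run_option) are hypothetical illustrations introduced only for this example.

from dataclasses import dataclass
from typing import Any, Callable, Optional


@dataclass
class OptionTemplate:
    """Specification of a potential option.

    While `learned_policy` is None, executing the template uses the
    expert-provided stand-in behavior, letting the agent exploit the
    task's high-level structure before the option is trained with RL.
    """
    name: str
    initiation: Callable[[Any], bool]          # states where the option may start
    termination: Callable[[Any], bool]         # states where the option ends
    expert_behavior: Callable[[Any], Any]      # expert intervention / stand-in controller
    learned_policy: Optional[Callable[[Any], Any]] = None  # filled in after training

    def act(self, state: Any) -> Any:
        """Pick a low-level action using the best available controller."""
        policy = self.learned_policy or self.expert_behavior
        return policy(state)


def run_option(template: OptionTemplate, state: Any,
               step: Callable[[Any, Any], Any], max_steps: int = 100) -> Any:
    """Execute the option until its termination condition or a step limit."""
    for _ in range(max_steps):
        if template.termination(state):
            break
        state = step(state, template.act(state))
    return state

In this sketch, a high-level agent can compose and execute such templates immediately, and `learned_policy` is swapped in only once a particular option is worth the training cost.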

Subject Area

CPS Safe Autonomy, CPS Machine Learning

Publication Source

6th Conference on Robot Learning

Keywords

Sample-Efficient Reinforcement Learning, Expert Intervention, Options, Planning with Primitives

Date Posted: 28 December 2022

This document has been peer reviewed.