Departmental Papers (MEAM)

Document Type

Conference Paper

Subject Area

GRASP

Date of this Version

October 2003

Comments

Copyright © 2003 IEEE. Reprinted from Proceedings of the 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), volume 1, pages 406-411.
Publisher URL: http://ieeexplore.ieee.org/xpl/tocresult.jsp?isNumber=27983&page=4

This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Abstract

Robot programmers can often quickly program a robot to approximately execute a task under specific environmental conditions. However, achieving robust performance under more general conditions is significantly more difficult. We propose a framework that starts with an existing control system and uses reinforcement feedback from the environment to autonomously improve the controller's performance. We use the Policy Gradient Reinforcement Learning (PGRL) framework, which estimates a gradient of the reward in controller-parameter space, allowing the controller parameters to be incrementally updated until the controller achieves locally optimal performance. Our approach is experimentally verified on a Cye robot executing a room-entry and observation task, showing a significant reduction in task execution time and improved robustness to unmodelled changes in the environment.
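
The abstract's core loop, estimating a reward gradient in controller-parameter space and stepping the parameters along it, can be illustrated with a minimal Python sketch. This assumes a finite-difference gradient estimator, one common PGRL instantiation; the function names, step sizes, and the evaluate() reward interface are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def estimate_gradient(evaluate, theta, epsilon=0.05):
        # Estimate the reward gradient in controller-parameter space by
        # perturbing each parameter and re-running the task (finite differences).
        # evaluate(theta) runs the controller on the task, returns total reward.
        base = evaluate(theta)
        grad = np.zeros_like(theta)
        for i in range(len(theta)):
            perturbed = theta.copy()
            perturbed[i] += epsilon
            grad[i] = (evaluate(perturbed) - base) / epsilon
        return grad

    def pgrl_update(evaluate, theta, alpha=0.1, iterations=50):
        # Incrementally climb the estimated reward gradient, moving the
        # controller toward locally optimal performance.
        for _ in range(iterations):
            theta = theta + alpha * estimate_gradient(evaluate, theta)
        return theta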

Keywords

robotics, controllers, programming, Policy Gradient Reinforcement Learning (PGRL)

Date Posted: 15 October 2004

This document has been peer reviewed.