Reinforcement learning for mobile robot controllers: Theory and experiments
A fundamental challenge in robotics is controller design. While designing a robot's individual behaviors is straightforward, tuning those behaviors and designing a controller that can select among them is difficult. Typically, behaviors are assigned and tuned by a human programmer, but for a realistic robot scenario this is infeasible for several reasons. The robot's state space is likely to be extensive; consequently, manual assignment and tuning can be time-consuming. Manual assignment requires extensive knowledge of the robot's scenario and environment; in the complex, dynamic situations in which robots are most useful, such knowledge is unlikely to be available. Both manual assignment and behavior tuning are prone to errors, which often result in failure. Enabling the robot to perform controller design itself can greatly increase its autonomy. This is the problem we address in this dissertation. Simply stated: how can we use machine learning to improve robot controller design?

We break the problem of designing mobile robot controllers into three components: (1) Which behavior should be selected? (Given the state information, which behavior should be performed?) (2) Under what circumstances should that behavior be selected? (Where in the state space are the boundaries for each behavior?) (3) What are the specific details of that behavior? (What are the parameters that govern the behavior?) In this dissertation, we use machine learning techniques to address each of these components. To model our system, we use a hybrid systems approach that enables us to incorporate both discrete and continuous states as well as physical and emotional information. Furthermore, our modeling approach provides a framework upon which we can apply reinforcement learning.
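The third component, tuning the parameters that govern a behavior, can be approached with simple local search. A minimal sketch, assuming a single hypothetical behavior parameter (a "wander" speed) and a stand-in performance measure; the names and the objective are illustrative, not taken from the dissertation:

```python
import random

random.seed(1)

# Hypothetical behavior parameter: the speed of a "wander" behavior.
# performance() stands in for measuring the robot's reward over a trial run;
# here it is an assumed unimodal objective peaking at speed = 0.6.
def performance(speed):
    return -(speed - 0.6) ** 2

def hill_climb(speed, step=0.05, trials=100):
    """Perturb the parameter; keep the change only if performance improves."""
    best = performance(speed)
    for _ in range(trials):
        candidate = speed + random.choice([-step, step])
        score = performance(candidate)
        if score > best:
            speed, best = candidate, score
    return speed

tuned = hill_climb(0.2)
print(round(tuned, 2))  # converges near the assumed optimum of 0.6
```

On a real robot, performance() would be estimated from noisy trial runs rather than evaluated exactly, but the accept-if-better loop is the same.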
We use three reinforcement learning algorithms: an actor-critic approach for learning which behavior to select, a hill-climbing approach for adjusting the parameters of a behavior and for learning the behavior's boundaries in the state space, and clustering, which we also use for learning state-space boundaries. We implement our approach in several different scenarios and demonstrate the improved functionality and decreased dependency of a robot equipped with the capability to learn.
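The actor-critic scheme for behavior selection can be sketched in a few lines. This is a generic TD(0) actor-critic over a small discrete state space, not the dissertation's implementation; the behavior names, the coarse four-state discretization, and the toy reward dynamics are all assumptions for illustration:

```python
import math
import random

random.seed(0)

# Hypothetical discrete behaviors; names are illustrative only.
BEHAVIORS = ["wander", "avoid_obstacle", "seek_goal"]
N_STATES = 4                          # assumed coarse discretization of the state space
ALPHA, BETA, GAMMA = 0.1, 0.1, 0.9    # critic rate, actor rate, discount factor

values = [0.0] * N_STATES                                   # critic: V(s)
prefs = [[0.0] * len(BEHAVIORS) for _ in range(N_STATES)]   # actor: preferences p(s, b)

def policy(state):
    """Softmax over the behavior preferences for this state."""
    exps = [math.exp(p) for p in prefs[state]]
    total = sum(exps)
    return [e / total for e in exps]

def select_behavior(state):
    """Sample a behavior index from the softmax policy."""
    r, acc = random.random(), 0.0
    probs = policy(state)
    for b, p in enumerate(probs):
        acc += p
        if r < acc:
            return b
    return len(probs) - 1

def update(state, behavior, reward, next_state):
    """TD(0) update: the critic learns V; the actor shifts preferences by the TD error."""
    td_error = reward + GAMMA * values[next_state] - values[state]
    values[state] += ALPHA * td_error
    prefs[state][behavior] += BETA * td_error

# Toy interaction loop: reward +1 only when "seek_goal" is chosen in state 0
# (assumed dynamics; a real robot would get state and reward from its sensors).
for _ in range(2000):
    s = random.randrange(N_STATES)
    b = select_behavior(s)
    r = 1.0 if (s == 0 and BEHAVIORS[b] == "seek_goal") else 0.0
    update(s, b, r, random.randrange(N_STATES))

print(policy(0))  # "seek_goal" ends up with the highest probability in state 0
```

The critic's TD error serves double duty: it improves the value estimate and tells the actor whether the selected behavior did better or worse than expected in that state.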
Subject areas: Engineering, Mechanical; Engineering, Robotics
Meghann M. Lomas, "Reinforcement learning for mobile robot controllers: Theory and experiments" (January 1, 2006). Dissertations available from ProQuest.