Departmental Papers (ESE)

Abstract

This paper presents a framework for visual servoing that guarantees convergence to a visible goal from almost every initially visible configuration while maintaining full view of all the feature points along the way. The method applies to first- and second-order fully actuated plant models. The solution entails three components: a model for the "occlusion-free" configurations; a change of coordinates from image to model coordinates; and a navigation function for the model space. We present three example applications of the framework, along with experimental validation of its practical efficacy.

Document Type

Journal Article

Subject Area

GRASP, Kodlab

Date of this Version

August 2002

Comments

Copyright 2002 IEEE. Reprinted from IEEE Transactions on Robotics and Automation, Volume 18, Issue 4, August 2002, pages 521-533.

This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

NOTE: At the time of publication, Daniel Koditschek was affiliated with the University of Michigan. Currently, he is a faculty member of the School of Engineering at the University of Pennsylvania.

Keywords

dynamics, finite field of view (FOV), navigation functions, obstacle avoidance, occlusions, vision-based control, visual servoing


Date Posted: 13 March 2008

This document has been peer reviewed.