Extending Linear System Models to Characterize the Performance Bounds of a Fixating Active Vision System
General Robotics, Automation, Sensing and Perception Laboratory
If active vision systems are to be used reliably in practical applications, it is crucial to understand their limits and failure modes. In the work presented here, we derive, theoretically and experimentally, bounds on the performance of an active vision system in a fixation task. In particular, we characterize the tracking limits imposed by the finite field of view. Two classes of target motion are examined: sinusoidal motions, representative of targets with high turning rates, and constant-velocity motions, representative of slowly varying target movements. For each class of motion, we identify a linear model of the fixating system from measurements on a real active vision system and analyze the range of target motions that can be handled with a given field of view. To illustrate the utility of such performance bounds, we sketch how tracking performance can be maximized by dynamically adapting the optical parameters of the system to the characteristics of the target motion.

The originality of our work arises from combining the theoretical analysis of a complete active vision system with rigorous performance measurements on the real system. We generate repeatable and controllable target motions with the help of two robot manipulators and measure the real-time performance of the system. The experimental results are used to verify or identify a linear model of the active vision system. A major difference from related work lies in analyzing the limits of the linear models that we develop. Active vision systems have been modeled as linear systems many times before, but the performance limits at which the models break down and the system loses its target have so far attracted little attention. With our work we hope to demonstrate how knowledge of such limits can be used to extend the performance of an active vision system.
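As a minimal illustrative sketch (not the paper's actual model), the idea of a field-of-view bound for sinusoidal target motion can be expressed with an assumed linear closed-loop model: if the open-loop transfer function is, say, L(s) = K / (s(1 + τs)), the steady-state tracking-error amplitude for a target A·sin(ωt) is A·|1/(1 + L(jω))|, and fixation is maintained only while this error stays within half the field of view. The gain K and time constant τ below are placeholder values, not identified parameters from the system described here.

```python
import numpy as np

def error_amplitude(A, omega, K=20.0, tau=0.05):
    """Steady-state tracking-error amplitude for a target A*sin(omega*t).

    Assumes a simple open-loop model L(jw) = K / (jw * (1 + jw*tau));
    K and tau are illustrative placeholders, not identified values.
    """
    jw = 1j * omega
    L = K / (jw * (1 + jw * tau))
    # Error transfer function of the unity-feedback loop: E/R = 1 / (1 + L)
    return A * abs(1.0 / (1.0 + L))

def stays_in_fov(A, omega, half_fov, **kw):
    """True if the predicted error amplitude fits within half the field of view."""
    return error_amplitude(A, omega, **kw) <= half_fov
```

Under such a model, the error grows with target frequency, so for a fixed field of view there is a maximum amplitude-frequency combination the system can fixate on; this is the kind of bound the abstract refers to, here only sketched under the stated assumptions.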