Date of this Version
September 1987
As observers move through the environment or shift their direction of gaze, the world moves past them. In addition, some objects may move differently from the static background, undergoing either rigid-body or nonrigid (e.g., turbulent) motions. This dissertation discusses several models for motion perception. The models rely on first measuring motion energy, a multi-resolution representation of motion information extracted from image sequences.
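"Motion energy" here refers to the squared, summed outputs of quadrature pairs of linear filters, a response that is selective for a spatiotemporal frequency band but insensitive to local phase. A minimal 1-D sketch in NumPy (the function name and parameter choices are illustrative, not the dissertation's actual filter bank):

```python
import numpy as np

def motion_energy_1d(signal, freq, sigma):
    """Phase-invariant 'energy' from a quadrature pair of Gabor filters.

    Convolving with an even (cosine-phase) and odd (sine-phase) Gabor
    filter, squaring, and summing gives a response that reflects how
    much power the signal has near the filter's tuned frequency, but
    does not oscillate with the signal's local phase.
    """
    t = np.arange(-3 * sigma, 3 * sigma + 1)
    envelope = np.exp(-t**2 / (2.0 * sigma**2))
    even = envelope * np.cos(2 * np.pi * freq * t)  # cosine-phase filter
    odd = envelope * np.sin(2 * np.pi * freq * t)   # sine-phase filter
    r_even = np.convolve(signal, even, mode="same")
    r_odd = np.convolve(signal, odd, mode="same")
    return r_even**2 + r_odd**2                     # phase-invariant energy
```

A filter tuned to a sinusoid's frequency yields a large, nearly constant energy over the signal's interior, while a mistuned filter yields almost none; the full models apply such filters over space and time at multiple resolutions.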
The image flow model combines the outputs of a set of spatiotemporal motion-energy filters to estimate image velocity, consonant with current views regarding the neurophysiology and psychophysics of motion perception. A parallel implementation computes a distributed representation of image velocity that encodes both a velocity estimate and the uncertainty in that estimate. In addition, a numerical measure of image-flow uncertainty is derived.
The egomotion model poses the detection of moving objects and the recovery of depth from motion as sensor fusion problems, which require combining information from different sensors in the presence of noise and uncertainty. Image sequences are segmented by finding regions that correspond to entire objects moving differently from the stationary background.
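Treating depth recovery as sensor fusion means combining measurements that carry different uncertainties. A standard building block for this (my illustration, not necessarily the dissertation's exact formulation) is inverse-variance weighting of independent Gaussian measurements:

```python
def fuse(m1, var1, m2, var2):
    """Fuse two independent Gaussian measurements of the same quantity.

    Each measurement is weighted by the inverse of its variance, so the
    more certain sensor dominates, and the fused variance is smaller
    than either input variance.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused_mean = (w1 * m1 + w2 * m2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused_mean, fused_var
```

For example, fusing equally uncertain measurements of 2.0 and 4.0 gives 3.0 with half the variance, while making the first measurement four times more certain pulls the fused estimate toward it.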
The turbulent flow model uses a fractal-based model of turbulence and estimates the fractal scaling parameter of fractal image sequences from the outputs of motion-energy filters. Preliminary results demonstrate the model's potential for discriminating image regions based on fractal scaling.
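Fractal (1/f-type) signals have power spectra that fall off as a power law, so the scaling parameter can be read off as the slope of the log power spectrum against log frequency. A 1-D NumPy sketch (illustrative only; the dissertation estimates the parameter from motion-energy filter outputs rather than a raw periodogram):

```python
import numpy as np

def make_fractal(n, beta, rng):
    """Synthesize a 1/f^beta signal by spectral shaping of random phases."""
    freqs = np.fft.rfftfreq(n)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)   # power falls off as f^-beta
    phases = rng.uniform(0.0, 2.0 * np.pi, freqs.size)
    phases[0] = phases[-1] = 0.0           # keep DC and Nyquist bins real
    return np.fft.irfft(amp * np.exp(1j * phases), n)

def estimate_beta(signal):
    """Estimate the fractal scaling parameter as minus the slope of the
    log power spectrum regressed against log frequency."""
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size)
    mask = freqs > 0                       # exclude the DC bin from the fit
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask]), 1)
    return -slope
```

Regions with different scaling parameters would then yield different spectral slopes, which is the kind of discrimination the preliminary results address.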
David J. Heeger, "Models for Motion Perception", September 1987.
Date Posted: 24 April 2008