A Theory for Multiresolution Signal Decomposition: The Wavelet Representation
Penn collection: General Robotics, Automation, Sensing and Perception Laboratory
Subject: Computer Sciences
Abstract
It is now well established in the computer vision literature that a multiresolution decomposition provides a useful image representation for vision algorithms. In this paper we show that the wavelet theory recently developed by the mathematician Y. Meyer enables us to understand and model the concepts of resolution and scale. In computer vision we generally do not want to analyze an image at every resolution level, because the information is redundant. After processing the signal at a resolution r0, it is more efficient to analyze only the additional details that become available at a higher resolution r1. We prove that this difference of information can be computed by decomposing the signal in a wavelet orthonormal basis, and that it can be calculated efficiently with a pyramid transform. This decomposition can also be interpreted as a division of the signal into a set of orientation-selective frequency channels. Such a decomposition is particularly well suited to computer vision applications such as signal coding, texture discrimination, edge detection, matching algorithms, and fractal analysis.
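
To make the idea of an orthonormal pyramid decomposition concrete, the following is a minimal sketch, not taken from the paper itself, of one level of a separable two-dimensional wavelet decomposition using the Haar filters (the simplest orthonormal wavelet, used here only as a stand-in for the wavelets discussed in the paper). The function name haar_pyramid_level is introduced purely for illustration. It splits an image into a coarse approximation and three detail channels that respond to different edge orientations, which is the kind of "orientation-selective frequency channel" division the abstract describes.

```python
# Minimal sketch of one level of a separable 2-D Haar wavelet pyramid.
# Assumes an image with even height and width; the filters are the
# orthonormal Haar pair (average and difference, scaled by 1/sqrt(2)).
import numpy as np

def haar_pyramid_level(image):
    """Return (approximation, detail_lh, detail_hl, detail_hh),
    each half the size of the input along both axes."""
    img = image.astype(float)
    # Low-pass / high-pass filtering and subsampling along rows.
    lo_rows = (img[0::2, :] + img[1::2, :]) / np.sqrt(2)
    hi_rows = (img[0::2, :] - img[1::2, :]) / np.sqrt(2)
    # Then along columns, yielding one coarse and three detail channels.
    approx    = (lo_rows[:, 0::2] + lo_rows[:, 1::2]) / np.sqrt(2)
    detail_lh = (lo_rows[:, 0::2] - lo_rows[:, 1::2]) / np.sqrt(2)
    detail_hl = (hi_rows[:, 0::2] + hi_rows[:, 1::2]) / np.sqrt(2)
    detail_hh = (hi_rows[:, 0::2] - hi_rows[:, 1::2]) / np.sqrt(2)
    return approx, detail_lh, detail_hl, detail_hh

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((8, 8))
    a, lh, hl, hh = haar_pyramid_level(image)
    # The transform is orthonormal, so signal energy is preserved
    # across the four channels (no information is lost or duplicated).
    print(np.allclose((a**2 + lh**2 + hl**2 + hh**2).sum(), (image**2).sum()))
```

Recursing on the approximation channel produces the full pyramid: at each level, the detail channels carry exactly the information that is lost when passing from one resolution to the next coarser one, which is the "difference of information" referred to in the abstract.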