Saliency Based Opportunistic Search for Object Part Extraction and Labeling
Abstract
We study the task of object part extraction and labeling, which seeks to understand objects beyond simply identifying their bounding boxes. We start from a bottom-up segmentation of images and search for correspondences between object parts in a few shape models and segments in images. Segments comprising different object parts in an image are usually not equally salient due to uneven contrast, illumination conditions, clutter, occlusion, and pose changes. Moreover, object parts may have different scales, and some parts are only distinctive and recognizable at a large scale. We therefore use a multi-scale shape representation of objects and their parts, figural contextual information of the whole object, and semantic contextual information for parts. Instead of searching over a large segmentation space, we present a saliency-based opportunistic search framework that explores the bottom-up segmentation by gradually expanding and bounding the search domain. We tested our approach on a challenging statue face dataset and three human face datasets. Results show that our approach significantly outperforms Active Shape Models while using far fewer exemplars. Our framework can also be applied to other object categories.
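To make the idea of "gradually expanding and bounding the search domain" concrete, below is a minimal illustrative sketch of a saliency-driven greedy search over bottom-up segments. It is not the authors' implementation: the data structures and functions (`Candidate`, `saliency`, `neighbors`, `match_score`) are assumed for illustration, and the expand/bound rule is a simplified stand-in for the paper's multi-scale, context-aware matching.

```python
# Hypothetical sketch of a saliency-ordered opportunistic search over segments.
# All names (Candidate, match_score, neighbors, etc.) are assumptions, not the
# paper's actual code.
import heapq
from dataclasses import dataclass, field


@dataclass(order=True)
class Candidate:
    priority: float                       # negative saliency, so most salient pops first
    segment_id: int = field(compare=False)


def opportunistic_search(saliency, neighbors, match_score):
    """Greedily grow a set of segments, visiting the most salient ones first.

    saliency    : dict segment_id -> saliency value (higher = more salient)
    neighbors   : dict segment_id -> iterable of adjacent segment ids
    match_score : callable(frozenset of segment ids) -> shape-model matching score
    Returns the best-scoring segment set found and its score.
    """
    # Seed the frontier with all segments, ordered by saliency.
    frontier = [Candidate(-s, sid) for sid, s in saliency.items()]
    heapq.heapify(frontier)

    visited = set()
    domain = set()                        # the gradually expanded search domain
    best_set, best_score = frozenset(), float("-inf")

    while frontier:
        cand = heapq.heappop(frontier)
        if cand.segment_id in visited:
            continue
        visited.add(cand.segment_id)

        # Tentatively add the segment and score the domain against the models.
        domain.add(cand.segment_id)
        score = match_score(frozenset(domain))

        if score > best_score:
            # Expand: keep the segment and enqueue its neighbors as candidates.
            best_set, best_score = frozenset(domain), score
            for nb in neighbors.get(cand.segment_id, ()):
                if nb not in visited:
                    heapq.heappush(frontier, Candidate(-saliency[nb], nb))
        else:
            # Bound: the segment did not improve the match, so drop it.
            domain.discard(cand.segment_id)

    return best_set, best_score
```

The sketch captures only the control flow suggested by the abstract: salient segments anchor the search early, and less salient or non-improving segments are pruned instead of being enumerated exhaustively over the full segmentation space.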