Occlusions as a Guide for Planning the Next View
To resolve the ambiguities caused by occlusions in images, sensor measurements must be taken from several different views. This paper addresses a strategy for acquiring 3-D data of an unknown scene. We must first answer the question: what knowledge is adequate to perform a specific task? In the spirit of purposive vision, a system need not understand the complete scene to accomplish its task; it must only recognize the patterns and situations necessary for accomplishing it. We limit ourselves to range images obtained by a light stripe range finder, and the a priori knowledge given to the system is the sensor geometry.

The foci of attention are occluded regions: only the scene at the borders of the occlusions is modeled to compute the next move. Because the system knows the sensor geometry, it can resolve the appearance of occlusions by analyzing them. The problem of 3-D data acquisition is divided into two subproblems corresponding to two types of occlusion: an occlusion arises either when the reflected laser light does not reach the camera or when the directed laser light does not reach the scene surface.

After the range image of a scene has been taken, the regions of missing data due to the first kind of occlusion are extracted. These data are acquired by rotating the sensor system in the scanning plane defined by the first scan. Once a complete image of the surface illuminated from the first scanning plane has been built, the regions of missing data due to the second kind of occlusion are located, and the directions of the next scanning planes for further 3-D data acquisition are computed.
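The first step of the strategy, extracting runs of missing range data and attributing each to one of the two occlusion types, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a 1-D scan profile with `None` marking missing samples, a hypothetical `camera_side` parameter recording on which side of the laser the camera sits, and the simplified triangulation rule that a shadow region adjoins an occluding edge; if that edge blocks the camera's line of sight, the region is a camera occlusion, otherwise the laser stripe never reached the surface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class OcclusionRegion:
    start: int  # index of the first missing sample
    end: int    # index of the last missing sample (inclusive)
    kind: str   # "camera", "laser", or "unknown" (hypothetical labels)

def find_occlusion_regions(profile: List[Optional[float]],
                           camera_side: str = "left") -> List[OcclusionRegion]:
    """Extract runs of missing data from a 1-D range profile and classify
    each run by the depth step at its borders (simplified assumption:
    the nearer border surface is the occluder, and the side on which it
    lies tells whether the camera ray or the laser ray was blocked)."""
    regions = []
    i, n = 0, len(profile)
    while i < n:
        if profile[i] is None:
            start = i
            while i < n and profile[i] is None:
                i += 1
            end = i - 1
            left = profile[start - 1] if start > 0 else None
            right = profile[i] if i < n else None
            if left is not None and right is not None:
                occluder_on_left = left < right  # nearer surface occludes
                if (occluder_on_left and camera_side == "left") or \
                   (not occluder_on_left and camera_side == "right"):
                    kind = "camera"  # reflected light blocked before the camera
                else:
                    kind = "laser"   # stripe never reached the surface
            else:
                kind = "unknown"     # run touches the image border
            regions.append(OcclusionRegion(start, end, kind))
        else:
            i += 1
    return regions
```

For a profile such as `[1.0, 1.0, None, None, 3.0, 3.0, None, 2.0]` with the camera on the left, the first missing run borders a near surface on the camera side and is labeled a camera occlusion, while the second is labeled a laser occlusion; in the paper's scheme, the first type would drive a rotation within the current scanning plane and the second the choice of a new scanning plane.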