Recent advances in computing have enabled fast reconstruction of dynamic scenes from multiple images. However, the efficient coding of time-varying 3D data has hardly been addressed: progressive geometric compression and streaming have focused on static data sets, which are mostly artificial or obtained from accurate range sensors. In this paper, we present a system for efficient coding of 3D data given in the form of 2½-D disparity maps. Disparity maps are spatially coded using wavelets and temporally predicted by computing optical flow. The resulting representation of a 3D stream then consists of spatial wavelet coefficients, optical flow vectors, and disparity differences between the predicted and incoming disparity maps. The approach also has very useful by-products: disparity predictions can significantly reduce the disparity search range and, if appropriately modeled, increase the accuracy of depth estimation.
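The temporal-prediction step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes dense disparity maps as NumPy arrays, a given per-pixel flow field, and nearest-neighbour warping; the function names are hypothetical.

```python
import numpy as np

def warp_by_flow(prev_disparity, flow):
    """Predict the current disparity map by warping the previous one
    along the optical-flow field (nearest-neighbour sampling for brevity).
    flow[..., 0] is the horizontal, flow[..., 1] the vertical displacement."""
    h, w = prev_disparity.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # For each pixel in the current frame, look up where it came from
    # in the previous frame (backward warping), clipped to the image.
    src_x = np.clip(np.round(xs - flow[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.round(ys - flow[..., 1]).astype(int), 0, h - 1)
    return prev_disparity[src_y, src_x]

def temporal_residual(prev_disparity, curr_disparity, flow):
    """Difference between the incoming disparity map and its prediction.
    In a coder of this kind, only the flow vectors and this (typically
    sparse, low-energy) residual need to be transmitted per frame."""
    prediction = warp_by_flow(prev_disparity, flow)
    return curr_disparity - prediction, prediction
```

When the flow is accurate, the residual is near zero over most of the image, which is what makes the predicted representation cheap to code; the prediction also bounds the disparity search range for the next frame, as noted above.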
Date Posted: 30 October 2006