Makadia, Ameesh
Search Results: 2 of 2
Publication: Fully Automatic Registration of 3D Point Clouds (2006-01-01)
Makadia, Ameesh; Patterson, Alexander; Daniilidis, Kostas
We propose a novel technique for the registration of 3D point clouds which makes very few assumptions: we avoid any manual rough alignment or the use of landmarks, displacement can be arbitrarily large, and the two point sets can have very little overlap. Crude alignment is achieved by estimating the 3D rotation from two Extended Gaussian Images (EGIs), even when the data sets inducing them overlap only partially. The technique is based on the correlation of the two EGIs in the Fourier domain and makes use of the spherical and rotational harmonic transforms. For pairs with low overlap which fail a critical verification step, the rotational alignment can instead be obtained by aligning constellation images generated from the EGIs. Rotationally aligned sets are then matched by correlation using the Fourier transform of volumetric functions. A fine alignment is acquired in the final step by running Iterative Closest Points for just a few iterations.

Publication: Planar Ego-Motion Without Correspondences (2005-01-01)
Makadia, Ameesh; Gupta, Dinkar; Daniilidis, Kostas
General structure-from-motion methods are not adept at dealing with constrained camera motions, even though such motions greatly simplify vision tasks like mobile robot localization. Typical ego-motion techniques designed for this purpose require locating feature correspondences between images. However, there are many cases where features cannot be matched robustly. For example, images from panoramic sensors are limited by nonuniform angular sampling, which can complicate the feature matching process under wide-baseline motions. In this paper we compute the planar ego-motion of a spherical sensor without correspondences. We propose a generalized Hough transform on the space of planar motions. Our transform directly processes the information contained within all possible feature pair combinations between two images, thereby circumventing the need to isolate the best corresponding matches. We generate the Hough space efficiently by studying the spectral information contained in images of the feature pairs and by treating the Hough transform as a correlation of such feature pair images.
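To illustrate the coarse-alignment idea from "Fully Automatic Registration of 3D Point Clouds", here is a minimal sketch of building Extended Gaussian Images from surface normals and finding the rotation that best correlates them. It is not the paper's method: the paper correlates the EGIs in the Fourier domain via spherical and rotational harmonic transforms, while this stand-in searches a coarse rotation grid directly. The function names, bin counts, and the assumption that normals are already available are all illustrative.

```python
# Sketch only: brute-force EGI correlation as a stand-in for the paper's
# spherical/rotational harmonic correlation. Assumes unit normals are given.
import numpy as np
from scipy.spatial.transform import Rotation


def extended_gaussian_image(normals, n_bins=16):
    """Histogram unit normals over a polar/azimuth grid on the sphere."""
    theta = np.arccos(np.clip(normals[:, 2], -1.0, 1.0))        # polar angle
    phi = np.arctan2(normals[:, 1], normals[:, 0]) % (2 * np.pi)  # azimuth
    egi, _, _ = np.histogram2d(theta, phi, bins=n_bins,
                               range=[[0, np.pi], [0, 2 * np.pi]])
    return egi / max(egi.sum(), 1.0)


def coarse_rotation_by_correlation(normals_a, normals_b, n_bins=16, step_deg=30):
    """Return the grid rotation whose EGI correlation score is highest."""
    egi_a = extended_gaussian_image(normals_a, n_bins)
    best_R, best_score = np.eye(3), -np.inf
    for za in np.arange(0, 360, step_deg):
        for ya in np.arange(0, 180 + step_deg, step_deg):
            for zb in np.arange(0, 360, step_deg):
                R = Rotation.from_euler("zyz", [za, ya, zb],
                                        degrees=True).as_matrix()
                egi_b = extended_gaussian_image(normals_b @ R.T, n_bins)
                score = (egi_a * egi_b).sum()   # correlation of the two EGIs
                if score > best_score:
                    best_R, best_score = R, score
    return best_R
```

In the paper, the coarse rotation is followed by a translation estimate (correlation of volumetric functions via the FFT) and a fine alignment with a few iterations of ICP; an off-the-shelf ICP implementation could be seeded with the rotation returned above.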
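For "Planar Ego-Motion Without Correspondences", the sketch below shows the correspondence-free voting idea in its simplest form: every feature pair between the two images votes for the planar motions it is consistent with. The paper builds this Hough space efficiently as a spectral correlation of feature-pair images; here it is replaced by a direct grid vote over a yaw angle and a translation direction, under the assumption that features are given as unit bearing vectors on the sphere. All names and thresholds are illustrative.

```python
# Sketch only: direct generalized Hough vote over planar motions (yaw psi,
# translation direction phi), using every feature pair between two images.
# The paper instead evaluates this space via spectral correlation.
import numpy as np


def planar_hough_votes(bearings1, bearings2, n_bins=60, tol=0.01):
    """Return the (psi, phi) grid cell consistent with the most feature pairs.

    bearings1, bearings2: (N, 3) and (M, 3) unit bearing vectors on the sphere.
    """
    psis = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    phis = np.linspace(-np.pi, np.pi, n_bins, endpoint=False)
    votes = np.zeros((n_bins, n_bins))
    for i, psi in enumerate(psis):
        c, s = np.cos(psi), np.sin(psi)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # yaw only
        for j, phi in enumerate(phis):
            t = np.array([np.cos(phi), np.sin(phi), 0.0])  # in-plane direction
            tx = np.array([[0.0, -t[2], t[1]],
                           [t[2], 0.0, -t[0]],
                           [-t[1], t[0], 0.0]])            # skew matrix [t]_x
            E = tx @ R                                     # essential matrix
            # Epipolar residuals |q^T E p| for all feature pairs at once.
            residuals = np.abs(bearings2 @ E @ bearings1.T)
            votes[i, j] = np.count_nonzero(residuals < tol)
    k = np.unravel_index(np.argmax(votes), votes.shape)
    return psis[k[0]], phis[k[1]]
```

Because every pair contributes a vote, no explicit matching step is needed; mismatched pairs simply spread their votes across the motion space while true pairs concentrate on the correct cell, which mirrors the abstract's point about circumventing correspondence isolation.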