Navigation in Depth Images
The ability to faithfully represent and reason about fine geometric detail enables robot motion and manipulation planning in confined or cluttered spaces. However, the capacity to represent fine detail with accuracy and precision can limit scalability by incurring overhead commensurate with expressivity. This thesis describes a technique for building large-scale atlases of interconnected, high-detail maps by constructing a pose graph whose nodes are annotated with wide-angle panoramic depth images. These images are fast to build and update, efficiently represent fine geometric detail, and provide a natural structuring mechanism by which to decompose a large map into loosely coupled pieces.
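The atlas structure described above can be sketched as a pose graph whose nodes each pair a pose with a panoramic depth image and whose edges hold relative transforms between neighboring nodes. The sketch below is illustrative only; the class and field names (`MapNode`, `PoseGraphAtlas`, image dimensions) are assumptions, not the thesis implementation.

```python
import numpy as np


class MapNode:
    """One pose-graph node: a global pose plus a panoramic depth keyframe.

    Names and sizes are illustrative, not the thesis implementation.
    """

    def __init__(self, pose, depth_panorama):
        self.pose = pose             # 4x4 homogeneous transform in the map frame
        self.depth = depth_panorama  # H x W panoramic depth image (meters)
        self.edges = {}              # neighbor node id -> relative transform


class PoseGraphAtlas:
    """Atlas of loosely coupled local maps linked by relative-pose edges."""

    def __init__(self):
        self.nodes = {}
        self.next_id = 0

    def add_node(self, pose, depth_panorama):
        nid = self.next_id
        self.nodes[nid] = MapNode(pose, depth_panorama)
        self.next_id += 1
        return nid

    def add_edge(self, a, b, t_ab):
        # Store the edge in both directions: T_ab and its inverse T_ba,
        # so loop-closure constraints can be traversed from either node.
        self.nodes[a].edges[b] = t_ab
        self.nodes[b].edges[a] = np.linalg.inv(t_ab)
```

Because each node carries its own depth panorama, a node (and its local detail) can be added, updated, or relocalized without touching the rest of the atlas, which is what keeps the pieces loosely coupled.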
We demonstrate simultaneous localization and mapping indoors and outdoors, with the sensor carried by a walking human, a running human, a walking quadrupedal robot, a wheeled robot, a flying quadcopter, and an automobile on public roads. Tracking is maintained using a spinning lidar sensor with a 32° vertical field of view, even while the sensor undergoes instantaneous roll rotations of greater than 80°/s. The environments considered range from caves and tunnels, to dense forests, to cluttered lab spaces. The mapper proves itself capable of identifying loop closures over kilometers-long traversals with metric errors of less than one percent. Additionally, mapper performance is evaluated on embedded GPUs and demonstrated to run substantially faster than real time, while requiring storage on the order of 100 kilobytes per meter traveled to retain detailed maps of large spatial extent.