EVENT-BASED VISION SYSTEMS: BRIDGING BIOLOGICAL INSPIRATION AND ROBOTIC PERCEPTION
Subject: Learning, Neuromorphic, Structured Light
Abstract
Through exploring the relationship between biological vision systems and artificial event-based cameras, this research investigates fundamental aspects of motion perception and structural understanding in dynamic environments. The study draws inspiration from the diverse, highly parallel visual systems found across the animal kingdom, emphasizing the significance of visual processing in animals from simple insects to more complex organisms. Biologically inspired vision served as the basis for investigating silicon implementations of retinas, specifically event-based cameras that emulate retinal function. These cameras offer high dynamic range, low latency, and reduced power consumption. The shift from frame-based to event-based sensing necessitates novel algorithms and a reevaluation of conventional computer vision techniques to accommodate the sparse nature of the data generated. Central to this research is the adaptation of biological concepts to fundamental computer vision problems. Spiking Neural Networks (SNNs) mimic biological neural networks and provide a computational framework for developing efficient, robust vision systems. Hard-wired visual priors for complex illumination achieve the efficiency seen in phototactic sub-systems across the animal kingdom. Drawing inspiration from biology not only for tasks solved in nature, but also for tasks nature never faced, allows the advantages of event-based camera systems to be fully exploited.
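To make the sparse, asynchronous nature of the data concrete, the following is a minimal sketch of how an event camera pixel can be modeled: each pixel independently emits an (x, y, t, polarity) event whenever its log intensity changes by more than a contrast threshold. The function name, threshold value, and toy scene are illustrative assumptions, not part of the thesis.

```python
import numpy as np

def frames_to_events(prev_log, curr_log, t, threshold=0.2):
    """Emit (x, y, t, polarity) events wherever log intensity changed
    by more than the contrast threshold -- a simplified, illustrative
    model of an event camera pixel (threshold value is an assumption)."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) >= threshold)
    polarity = np.sign(diff[ys, xs]).astype(int)  # +1 brighter, -1 darker
    return [(int(x), int(y), t, int(p)) for x, y, p in zip(xs, ys, polarity)]

# Toy scene: a bright square moves one pixel to the right.
prev = np.zeros((4, 4)); prev[1:3, 0:2] = 1.0
curr = np.zeros((4, 4)); curr[1:3, 1:3] = 1.0
events = frames_to_events(np.log1p(prev), np.log1p(curr), t=0.001)
```

Only the pixels at the moving edges fire (OFF events on the trailing edge, ON events on the leading edge); the unchanged interior produces no data at all, which is the source of the sparsity and low latency discussed above.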
This research encompasses several technical contributions: (i) learning scene structure from event streams, employing planar parallax for regularization and a metric representation of scene depth, (ii) a technique for reconstructing 3D objects using key events, illustrating the unique advantages of event-based cameras, (iii) a shallow spiking neural network architecture for efficient and robust motion computation, leveraging synaptic delay processes in SNNs, (iv) an examination of modern high-resolution event-based cameras, highlighting the challenges and potential in semantically rich decision making, (v) a structured light pipeline that offers low-latency, high-throughput depth estimation with a high-resolution event-based camera. In summary, this research bridges the gap between biological vision systems and artificial visual technologies, providing insight into the critical properties of vision processing in the natural world and paving the way for advanced robotic and mobile applications with event-based cameras.
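The role of synaptic delays in contribution (iii) can be sketched with a toy leaky integrate-and-fire neuron wired to two neighboring pixels, one through a delayed synapse: the membrane potential only crosses threshold when events arrive in the order and at the speed the delay matches, yielding direction selectivity. All constants and the two-pixel setup here are illustrative assumptions, not the architecture from the thesis.

```python
def lif_motion_detector(events, delay=2, weight=1.0, threshold=1.5, leak=0.5):
    """Toy leaky integrate-and-fire neuron with two synapses: pixel 0
    connects via a synapse delayed by `delay` timesteps, pixel 1 connects
    directly. Coincident arrival (left-to-right motion at the matching
    speed) drives the membrane over threshold; reversed motion does not.
    `events` maps timestep -> list of active pixel ids. Illustrative only."""
    v = 0.0
    spikes = []
    for t in range(max(events) + delay + 1):
        v *= leak                          # membrane leak each timestep
        for px in events.get(t - delay, []):
            if px == 0:                    # delayed synapse from pixel 0
                v += weight
        for px in events.get(t, []):
            if px == 1:                    # direct synapse from pixel 1
                v += weight
        if v >= threshold:
            spikes.append(t)
            v = 0.0                        # reset after spiking
    return spikes

# An edge moving left-to-right hits pixel 0 at t=0 and pixel 1 at t=2;
# the reversed stimulus hits pixel 1 first.
rightward = lif_motion_detector({0: [0], 2: [1]})  # delayed + direct inputs coincide
leftward = lif_motion_detector({0: [1], 2: [0]})   # inputs never coincide
```

The rightward stimulus produces a spike while the leftward one does not, showing how a fixed delay acts as a hard-wired temporal prior for one motion direction and speed.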