BEYOND FRAMES: LEARNING TO PERCEIVE WITH EVENT-BASED VISION
Discipline: Electrical Engineering
Subject: Neuromorphic Vision
Abstract
Conventional frame-based vision systems face inherent limitations in high-speed, high-dynamic-range settings, where motion blur, latency, and temporal aliasing degrade performance. Event cameras, neuromorphic sensors that asynchronously report per-pixel brightness changes, offer a fundamentally different visual signal that is sparse, low-latency, and temporally precise. These properties create new algorithmic challenges and opportunities for core vision tasks such as reconstruction, tracking, and segmentation, and their robustness to challenging lighting and fast motion gives them unique advantages in high-speed robotic tasks.

This thesis presents a collection of methods that advance event-based perception across segmentation, 3D reconstruction, tracking, and video synthesis:

- EvAC3D introduces a temporally continuous 3D reconstruction framework that uses apparent contour events and a novel voxel carving algorithm for mesh generation.
- EV-Catcher demonstrates low-latency trajectory estimation for intercepting fast-moving objects in real time, combining compact binary event representations with a confidence-aware neural architecture.
- Un-EVIMO proposes an unsupervised approach to independent motion segmentation in event space, leveraging ego-motion field consistency to generate supervisory signals without ground-truth labels.
- ContinuityCam reconstructs photorealistic video from a single image and an event stream, learning temporally coherent frame synthesis via implicit flow and latent color representations.
- EvHuman presents the first feed-forward method for continuous-time human mesh recovery from events, using a time-implicit neural motion prior.

Collectively, these contributions build a unified foundation for temporally continuous, event-driven visual understanding across multiple levels of abstraction, advancing neuromorphic vision and enabling fast, robust, low-power perception for robotics and beyond.