Neural Compression: Estimating and Achieving the Fundamental Limits
Discipline
Computer Sciences
Subject
lattice coding
neural compression
rate-distortion
Abstract
Neural compression, which refers to compression schemes that are learned from data using neural networks, has emerged as a powerful approach for compressing real-world data. Neural compressors often outperform classical schemes, especially in settings where reconstructions that are perceptually similar to the source are desired. Despite their empirical success, the fundamental principles governing how neural compressors operate, perform, and trade off performance with complexity are not well understood compared to classical schemes. We aim to develop some of the fundamental principles of neural compression. We first introduce neural estimation methods that, using techniques from generative modeling, estimate the theoretical rate-distortion limits of lossy compression for high-dimensional sources. These methods show that recent neural compressors are suboptimal. Next, we build on these insights to design neural compressors that approach optimality yet remain low-complexity through the use of lattice coding techniques; these are shown to approach the rate-distortion limits on high-dimensional sources without incurring a significant increase in complexity. Finally, we develop low-complexity compressors for the rate-distortion-perception setting, in which an additional perception constraint requires the source and reconstruction distributions to be close in terms of a statistical divergence. These compressors combine lattice coding with shared randomness in the form of dithering over the lattice cells, and provably achieve the fundamental rate-distortion-perception limits for the Gaussian source.
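As a minimal sketch of the shared-randomness dithering idea mentioned in the abstract, the snippet below implements scalar dithered quantization over the one-dimensional lattice delta*Z with a toy Gaussian source; the function names, the choice of lattice, and the parameters are illustrative assumptions, not the thesis's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, delta, u):
    """Encoder: quantize the dithered source x + u to the lattice delta*Z."""
    return delta * np.round((x + u) / delta)

def reconstruct(q, u):
    """Decoder: subtract the shared dither from the quantized value."""
    return q - u

# Toy example: scalar Gaussian source, integer lattice scaled by delta.
delta = 0.5
x = rng.normal(0.0, 1.0, size=10_000)
# Shared randomness: a dither drawn uniformly over the base lattice cell,
# known to both encoder and decoder.
u = rng.uniform(-delta / 2, delta / 2, size=x.shape)

q = dithered_quantize(x, delta, u)
x_hat = reconstruct(q, u)

# Defining property of (subtractive) dithered quantization: the error
# x_hat - x is uniform over the lattice cell and independent of x,
# so its variance is delta**2 / 12.
print(np.var(x_hat - x), delta**2 / 12)
```

The defining property illustrated here, that the reconstruction error is uniformly distributed over the lattice cell and independent of the source, is one reason shared randomness via dithering is useful when a distributional (perception) constraint must be met.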
Advisor
Hassani, Hamed