An Image-Based Framework For Global Illumination In Animated Environments

Author
Nimeroff, Jeffry S.
Abstract

Interacting with environments exhibiting the subtle lighting effects found in the real world gives designers a better understanding of a scene's structure by providing rich visual cues. The major hurdle is that global illumination algorithms are too inefficient to compute their solutions quickly for reasonably sized environments. When motion is allowed within the environment, the problem becomes even less tractable. We address the problem of sampling and reconstructing an environment's time-varying radiance distribution, its spatio-temporal global illumination information, allowing the efficient generation of arbitrary views of the environment at arbitrary points in time. The radiance distribution formalizes incoming chromatic radiance at all points within a constrained view space, along all directions, at all times. Since these distributions cannot, in general, be calculated analytically, we introduce a framework for specifying and computing sample values from the distribution, and we progress through a series of sample-based approximations designed to allow easy and accurate reconstruction of images extracted from the distribution. The first approximation is based on storing time-sequences of images at strategic locations within the chosen view space. An image of the environment is constructed by first blending the images contained in the individual time-sequences to obtain the desired time and then using view interpolation to merge the proximate views. The results presented here demonstrate the feasibility and utility of the method but also expose its major drawback: image sequences cannot accurately model temporal radiance variations without resorting to a high sampling rate. This shortcoming leads us to replace the image sequences with a sparse temporal image volume representation that stores randomly, or adaptively, placed radiance samples. Triangulation techniques are then used to reconstruct the radiance distribution at the desired time from a proximate set of stored spatio-temporal radiance samples. The results show that temporal image volumes permit more accurate and efficient temporal reconstruction, with less sampling, than the more traditional time-sequence approach.
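The first approximation described in the abstract amounts to two interpolation passes: one in time within each stored image sequence, and one in space across the proximate key viewpoints. The Python sketch below illustrates that structure under simplifying assumptions; the function names (`image_at_time`, `reconstruct_view`) and the inverse-distance weighting that stands in for the dissertation's view-interpolation step are hypothetical, not the author's implementation.

```python
import numpy as np

def image_at_time(sequence, times, t):
    """Linearly blend the two stored frames that bracket time t.

    `sequence` is a list of H x W x 3 radiance images; `times` holds the
    (sorted) sample times at which those frames were computed.
    """
    i = int(np.clip(np.searchsorted(times, t), 1, len(times) - 1))
    t0, t1 = times[i - 1], times[i]
    w = (t - t0) / (t1 - t0) if t1 > t0 else 0.0
    w = min(max(w, 0.0), 1.0)          # clamp instead of extrapolating
    return (1.0 - w) * sequence[i - 1] + w * sequence[i]

def reconstruct_view(key_views, query_pos, t):
    """Combine temporally interpolated images from nearby key viewpoints.

    `key_views` maps a viewpoint position (tuple) to (sequence, times).
    A full implementation would re-project each key image toward the
    query viewpoint (view interpolation); here simple inverse-distance
    weights stand in for that step.
    """
    blended, weights = [], []
    for pos, (sequence, times) in key_views.items():
        blended.append(image_at_time(sequence, times, t))
        d = np.linalg.norm(np.asarray(query_pos, float) - np.asarray(pos, float))
        weights.append(1.0 / (d + 1e-6))
    weights = np.asarray(weights)
    weights /= weights.sum()
    return sum(w * img for w, img in zip(weights, blended))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two key viewpoints, each with a 3-frame time sequence of 4x4 RGB "images".
    key_views = {
        (0.0, 0.0, 0.0): ([rng.random((4, 4, 3)) for _ in range(3)],
                          np.array([0.0, 0.5, 1.0])),
        (1.0, 0.0, 0.0): ([rng.random((4, 4, 3)) for _ in range(3)],
                          np.array([0.0, 0.5, 1.0])),
    }
    frame = reconstruct_view(key_views, query_pos=(0.3, 0.0, 0.0), t=0.2)
    print(frame.shape)  # (4, 4, 3)
```

The second approximation replaces these regular per-viewpoint sequences with scattered spatio-temporal samples reconstructed by triangulation, which the sketch above does not attempt to model.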

Date of degree
1997-12-01
Comments
University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-98-03.