Center for Human Modeling and Simulation
Document Type
Conference Paper
Date of this Version
2014
Publication Source
Symposium on Interactive 3D Graphics and Games (I3D '14)
Start Page
23
Last Page
30
DOI
10.1145/2556700.2556718
Abstract
With the increasing realism of interactive applications, there is a growing need for harnessing additional sensory modalities such as hearing. While the synthesis and propagation of sounds in virtual environments have been explored, there has been little work that addresses sound localization and its integration into behaviors for autonomous virtual agents. This paper develops a framework that enables autonomous virtual agents to localize sounds in dynamic virtual environments, subject to distortion effects due to attenuation, reflection, and diffraction from obstacles, as well as interference between multiple audio signals. We additionally integrate hearing into standard predictive collision avoidance techniques and couple it with vision to allow agents to react to what they see and hear while navigating in virtual environments.
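To make the attenuation aspect of the abstract concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: it assumes simple free-field 1/r amplitude falloff and steers toward the loudest audible source. The names SoundSource, perceived_amplitude, and localize_loudest are invented for this illustration; the paper's framework additionally models reflection, diffraction, and interference between multiple signals, all of which are omitted here.

    # Hypothetical sketch: free-field attenuation and loudest-source localization.
    import math
    from dataclasses import dataclass

    @dataclass
    class SoundSource:
        x: float
        y: float
        amplitude: float  # amplitude at a reference distance of 1 unit

    def perceived_amplitude(src: SoundSource, ax: float, ay: float) -> float:
        """Free-field model: amplitude falls off as 1/r with distance."""
        r = max(math.hypot(src.x - ax, src.y - ay), 1e-6)
        return src.amplitude / r

    def localize_loudest(sources, ax, ay, threshold=0.05):
        """Return a unit vector toward the loudest audible source, or None."""
        best, best_amp = None, threshold
        for s in sources:
            a = perceived_amplitude(s, ax, ay)
            if a > best_amp:
                best, best_amp = s, a
        if best is None:
            return None  # nothing exceeds the hearing threshold
        dx, dy = best.x - ax, best.y - ay
        r = math.hypot(dx, dy)
        return (dx / r, dy / r)

    if __name__ == "__main__":
        sources = [SoundSource(10.0, 0.0, 1.0), SoundSource(0.0, 3.0, 0.5)]
        # Points toward the nearer/louder source relative to an agent at the origin.
        print(localize_loudest(sources, 0.0, 0.0))

In a steering context, the returned direction would be blended with a vision-based collision avoidance velocity; that coupling is the multi-modal integration the abstract describes.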
Copyright/Permission Statement
© Wang et al. | ACM 2014. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Symposium on Interactive 3D Graphics and Games (I3D '14), http://dx.doi.org/10.1145/2556700.2556718.
Keywords
virtual agents, artificial life, acoustics, localization, steering
Recommended Citation
Wang, Y., Kapadia, M., Huang, P., Kavan, L., & Badler, N. I. (2014). Sound Localization and Multi-Modal Steering for Autonomous Virtual Agents. Symposium on Interactive 3D Graphics and Games (I3D '14), 23-30. http://dx.doi.org/10.1145/2556700.2556718
Date Posted: 13 January 2016
Comments
I3D 2014 was held March 14-16, 2014, in San Francisco, California, USA.