Sound Localization and Multi-Modal Steering for Autonomous Virtual Agents

Penn collection
Center for Human Modeling and Simulation
Subject
virtual agents
artificial life
acoustics
localization
steering
Discipline
Computer Sciences
Engineering
Graphics and Human Computer Interfaces
Abstract

With the increasing realism of interactive applications, there is a growing need to harness additional sensory modalities such as hearing. While the synthesis and propagation of sounds in virtual environments have been explored, little work has addressed sound localization and its integration into behaviors for autonomous virtual agents. This paper develops a framework that enables autonomous virtual agents to localize sounds in dynamic virtual environments, subject to distortion effects due to attenuation, reflection, and diffraction from obstacles, as well as interference between multiple audio signals. We additionally integrate hearing into standard predictive collision avoidance techniques and couple it with vision, allowing agents to react to what they see and hear while navigating in virtual environments.
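The paper's own localization model is not reproduced here; as a minimal, hypothetical sketch of the distance-attenuation idea the abstract mentions, the snippet below attenuates point sources by the inverse-square law and lets a listening agent pick the loudest one. The function names and the clamped minimum distance are illustrative assumptions, not the paper's API.

```python
import math

def attenuated_intensity(source_intensity, source_pos, listener_pos, min_dist=1.0):
    """Inverse-square distance attenuation of a point sound source.

    min_dist clamps the distance so intensity stays finite near the source
    (an assumed convention, common in game audio engines).
    """
    d = max(min_dist, math.dist(source_pos, listener_pos))
    return source_intensity / (d * d)

def loudest_source(sources, listener_pos):
    """Return the position of the source the listener hears loudest.

    sources is a list of (intensity, position) pairs.
    """
    return max(sources, key=lambda s: attenuated_intensity(s[0], s[1], listener_pos))[1]

# Example: an agent at the origin hears two equally loud sources;
# the nearer one dominates after attenuation.
sources = [(10.0, (4.0, 0.0)), (10.0, (2.0, 0.0))]
print(loudest_source(sources, (0.0, 0.0)))
```

A full treatment, as the abstract notes, would also model reflection, diffraction, and interference between signals; this sketch covers only free-field attenuation.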

Date of presentation
2014-01-01
Conference name
I3D 2014
Conference dates
March 14-16, 2014
Conference location
San Francisco, California, USA
Comments
I3D 2014 was held March 14-16, 2014, in San Francisco, California, USA.