A real-time sound packet propagation and perception model for agents in dynamic environments

Pengfei Huang, University of Pennsylvania

Abstract

Virtual human simulations often incorporate visual perception into the computation of navigation, collision avoidance, and behavior selection; the acoustic domain, however, is relatively unexplored. We investigate how the realism of virtual humans and crowds depends on other modes of communication, specifically the audio channel, and develop models for sound propagation, localization, and perception in this context. Recent work in acoustics focuses on synthesizing sound in 3D environments; the perception of acoustic signals by a virtual agent, however, is a useful and realistic adjunct to any behavior selection mechanism. Previous approaches to signal propagation and statistical models of sound recognition have been too computationally expensive to integrate into real-time autonomous agent simulations. In this dissertation, we present SPREAD, a novel agent-based sound perception model that uses a discretized sound packet representation with acoustic features including amplitude, pitch range, and duration. SPREAD simulates how these features are propagated, attenuated, spread, and degraded as sound traverses the virtual environment, and models how packets arriving at the same location combine and interfere. Agents perceive and classify sounds from the locally received packet set using a hierarchical clustering scheme, so each agent has an individualized hearing and understanding of its surroundings. Using this model, we demonstrate several simulations in which the audio channel greatly enriches behavior controls and simulation outcomes.
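To make the packet representation concrete, the following is a minimal sketch, not the dissertation's actual implementation: it assumes a uniform 2D occupancy-grid discretization, breadth-first spreading, and a fixed per-cell attenuation in dB. All names (SoundPacket, propagate, atten_per_cell, floor) and constants are hypothetical illustrations of the amplitude/pitch-range/duration features and the propagation-with-attenuation behavior the abstract describes.

    # Hypothetical sketch of a discretized sound-packet model in the spirit
    # of SPREAD; the grid discretization, names, and attenuation constants
    # are illustrative assumptions, not the dissertation's code.
    from dataclasses import dataclass, replace
    from collections import deque

    @dataclass(frozen=True)
    class SoundPacket:
        amplitude: float   # loudness, e.g. in dB
        pitch_lo: float    # lower bound of pitch range, Hz
        pitch_hi: float    # upper bound of pitch range, Hz
        duration: float    # seconds

    def propagate(grid, source, packet, atten_per_cell=3.0, floor=10.0):
        """Breadth-first spread of a packet over a 2D occupancy grid.

        grid[y][x] is True where a cell is blocked. Amplitude drops by
        atten_per_cell dB per cell travelled, and the packet stops
        spreading once it falls below the audibility floor. Returns a
        dict mapping each reachable cell to the degraded packet heard
        there.
        """
        h, w = len(grid), len(grid[0])
        heard = {source: packet}
        frontier = deque([source])
        while frontier:
            x, y = frontier.popleft()
            weaker = replace(heard[(x, y)],
                             amplitude=heard[(x, y)].amplitude - atten_per_cell)
            if weaker.amplitude < floor:
                continue  # too faint to spread further
            for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if 0 <= nx < w and 0 <= ny < h and not grid[ny][nx] \
                        and (nx, ny) not in heard:
                    heard[(nx, ny)] = weaker
                    frontier.append((nx, ny))
        return heard

    if __name__ == "__main__":
        # 3x3 room with one blocked cell; a shout emitted at a corner.
        room = [[False, False, False],
                [False, True,  False],
                [False, False, False]]
        shout = SoundPacket(amplitude=80.0, pitch_lo=200.0,
                            pitch_hi=600.0, duration=0.5)
        for cell, p in sorted(propagate(room, (0, 0), shout).items()):
            print(cell, round(p.amplitude, 1))

An agent standing at a given cell would then cluster the set of packets it receives there, for example by amplitude and pitch range, to classify the sound, along the lines of the hierarchical clustering scheme the abstract mentions.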

Subject Area

Computer science

Recommended Citation

Huang, Pengfei, "A real-time sound packet propagation and perception model for agents in dynamic environments" (2015). Dissertations available from ProQuest. AAI3721261.
https://repository.upenn.edu/dissertations/AAI3721261
