Huang, Pengfei

Search Results

  • Publication
    Real-Time Evacuation Simulation in Mine Interior Model of Smoke and Action
    (2010-01-01) Huang, Pengfei; Kider, Joseph T; Sunshine-Hill, Ben; McCaffrey, Jonathan B.; Rios, Desiree Velazquez; Badler, Norman I; Kang, Jinsheng
    Virtual human crowd models have been used in the simulation of building and urban evacuation, but have not yet been applied to underground coal mine operations and escape situations with emphasis on smoke, fires and physiological behaviors. We explore this through a real-time simulation model, MIMOSA (Mine Interior Model Of Smoke and Action), which integrates an underground coal mine virtual environment, a fire and smoke propagation model, and a human physiology and behavior model. Each individual agent has a set of physiological parameters as variables of time and environment, simulating a miner’s physiological condition during normal operations as well as during emergencies due to fire and smoke. To obtain appropriate agent navigation in the mine environment, we have extended the HiDAC framework (High-Density Autonomous Crowds) navigation from a grid-based cell-portal graph to a geometry-based portal path and integrated a novel cell-portal and shortest path visibility algorithm.
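
To make the navigation idea above concrete, here is a minimal sketch of a cell-portal graph search: mine cells joined by portals, with Dijkstra run over (cell, entry-portal) states using portal midpoints as waypoints. The cell layout, coordinates, and cost model are illustrative assumptions, not the actual MIMOSA data structures or its geometry-based visibility algorithm.

```python
import heapq
import itertools
import math

# Hypothetical cell-portal graph for a small mine section: cells are rooms or
# corridor segments, portals are the openings between them, keyed by the pair
# of cells they join. Coordinates are made up for illustration.
PORTALS = {                     # (cell, cell) -> portal midpoint (x, y)
    ("A", "B"): (10.0, 0.0),
    ("B", "C"): (20.0, 5.0),
    ("B", "D"): (20.0, -5.0),
    ("C", "E"): (30.0, 5.0),
    ("D", "E"): (30.0, -5.0),
}

def cell_portals(cell):
    return [p for p in PORTALS if cell in p]

def other_cell(portal, cell):
    return portal[0] if portal[1] == cell else portal[1]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def portal_route(start_pos, start_cell, goal_pos, goal_cell):
    """Dijkstra over (cell, entry-portal) states; path cost is the summed
    distance between successive portal midpoints plus the final leg."""
    tie = itertools.count()     # unique tie-breaker so the heap never compares routes
    frontier = [(0.0, next(tie), start_cell, None, [])]
    settled = set()
    while frontier:
        cost, _, cell, arrived, route = heapq.heappop(frontier)
        if (cell, arrived) in settled:
            continue
        settled.add((cell, arrived))
        pos = start_pos if arrived is None else PORTALS[arrived]
        if cell == goal_cell:
            return cost + dist(pos, goal_pos), route
        for portal in cell_portals(cell):
            if portal == arrived:
                continue
            mid = PORTALS[portal]
            heapq.heappush(frontier, (cost + dist(pos, mid), next(tie),
                                      other_cell(portal, cell), portal, route + [portal]))
    return math.inf, []

# Example: escape route from cell "A" toward an exit located in cell "E".
total, route = portal_route((0.0, 0.0), "A", (40.0, 0.0), "E")
print(route, round(total, 1))
```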
  • Publication
    Smart Events and Primed Agents
    (2010-01-01) Stocker, Catherine; Huang, Pengfei; Badler, Norman I
    We describe a new organization for virtual human responses to dynamically occurring events. In our approach, behavioral responses are enumerated in the representation of the event itself. These Smart Events inform an agent of plausible actions to undertake. We additionally introduce the notion of agent priming, which is based on psychological concepts and further restricts and simplifies action choice. Priming facilitates multi-dimensional agents and, in combination with Smart Events, results in reasonable, contextual action selection without requiring complex reasoning engines or decision trees. This scheme burdens events with possible behavioral outcomes, reducing agent computation to evaluation of a case expression and (possibly) a probabilistic choice. We demonstrate this approach in a small group scenario of agents reacting to a fire emergency.
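
A minimal sketch of the selection pattern described above, in which the event enumerates the plausible responses per priming and the agent's decision reduces to a case lookup followed by an optional weighted random choice. The event structure, priming categories, actions, and weights are illustrative assumptions, not the paper's actual representation.

```python
import random

# Hypothetical Smart Event: the event itself carries the plausible responses,
# keyed by agent priming, with weights for the probabilistic choice.
FIRE_EVENT = {
    "name": "fire_alarm",
    "responses": {
        # priming -> [(action, weight), ...]
        "safety_primed": [("evacuate_via_exit", 0.8), ("alert_others", 0.2)],
        "task_primed":   [("finish_task_then_leave", 0.5), ("evacuate_via_exit", 0.5)],
        "social_primed": [("follow_group", 0.7), ("alert_others", 0.3)],
    },
    "default": [("evacuate_via_exit", 1.0)],
}

class PrimedAgent:
    def __init__(self, name, priming):
        self.name = name
        self.priming = priming            # e.g. "safety_primed"

    def react(self, event):
        # The "case expression": look up the response set for this priming.
        options = event["responses"].get(self.priming, event["default"])
        actions, weights = zip(*options)
        # The optional probabilistic choice among the enumerated actions.
        return random.choices(actions, weights=weights, k=1)[0]

# Small group reacting to the fire emergency.
group = [PrimedAgent("a1", "safety_primed"),
         PrimedAgent("a2", "task_primed"),
         PrimedAgent("a3", "social_primed")]
for agent in group:
    print(agent.name, "->", agent.react(FIRE_EVENT))
```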
  • Publication
    Animating Synthetic Dyadic Conversations With Variations Based on Context and Agent Attributes
    (2012-02-01) Shoulson, Alexander; Huang, Pengfei; Sun, Libo; Nenkova, Ani; Badler, Norman I; Nelson, Nicole; Qin, Wenhu
    Conversations between two people are ubiquitous in many inhabited contexts. The kinds of conversations that occur depend on several factors, including the time, the location of the participating agents, the spatial relationship between the agents, and the type of conversation in which they are engaged. The statistical distribution of dyadic conversations among a population of agents will therefore depend on these factors. In addition, the conversation types, flow, and duration will depend on agent attributes such as interpersonal relationships, emotional state, personal priorities, and socio-cultural proxemics. We present a framework for distributing conversations among virtual embodied agents in a real-time simulation. To avoid generating actual language dialogues, we express variations in the conversational flow by using behavior trees implementing a set of conversation archetypes. The flow of these behavior trees depends in part on the agents’ attributes and progresses based on parametrically estimated transitional probabilities. With the participating agents’ state, a ‘smart event’ model steers the interchange to different possible outcomes as it executes. Example behavior trees are developed for two conversation archetypes: buyer–seller negotiations and simple asking–answering; the model can be readily extended to others. Because the conversation archetype is known to participating agents, they can animate their gestures appropriate to their conversational state. The resulting animated conversations demonstrate reasonable variety and variability within the environmental context. Copyright © 2012 John Wiley & Sons, Ltd.
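
As a rough illustration of conversation flow driven by agent attributes and transition probabilities, the sketch below flattens one archetype (buyer-seller negotiation) into a small probabilistic state machine rather than a full behavior tree; the states, the patience attribute, and the way it biases transitions are invented for the example and are not the paper's estimated parameters.

```python
import random

# Hypothetical negotiation archetype expressed as states with transition weights.
TRANSITIONS = {
    "greet":        [("inquire", 1.0)],
    "inquire":      [("offer", 1.0)],
    "offer":        [("counteroffer", 0.5), ("accept", 0.3), ("reject", 0.2)],
    "counteroffer": [("offer", 0.6), ("accept", 0.25), ("reject", 0.15)],
}
TERMINAL = {"accept", "reject"}

def next_state(state, patience):
    """Pick the next conversational beat; an impatient buyer is biased toward
    ending the exchange (accept/reject) rather than another round of offers."""
    options = TRANSITIONS[state]
    weights = []
    for target, w in options:
        if target in TERMINAL:
            w *= (2.0 - patience)       # patience in [0, 1]; low patience ends sooner
        weights.append(w)
    targets = [t for t, _ in options]
    return random.choices(targets, weights=weights, k=1)[0]

def run_negotiation(patience, max_beats=20):
    state, trace = "greet", ["greet"]
    while state not in TERMINAL and len(trace) < max_beats:
        state = next_state(state, patience)
        trace.append(state)
    return trace

print(run_negotiation(patience=0.9))    # patient buyer: longer haggling likely
print(run_negotiation(patience=0.1))    # impatient buyer: quick accept/reject
```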
  • Publication
    Sound Localization and Multi-Modal Steering for Autonomous Virtual Agents
    (2014-01-01) Huang, Pengfei; Wang, Yu; Kavan, Ladislav; Kapadia, Mubbasir; Badler, Norman I
    With the increasing realism of interactive applications, there is a growing need for harnessing additional sensory modalities such as hearing. While the synthesis and propagation of sounds in virtual environments has been explored, there has been little work that addresses sound localization and its integration into behaviors for autonomous virtual agents. This paper develops a framework that enables autonomous virtual agents to localize sounds in dynamic virtual environments, subject to distortion effects due to attenuation, reflection and diffraction from obstacles, as well as interference between multiple audio signals. We additionally integrate hearing into standard predictive collision avoidance techniques and couple it with vision to allow agents to react to what they see and hear, while navigating in virtual environments.
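
A minimal sketch of the multi-modal steering idea: a hearing-derived direction (away from a loud localized source) is blended with a vision-based avoidance direction. The inverse-square attenuation, audibility threshold, blend weights, and the flee response are illustrative assumptions; the paper's propagation model additionally handles reflection, diffraction, and interference.

```python
import math

def normalize(v):
    n = math.hypot(v[0], v[1])
    return (v[0] / n, v[1] / n) if n > 1e-9 else (0.0, 0.0)

def heard_intensity(source_amplitude, agent_pos, source_pos):
    """Distance attenuation only; a fuller model would also account for
    reflection, diffraction, and interference between multiple sources."""
    d2 = (agent_pos[0] - source_pos[0]) ** 2 + (agent_pos[1] - source_pos[1]) ** 2
    return source_amplitude / max(d2, 1.0)

def steering(agent_pos, vision_avoid_dir, sound_pos, sound_amplitude,
             w_vision=0.7, w_hearing=0.3, audible_threshold=0.05):
    intensity = heard_intensity(sound_amplitude, agent_pos, sound_pos)
    if intensity < audible_threshold:
        return normalize(vision_avoid_dir)             # sound too faint to react to
    away = normalize((agent_pos[0] - sound_pos[0],     # flee the perceived source
                      agent_pos[1] - sound_pos[1]))
    vis = normalize(vision_avoid_dir)
    blended = (w_vision * vis[0] + w_hearing * away[0],
               w_vision * vis[1] + w_hearing * away[1])
    return normalize(blended)

# An agent avoiding a neighbor ahead while a loud sound is heard to its east.
print(steering(agent_pos=(0.0, 0.0), vision_avoid_dir=(0.0, 1.0),
               sound_pos=(5.0, 0.0), sound_amplitude=50.0))
```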
  • Publication
    SPREAD: Sound Propagation and Perception for Autonomous Agents in Dynamic Environments
    (2013-01-01) Huang, Pengfei; Kapadia, Mubbasir; Badler, Norman I
    The perception of sensory information and its impact on behavior is a fundamental component of being human. While visual perception is considered for navigation, collision, and behavior selection, the acoustic domain is relatively unexplored. Recent work in acoustics focuses on synthesizing sound in 3D environments; however, the perception of acoustic signals by a virtual agent is a useful and realistic adjunct to any behavior selection mechanism. In this paper, we present SPREAD, a novel agent-based sound perception model using a discretized sound packet representation with acoustic features including amplitude, frequency range, and duration. SPREAD simulates how sound packets are propagated, attenuated, and degraded as they traverse the virtual environment. Agents perceive and classify the sounds based on the locally-received packet set using a hierarchical clustering scheme, and have individualized hearing and understanding of their surroundings. Using this model, we demonstrate several simulations that greatly enrich controls and outcomes.
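
A minimal sketch of a discretized sound-packet representation in the spirit of SPREAD: a packet carries amplitude, a frequency band, and a duration, and is attenuated and degraded as it crosses grid cells before an agent tests it against its individual hearing threshold. The attenuation factors, the frequency-damping rule, and the threshold are illustrative assumptions, and the hierarchical-clustering classification step is omitted.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SoundPacket:
    amplitude: float          # arbitrary loudness units
    freq_lo: float            # Hz, lower bound of the band
    freq_hi: float            # Hz, upper bound of the band
    duration: float           # seconds

# Hypothetical per-cell attenuation multipliers for different cell types.
ATTENUATION = {"open": 0.95, "wall": 0.30, "door": 0.70}

def propagate(packet, cells_on_path):
    """Attenuate a packet along a path of grid cells; the upper edge of the
    frequency band is damped faster than the lower edge (an assumed rule)."""
    for cell in cells_on_path:
        k = ATTENUATION.get(cell, 0.95)
        packet = replace(packet,
                         amplitude=packet.amplitude * k,
                         freq_hi=max(packet.freq_lo, packet.freq_hi * (0.9 + 0.1 * k)))
    return packet

def perceive(agent_threshold, packet):
    """An agent hears the packet only if the attenuated amplitude clears its
    individual hearing threshold."""
    return packet.amplitude >= agent_threshold

siren = SoundPacket(amplitude=10.0, freq_lo=500.0, freq_hi=1500.0, duration=2.0)
heard = propagate(siren, ["open", "open", "door", "open", "wall"])
print(heard)
print("audible:", perceive(agent_threshold=0.5, packet=heard))
```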