Animating Synthetic Dyadic Conversations With Variations Based on Context and Agent Attributes
Discipline
Computer Sciences
Engineering
Graphics and Human Computer Interfaces

Subject
agent attributes
smart event
behavior trees
crowd simulation
Abstract
Conversations between two people are ubiquitous in many inhabited contexts. The kinds of conversations that occur depend on several factors, including the time, the location of the participating agents, the spatial relationship between them, and the type of conversation in which they are engaged. The statistical distribution of dyadic conversations among a population of agents therefore depends on these factors. In addition, the conversation types, flow, and duration depend on agent attributes such as interpersonal relationships, emotional state, personal priorities, and socio-cultural proxemics. We present a framework for distributing conversations among virtual embodied agents in a real-time simulation. To avoid generating actual language dialogues, we express variations in conversational flow using behavior trees that implement a set of conversation archetypes. The flow through these behavior trees depends in part on the agents’ attributes and progresses according to parametrically estimated transitional probabilities. Using the participating agents’ state, a ‘smart event’ model steers the interchange toward different possible outcomes as it executes. Example behavior trees are developed for two conversation archetypes, buyer–seller negotiations and simple asking–answering; the model can be readily extended to others. Because the conversation archetype is known to the participating agents, they can animate gestures appropriate to their conversational state. The resulting animated conversations demonstrate reasonable variety and variability within the environmental context. Copyright © 2012 John Wiley & Sons, Ltd.
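To make the mechanism concrete, the sketch below shows a behavior tree whose branching is sampled from attribute-dependent transition weights, in the spirit of the buyer–seller archetype named in the abstract. It is a minimal illustration, not the authors' implementation: all class names, agent attributes, weight functions, and action labels are assumptions introduced here for exposition.

```python
import random
from dataclasses import dataclass

# Hypothetical agent attributes; names and value ranges are illustrative
# assumptions, not the paper's actual attribute set.
@dataclass
class Agent:
    name: str
    patience: float = 0.5      # 0..1: willingness to keep negotiating
    friendliness: float = 0.5  # 0..1: biases the seller toward conceding

class Node:
    """Base class for behavior-tree nodes; tick() returns success/failure."""
    def tick(self, buyer: Agent, seller: Agent) -> bool:
        raise NotImplementedError

class Action(Node):
    """Leaf: stands in for playing a gesture/utterance animation."""
    def __init__(self, label: str):
        self.label = label
    def tick(self, buyer: Agent, seller: Agent) -> bool:
        print(self.label)
        return True

class Sequence(Node):
    """Composite: runs children in order, failing on the first failure."""
    def __init__(self, *children: Node):
        self.children = children
    def tick(self, buyer: Agent, seller: Agent) -> bool:
        return all(child.tick(buyer, seller) for child in self.children)

class ProbabilisticChoice(Node):
    """Composite: samples one child using transition weights computed from
    the agents' attributes -- a stand-in for the paper's parametrically
    estimated transitional probabilities."""
    def __init__(self, *weighted_children):
        self.weighted_children = weighted_children  # (weight_fn, node) pairs
    def tick(self, buyer: Agent, seller: Agent) -> bool:
        weights = [fn(buyer, seller) for fn, _ in self.weighted_children]
        chosen = random.choices([n for _, n in self.weighted_children],
                                weights=weights)[0]
        return chosen.tick(buyer, seller)

# Toy buyer-seller negotiation archetype.
haggle = Sequence(
    Action("buyer: greets seller and points at item"),
    Action("seller: quotes a price"),
    ProbabilisticChoice(
        # Patient buyers (and friendly sellers) tend to reach a deal.
        (lambda b, s: b.patience * (0.5 + 0.5 * s.friendliness),
         Sequence(Action("buyer: makes a counter-offer"),
                  Action("seller: concedes slightly"),
                  Action("both: shake hands (deal)"))),
        # Impatient buyers walk away.
        (lambda b, s: 1.0 - b.patience,
         Action("buyer: shrugs and leaves (no deal)")),
    ),
)

haggle.tick(Agent("buyer", patience=0.7), Agent("seller", friendliness=0.8))
```

Under this reading, a 'smart event' would own a tree like `haggle` and re-evaluate the weight functions each tick from the participating agents' current state, so the same archetype yields different outcomes across a population without any language generation.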