ANIMATED CONVERSATION: Rule-Based Generation of Facial Expression, Gesture & Spoken Intonation for Multiple Conversational Agents
Penn collection
Technical Reports (CIS)
Subject
Computer Sciences
Author
Cassell, Justine
Pelachaud, Catherine
Steedman, Mark
Achorn, Brett
Becket, Tripp
Douville, Brett
Prevost, Scott
Stone, Matthew
Abstract
We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we will use examples from an actual synthesized, fully animated conversation.
Publication date
1994-05-01
Comments
University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-94-26.