Technical Reports (CIS)
Document Type
Technical Report
Date of this Version
5-1-1994
Abstract
We describe an implemented system that automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators. Coordinated arm, wrist, and hand motions are invoked to create semantically meaningful gestures. Throughout, we use examples from an actual synthesized, fully animated conversation.
Recommended Citation
Justine Cassell, Catherine Pelachaud, Norman I. Badler, Mark Steedman, Brett Achorn, Tripp Becket, Brett Douville, Scott Prevost, and Matthew Stone, "ANIMATED CONVERSATION: Rule-Based Generation of Facial Expression, Gesture & Spoken Intonation for Multiple Conversational Agents", May 1994.
Date Posted: 30 July 2007
Comments
University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-94-26.