Center for Human Modeling and Simulation

Document Type

Journal Article

Date of this Version

7-5-1996

Publication Source

Lecture Notes in Computer Science

Volume

1374

Start Page

68

Last Page

88

DOI

10.1007/BFb0052313

Abstract

We describe an implemented system which automatically generates and animates conversations between multiple human-like agents with appropriate and synchronized speech, intonation, facial expressions, and hand gestures. Conversations are created by a dialogue planner that produces the text as well as the intonation of the utterances. The speaker/listener relationship, the text, and the intonation in turn drive facial expressions, lip motions, eye gaze, head motion, and arm gesture generators.

Copyright/Permission Statement

The final publication is available at Springer via http://dx.doi.org/10.1007/BFb0052313


Date Posted: 13 January 2016

This document has been peer reviewed.