This paper presents a model for automatically producing prosodically appropriate speech and corresponding facial expressions for agents that respond to simple database queries in a 3D graphical representation of the world. This work addresses two major issues in human-machine interaction. First, proper intonation is necessary for conveying information structure, including important distinctions of contrast and focus. Second, facial expressions and lip movements often provide additional information about discourse structure, turn-taking protocols, and speaker attitudes.
Prevost, S., & Pelachaud, C. (1994). Sight and Sound: Generating Facial Expressions and Spoken Intonation from Context. Retrieved from https://repository.upenn.edu/hms/40
Date Posted: 18 July 2007
This document has been peer reviewed.