Center for Human Modeling and Simulation

Document Type

Conference Paper

Date of this Version

June 1991

Comments

Copyright 1991 IEEE. Reprinted from Computer Animation '91, pages 15-30.

This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Abstract

Our goal is to build a system of 3D animation of facial expressions of emotion correlated with the intonation of the voice. Until now, existing systems have not taken into account the link between these two features. We will look at the rules that control these relations (intonation/emotions and facial expressions/emotions), as well as the coordination of these various modes of expression. Given an utterance, we consider how the messages (what is new/old information in the given context) transmitted through the choice of accents and their placement are conveyed through the face. The facial model integrates the action of each muscle or group of muscles, as well as the propagation of the muscles' movement. Our first step will be to enumerate and differentiate facial movements linked to emotions from those linked to conversation. Then we will examine the rules that drive them and how their different functions interact.

Keywords

facial animation, emotion, intonation, coarticulation, conversational signals


Date Posted: 31 July 2007

This document has been peer reviewed.