Communication and coarticulation in facial animation
Our goal is to produce a high-level programming language or tool for 3D animation of facial expressions, especially those conveying information correlated with the intonation of the voice: this includes the differences of timing, pitch, and emphasis that are related to such semantic distinctions of discourse as "given" and "new" information, some of which are also correlated with affect or emotion. Until now, systems have not embodied such a rule-governed translation from speech and utterance meaning to facial expressions. Our algorithm embodies rules that describe and coordinate these relations (intonation/information, intonation/emotion, and facial expression/emotion). Given an utterance, we consider how the discourse information (what is new or old information in the given context, or what is the "topic" of the discourse) is transmitted through the choice and placement of accents, how it is conveyed through facial expression, and how the two are coordinated. The facial model integrates actions at several levels, including individual muscles, groups of muscles, and eye and head motion, as well as the propagation of and interaction between these movements, especially coarticulation effects. This study offers a higher-level representation of facial actions by grouping them into specialized functions (lip shapes for phonemes, eyebrow and head motions as emphatic movements). The key contributions of this work are the integration of FACS (the Facial Action Coding System developed by P. Ekman and W. Friesen) and its Action Units (muscle actions); a solution to lip synchronization; a repertory of the different types of facial expressions involved in speech; and a treatment of speaker/listener interaction. This representation is used to drive an animation system linked to facial motion.
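The ideas of mapping phonemes to muscle-level Action Units and letting neighboring sounds influence each other's lip shapes can be illustrated with a minimal sketch. This is not the dissertation's actual rule system; the viseme table, AU numbers, and the simple neighbor-weighting scheme below are illustrative assumptions only.

```python
# Illustrative sketch (not the dissertation's actual coarticulation rules):
# phonemes map to FACS-style Action Unit intensities, and a fixed-weight
# influence from adjacent phonemes approximates coarticulation by blending
# each frame with its neighbors' lip-shape targets.

# Hypothetical viseme table: AU intensities in [0, 1] per phoneme.
VISEMES = {
    "p": {"AU24": 1.0},                 # lip press
    "a": {"AU26": 0.8, "AU27": 0.4},    # jaw drop, mouth stretch
    "u": {"AU18": 0.9},                 # lip pucker
}

def blend(phonemes, weight=0.3):
    """Return one AU frame per phoneme, blending in neighbors at `weight`."""
    frames = []
    for i in range(len(phonemes)):
        frame = {}
        # Accumulate the phoneme's own targets plus weighted neighbor targets,
        # keeping the maximum contribution per Action Unit.
        for j, w in ((i, 1.0), (i - 1, weight), (i + 1, weight)):
            if 0 <= j < len(phonemes):
                for au, v in VISEMES[phonemes[j]].items():
                    frame[au] = max(frame.get(au, 0.0), w * v)
        frames.append(frame)
    return frames

frames = blend(["p", "a", "u"])
# The middle frame keeps its own jaw-drop target while showing a weak
# lip press carried over from "p" and anticipating the pucker of "u".
```

A fuller model would make the influence window asymmetric and phoneme-dependent (e.g. rounding spreads further than jaw opening), which is closer in spirit to the rule-based coarticulation the abstract describes.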
Pelachaud, Catherine Rose Emma, "Communication and coarticulation in facial animation" (1991). Dissertations available from ProQuest. AAI9211983.