Center for Human Modeling and Simulation

Document Type

Conference Paper

Date of this Version

October 1998

Comments

Copyright 1998 IEEE. Reprinted from Sixth Pacific Conference on Computer Graphics and Applications, 1998, pages 161-168. Publisher URL: http://dx.doi.org/10.1109/PCCGA.1998.732100

This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Abstract

Gesture and speech are two very important behaviors for virtual humans. They are not isolated from each other but are generally employed simultaneously in the service of the same intention. An underlying PaT-Net parallel finite-state machine may be used to coordinate both. Gesture selection is not arbitrary: typical movements correlated with specific textual elements are used to select and produce gesticulation online, which enhances the expressiveness of speaking virtual humans.
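
The coordination described in the abstract can be pictured as two finite-state machines stepped in parallel and synchronized on the words of an utterance, with gesture selection keyed to specific textual elements. The sketch below is illustrative only and is not the authors' PaT-Net implementation; the state names, the gesture lexicon, and the word-level synchronization are assumptions made for the example.

    # Minimal sketch: a speech net and a gesture net advanced in lock step.
    # The gesture net selects a movement when the current word matches an
    # entry in a (hypothetical) word-to-gesture lexicon.

    GESTURE_LEXICON = {
        "this": "deictic_point",
        "huge": "iconic_spread",
        "no": "beat_emphatic",
    }

    class SpeechNet:
        def __init__(self, words):
            self.words = list(words)
            self.state = "speaking" if self.words else "done"

        def step(self):
            # Emit the next word, or None when the utterance is finished.
            if not self.words:
                self.state = "done"
                return None
            return self.words.pop(0)

    class GestureNet:
        def __init__(self):
            self.state = "rest"

        def step(self, word):
            # Transition on the current word; return a gesture to perform, if any.
            gesture = GESTURE_LEXICON.get(word)
            self.state = "gesturing" if gesture else "rest"
            return gesture

    def run_parallel(utterance):
        speech, gestures = SpeechNet(utterance.split()), GestureNet()
        while speech.state != "done":
            word = speech.step()
            if word is None:
                break
            print(word, "->", gestures.step(word) or "(no gesture)")

    run_parallel("no this room is huge")

Running the sketch prints each word alongside the gesture (if any) selected for it, illustrating how speech production and gesture selection proceed in parallel under a shared clock.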

Keywords

virtual human, agent, avatar, gesture, posture, PaT-Nets


Date Posted: 12 July 2007

This document has been peer reviewed.