The last few years have seen great maturation in the computation speed and control methods needed to portray 3D virtual humans suitable for real interactive applications. We first describe the state of the art, then focus on the particular approach taken at the University of Pennsylvania with the Jack system. Various aspects of real-time virtual humans are considered, such as appearance and motion, interactive control, autonomous action, gesture, attention, locomotion, and multiple individuals. The underlying architecture consists of a sense-control-act structure that permits reactive behaviors to be locally adaptive to the environment, and a PaT-Net parallel finite-state machine controller that can be used to drive virtual humans through complex tasks. We then argue for a deep connection between language and animation and describe current efforts in linking them through two systems: the Jack Presenter and the JackMOO extension to LambdaMOO. Finally, we outline a Parameterized Action Representation for mediating between language instructions and animated actions.
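The abstract mentions a PaT-Net parallel finite-state machine controller driving virtual humans through complex tasks. A minimal sketch of that idea, assuming a toy scheduler that ticks several independent finite-state machines over a shared world state (all class and variable names here are hypothetical illustrations, not the actual Jack or PaT-Net API):

```python
class FSM:
    """One finite-state machine: each state maps to a rule that
    acts on the world and returns the next state (None = finished)."""
    def __init__(self, name, start, table):
        self.name = name
        self.state = start
        self.table = table
        self.done = False

    def step(self, world):
        if self.done:
            return
        next_state = self.table[self.state](world)
        if next_state is None:
            self.done = True
        else:
            self.state = next_state


class ParallelScheduler:
    """Advance every active FSM by one transition per tick,
    so behaviors run concurrently over the same world state."""
    def __init__(self, nets):
        self.nets = list(nets)

    def run(self, world, max_ticks=100):
        for _ in range(max_ticks):
            active = [n for n in self.nets if not n.done]
            if not active:
                break
            for net in active:
                net.step(world)


# Example: a "walk" net and a "gesture" net proceed in parallel.
world = {"position": 0, "target": 3, "waves": 0}

walk = FSM("walk", "walking", {
    # advance one step; keep walking until the target is reached
    "walking": lambda w: (w.update(position=w["position"] + 1) or
                          ("walking" if w["position"] < w["target"] else None)),
})
gesture = FSM("gesture", "waving", {
    # wave twice, then stop
    "waving": lambda w: (w.update(waves=w["waves"] + 1) or
                         ("waving" if w["waves"] < 2 else None)),
})

ParallelScheduler([walk, gesture]).run(world)
print(world["position"], world["waves"])  # → 3 2
```

The interleaved ticking is the key point: neither behavior blocks the other, which is the property that lets reactive behaviors stay locally adaptive while a higher-level net sequences complex tasks.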
Virtual humans, human modeling, computer animation, virtual reality, autonomous agents, language and action, computer graphics.
Date Posted: 11 July 2007
This document has been peer reviewed.