Facial animation system with realistic eye movement based on a cognitive model for virtual agents
For an animated human face model to appear natural, it should exhibit eye movements consistent with human ocular behavior. During face-to-face conversation, the eyes convey conversational turn-taking and the agent's thought processes through gaze direction, saccades, and scan patterns. We have implemented an eye movement model based on empirical models of eye saccades and on statistical models of eye-tracking data. First, we analyze a sequence of eye-tracking images to extract the spatio-temporal trajectory of the eye movement. The eye-tracking video is then segmented and classified into three modes: talking, listening, and thinking, so that an eye saccade model can be constructed for each mode. The models reflect the dynamic characteristics of natural eye movement, including saccade magnitude, direction, duration, velocity, and inter-saccadic interval. Based on these models, we synthesized a face character with more natural-looking and believable eye movement. In experiments, models with stationary eyes, with random saccades only, and with both saccades and face-scanning patterns were evaluated for naturalness and communication effectiveness.
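The per-mode statistical saccade model described above can be sketched as follows. This is a minimal illustration, not the dissertation's implementation: the per-mode parameter values, the exponential stand-ins for the empirical magnitude and interval histograms, and the uniform direction choice are all assumptions; only the linear magnitude-to-duration relation reflects a commonly cited empirical "main sequence" fit.

```python
import random

# Hypothetical per-mode parameters (illustrative values, NOT taken from the
# dissertation's eye-tracking statistics). Each conversational mode biases
# how large and how frequent saccades are.
MODE_PARAMS = {
    "talking":   {"mean_magnitude_deg": 10.0, "mean_interval_ms": 900.0},
    "listening": {"mean_magnitude_deg": 12.0, "mean_interval_ms": 600.0},
    "thinking":  {"mean_magnitude_deg": 15.0, "mean_interval_ms": 500.0},
}

# Eight principal directions; a per-mode direction histogram would replace
# this uniform choice in a full statistical model.
DIRECTIONS_DEG = [0, 45, 90, 135, 180, 225, 270, 315]

def sample_saccade(mode, rng=random):
    """Draw one saccade (magnitude, direction, duration, interval) for a mode."""
    p = MODE_PARAMS[mode]
    # Magnitude: exponential distribution as a simple stand-in for the
    # empirical magnitude histogram.
    magnitude = rng.expovariate(1.0 / p["mean_magnitude_deg"])
    direction = rng.choice(DIRECTIONS_DEG)
    # Duration from the saccadic "main sequence": approximately linear in
    # magnitude (about 2.2 ms per degree plus 21 ms is a common fit).
    duration_ms = 2.2 * magnitude + 21.0
    # Inter-saccadic interval: exponential stand-in for the measured
    # interval distribution of this mode.
    interval_ms = rng.expovariate(1.0 / p["mean_interval_ms"])
    return {
        "magnitude_deg": magnitude,
        "direction_deg": direction,
        "duration_ms": duration_ms,
        "interval_ms": interval_ms,
    }

def sample_gaze_sequence(mode, n, seed=0):
    """Generate a reproducible sequence of n saccades for the given mode."""
    rng = random.Random(seed)
    return [sample_saccade(mode, rng) for _ in range(n)]
```

An animation layer would consume such a sequence by holding gaze for `interval_ms`, then rotating the eyes `magnitude_deg` toward `direction_deg` over `duration_ms`, switching parameter sets whenever the agent's conversational mode changes.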
Lee, Sooha Park, "Facial animation system with realistic eye movement based on a cognitive model for virtual agents" (2002). Dissertations available from ProQuest. AAI3073025.