Where To Look? Automating Some Visual Attending Behaviors of Human Characters

Penn collection
IRCS Technical Reports Series
Abstract

We propose a method for automatically generating appropriate attentional (eye gaze, or looking) behavior for virtual characters inhabiting, or performing tasks in, a dynamically changing environment. Such behavior is expected of human-like characters, but it is usually tedious to animate and often not specified at all as part of the character's explicit actions. In our system, referred to as AVA (Automated Visual Attending), users enter a list of motor or cognitive actions as input in text format (e.g., walk to the lamp post, monitor the traffic light, reach for the box). The system generates the appropriate motions and automatically produces the corresponding attentional behavior. The resulting gaze behavior is derived not only from the explicit queue of required tasks, but also by factoring in involuntary visual functions known from human cognitive behavior (attentional capture by exogenous factors, spontaneous looking), the environment being viewed, task interactions, and task load. The method can be adapted to eye- and head-movement control for any facial model.
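The abstract implies a priority ordering: salient exogenous events capture gaze involuntarily, the current explicit task otherwise dictates the fixation target, and spare attention (more available under low task load) occasionally wanders to other scene objects. Below is a minimal Python sketch of such a scheduler, offered only as an illustration of that ordering; every name and threshold in it (Task, Stimulus, schedule_gaze, capture_threshold, spontaneous_rate) is an assumption, not the paper's actual AVA interface.

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    """An explicit motor or cognitive action, e.g. 'walk to the lamp post'."""
    name: str
    target: str       # object the eyes fixate while this task runs
    duration: float   # seconds the task occupies the character
    load: float = 0.5 # task load in [0, 1]; higher load suppresses free looking

@dataclass
class Stimulus:
    """An exogenous event (sudden motion, a sound) that may capture attention."""
    target: str
    salience: float   # in [0, 1]; compared against the capture threshold

def schedule_gaze(tasks, stimuli_by_time, dt=0.1, capture_threshold=0.6,
                  spontaneous_rate=0.05, scene_objects=()):
    """Return a list of (time, gaze_target) fixations.

    Priority at each step: exogenous capture by salient stimuli, then the
    current explicit task's target, then occasional spontaneous looking at
    arbitrary scene objects when task load leaves attention to spare.
    """
    fixations = []
    t = 0.0
    for task in tasks:
        end = t + task.duration
        while t < end:
            now = round(t, 1)
            captured = [s for s in stimuli_by_time.get(now, [])
                        if s.salience >= capture_threshold]
            if captured:
                # Involuntary capture: look at the most salient stimulus.
                gaze = max(captured, key=lambda s: s.salience).target
            elif scene_objects and random.random() < spontaneous_rate * (1.0 - task.load):
                # Spontaneous looking, more likely under low task load.
                gaze = random.choice(list(scene_objects))
            else:
                # Default: fixate the target required by the explicit task.
                gaze = task.target
            fixations.append((now, gaze))
            t += dt
    return fixations

# Demo with two of the abstract's example tasks and one distracting event.
tasks = [Task("walk to the lamp post", "lamp post", 3.0, load=0.4),
         Task("monitor the traffic light", "traffic light", 2.0, load=0.8)]
stimuli = {1.2: [Stimulus("passing car", salience=0.9)]}
for time, target in schedule_gaze(tasks, stimuli, scene_objects=["bench", "shop window"]):
    print(f"{time:4.1f}s -> look at {target}")
```

The fixed capture threshold and Poisson-like spontaneous-glance rate are placeholders; the paper's model additionally conditions gaze on the viewed environment and on interactions between concurrent tasks.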

Publication date
1998-06-01
Comments
University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-98-17 (Dissertation Proposal).