Where to Look? Automating Certain Visual Attending Behaviors of Human Characters
Abstract
This thesis proposes a computational framework for generating visual attending behavior in an embodied simulated human agent. Such behaviors directly control eye and head motions and guide other actions such as locomotion and reach. The implementation of these concepts, referred to as the AVA, draws on empirical and qualitative observations from psychology, human factors, and computer vision. Deliberate behaviors, the analogs of scanpaths in visual psychology, compete with involuntary attention capture and with lapses into idling or free viewing. For efficiency, the embodied agent is assumed to have access to certain properties of the 3D world (scene graph) stored in the graphical environment. When information about a task is known, the scene graph is queried. When the agent lapses into free viewing or idling, no task constraints are active, so a simplified image analysis technique is employed to select potential directions of interest. Insights provided by implementing this framework are: a defined set of parameters that influence the observable effects of attention; a defined vocabulary of looking behaviors for certain motor and cognitive activities; a defined hierarchy of three levels of eye behavior (endogenous, exogenous, and idling) together with a proposed method for how these types interact; a technique for modifying motor activity based on visual inputs; and a technique that allows anticipation and interleaving of eye behaviors for sequential motor actions. AVA-generated behavior is emergent and responds to environment context and dynamics. Further, this method animates behavior at interactive rates. Experiments covering several combinations of environment and attending conditions are demonstrated, followed by a discussion of an evaluation of AVA effectiveness.
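To make the three-level hierarchy described above more concrete, the sketch below shows one plausible way an arbitration loop over endogenous (task-driven), exogenous (attention-capture), and idling behaviors could be organized. This is a minimal illustration only, not the AVA implementation itself; the names (GazeTarget, AttentionController, capture_threshold) are hypothetical and the thresholding scheme is an assumption.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GazeTarget:
    """A direction of interest, expressed as a point in the scene."""
    position: tuple   # (x, y, z) in world coordinates
    priority: float   # higher values win arbitration


@dataclass
class AttentionController:
    """Toy arbiter over the three behavior levels sketched in the abstract."""
    task_queue: List[GazeTarget] = field(default_factory=list)  # endogenous (task-driven) targets
    capture_threshold: float = 0.8                               # salience needed for exogenous capture

    def select_target(self, salient: Optional[GazeTarget]) -> GazeTarget:
        # Exogenous: a sufficiently salient stimulus interrupts deliberate looking.
        if salient is not None and salient.priority >= self.capture_threshold:
            return salient
        # Endogenous: follow the scanpath implied by the current task, if any.
        if self.task_queue:
            return self.task_queue.pop(0)
        # Idling / free viewing: fall back to a default (or image-derived) direction.
        return GazeTarget(position=(0.0, 0.0, 1.0), priority=0.0)


# Example: a task-driven target is preempted by a strong exogenous stimulus.
controller = AttentionController(task_queue=[GazeTarget((1.0, 1.5, 2.0), 0.5)])
distractor = GazeTarget((-2.0, 1.0, 0.5), 0.9)
print(controller.select_target(distractor).position)  # -> (-2.0, 1.0, 0.5)
```

In this toy arrangement the exogenous check precedes the task queue, mirroring the abstract's claim that deliberate scanpath-like behaviors compete with involuntary attention capture, with idling serving as the fallback when neither applies.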