Building Parameterized Action Representations From Observation
Abstract
Virtual worlds may be inhabited by intelligent agents who interact by performing various simple and complex actions. If the agents are human-like (embodied), their actions may be generated from motion capture or procedural animation. In this thesis, we introduce the CaPAR interactive system, which combines both approaches to generate agent-size-neutral representations of actions within the Parameterized Action Representation (PAR) framework. Just as a person may learn a new complex physical task by observing another person perform it, our system observes a single trial of a human performing a complex task that involves interaction with the performer's own body or with objects in the environment, and automatically generates semantically rich information about the action. This information can be used to generate similar constrained motions for agents of different sizes. Human movement is captured by electromagnetic sensors. By computing motion zero-crossings and geometric spatial proximities, we isolate significant events, abstract both spatial and visual constraints from an agent's action, and segment a given complex action into several simpler subactions. We analyze each subaction independently and build an individual PAR for it; several PARs can then be combined into one complex PAR representing the original activity. Within each motion segment, semantic and style information is extracted. The style information is used to reproduce the same constrained motion in differently sized virtual agents by copying the end-effector velocity profile, by following a similar end-effector trajectory, or by scaling and mapping force interactions between the agent and an object. The semantic information is stored in a PAR, while the extracted style and constraint information is stored in the corresponding agent and object models.
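The segmentation step described above, isolating significant events at motion zero-crossings and cutting the complex action into simpler subactions, can be illustrated with a minimal sketch. The function name, the NumPy finite differencing, and the speed threshold below are assumptions for illustration only; the thesis's actual event criteria over electromagnetic-sensor data may differ.

```python
import numpy as np

def segment_by_zero_crossings(positions, dt, speed_eps=1e-3):
    """Split an end-effector trajectory (N x 3 positions sampled every dt
    seconds) into candidate subaction segments at motion zero-crossings."""
    velocity = np.gradient(positions, dt, axis=0)   # finite-difference velocity
    speed = np.linalg.norm(velocity, axis=1)        # scalar speed profile

    # Zero-crossings of the speed derivative mark peaks and valleys of the
    # motion; valleys where the effector nearly stops are treated as
    # boundaries between simpler subactions.
    accel = np.gradient(speed, dt)
    crossings = np.where(np.diff(np.sign(accel)) != 0)[0] + 1
    boundaries = [i for i in crossings if speed[i] < speed_eps]

    cuts = [0] + boundaries + [len(positions) - 1]
    return list(zip(cuts[:-1], cuts[1:]))           # (start, end) frame pairs
```

Similarly, each recovered subaction's PAR, together with its extracted style cues, could be held in a small nested structure whose children reconstruct the original complex activity. The field names here are hypothetical, not the thesis's actual PAR schema.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class StyleInfo:
    """Style cues extracted from one motion segment and reused to retarget
    the motion to a differently sized agent (illustrative fields only)."""
    velocity_profile: List[float] = field(default_factory=list)   # end-effector speed over time
    trajectory: List[Tuple[float, float, float]] = field(default_factory=list)  # sampled path
    interaction_forces: List[float] = field(default_factory=list)  # scaled agent-object forces

@dataclass
class PAR:
    """One Parameterized Action Representation node: a (sub)action with its
    participants, constraints, style information, and child PARs."""
    name: str
    agent: str
    objects: List[str] = field(default_factory=list)
    constraints: List[str] = field(default_factory=list)
    style: Optional[StyleInfo] = None
    subactions: List["PAR"] = field(default_factory=list)
```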