Human Task Animation from Performance Models and Natural Language Input

Penn collection
Center for Human Modeling and Simulation
Author
Esakov, Jeffrey
Jung, Moon Ryul
Abstract

Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
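The abstract notes that task timing is derived from performance models such as Fitts' Law when available. As a minimal illustrative sketch, the Shannon formulation of Fitts' Law predicts movement time as MT = a + b * log2(D/W + 1); the coefficient values below are hypothetical placeholders, not values from the paper, since in practice they are fit to measured human performance data:

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Estimate movement time (seconds) for a reach via Fitts' Law.

    Shannon formulation: MT = a + b * log2(D/W + 1), where D is the
    distance to the target and W is the target width (same units).
    The coefficients a and b are illustrative placeholders; real
    values come from regression against observed reach times.
    """
    index_of_difficulty = math.log2(distance / width + 1)
    return a + b * index_of_difficulty

# Example: a 40 cm reach to a 4 cm target.
mt = fitts_movement_time(40.0, 4.0)
```

A scheduler driving the animation could use such a predicted duration to assign start and end frames to each reach action, so that simulated task times reflect human performance capabilities rather than arbitrary animation speeds.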

Date of presentation
1989-04-01
Conference name
Graphics Technology in Space Applications
Comments
Printed with permission. Presented at Graphics Technology in Space Applications, NASA JSC Conference Publication 3045, April 1989.