Graphical manipulation of human figures is essential for certain types of human factors analyses such as reach, clearance, fit, and view. In many situations, however, the animation of simulated people performing various tasks may be based on more complicated functions involving multiple simultaneous reaches, critical timing, resource availability, and human performance capabilities. One rather effective means for creating such a simulation is through a natural language description of the tasks to be carried out. Given an anthropometrically-sized figure and a geometric workplace environment, various simple actions such as reach, turn, and view can be effectively controlled from language commands or standard NASA checklist procedures. The commands may also be generated by external simulation tools. Task timing is determined from actual performance models, if available, such as strength models or Fitts' Law. The resulting action specifications are animated on a Silicon Graphics Iris workstation in real-time.
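The task-timing idea mentioned above — deriving an action's duration from a performance model such as Fitts' Law — can be sketched in a few lines. This is an illustrative Python snippet, not the paper's implementation; the function name and the coefficients `a` and `b` are hypothetical placeholders (in practice they are fit empirically to the limb and device being modeled).

```python
import math

def fitts_movement_time(distance, width, a=0.05, b=0.1):
    """Estimate reach duration (seconds) with Fitts' Law:
    MT = a + b * log2(2D / W),
    where D is the distance to the target and W is its width.
    a and b here are hypothetical constants; real values are
    fit from measured human performance data."""
    index_of_difficulty = math.log2(2.0 * distance / width)
    return a + b * index_of_difficulty

# Example: a 0.4 m reach to a 0.05 m wide target
# gives an index of difficulty of log2(16) = 4 bits.
duration = fitts_movement_time(0.4, 0.05)
```

An animation system in the spirit described above could use such a duration to schedule keyframes for each commanded reach, so that simultaneous actions with different difficulties complete at realistic, independently computed times.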
Esakov, J., Badler, N. I., & Jung, M. (1989). Human Task Animation from Performance Models and Natural Language Input. Retrieved from https://repository.upenn.edu/hms/108
Date Posted: 14 September 2007