Center for Human Modeling and Simulation

Document Type

Conference Paper

Date of this Version

December 1991

Comments

Copyright 1991 IEEE. Reprinted from Proceedings of the Winter Simulation Conference, December 1991, pages 1049-1057.
Publisher URL: http://dx.doi.org/10.1109/WSC.1991.185723

This material is posted here with permission of the IEEE. Such permission does not in any way imply IEEE endorsement of any of the University of Pennsylvania's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to pubs-permissions@ieee.org. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.

Abstract

We report a simple natural language interface to a human task simulation system that graphically displays the performance of goal-directed tasks by an agent in a workspace. The inputs to the system are simple natural language commands requiring achievement of spatial relationships among objects in the workspace. To animate the behaviors denoted by instructions, a semantics of action verbs and locative expressions is devised in terms of physically based components, in particular geometric or spatial relations among the relevant objects. To generate human body motions that achieve such geometric goals, motion strategies and a planner that uses them are devised. The basic idea of the motion strategies is to use commonsense geometric relationships to determine appropriate body motions. Motion strategies for a given goal specify possibly overlapping subgoals for the relevant body parts in such a way that achieving the subgoals achieves the goal without collision with objects in the workspace. A motion plan generated using the motion strategies is essentially a chart of temporally overlapping goal conditions for the relevant body parts. This motion plan is animated by sending it to a human motion controller, which incrementally finds joint angles of the agent's body that satisfy the goal conditions in the motion plan and displays the body configurations determined by those joint angles.
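To make the planning idea concrete, the following is a minimal illustrative sketch, not code from the paper: it represents a motion plan as a chart of temporally overlapping goal conditions for body parts and steps an incremental controller over it. All names here (Subgoal, solve_joint_angles, display, animate) are hypothetical stand-ins for the system's actual components.

```python
from dataclasses import dataclass

@dataclass
class Subgoal:
    body_part: str   # e.g. "right_arm"
    condition: str   # geometric goal condition, e.g. "reach(cup)"
    start: float     # time interval over which the condition
    end: float       # is active and must be satisfied

# A motion plan: goal conditions whose intervals may overlap in time.
plan = [
    Subgoal("torso",      "lean_toward(table)", 0.0, 2.0),
    Subgoal("right_arm",  "reach(cup)",         0.5, 2.0),  # overlaps torso motion
    Subgoal("right_hand", "grasp(cup)",         1.8, 2.5),
]

def solve_joint_angles(active_goals):
    # Stub: a real controller would run inverse kinematics here to find
    # joint angles satisfying all currently active goal conditions.
    return {g.body_part: 0.0 for g in active_goals}

def display(angles):
    # Stub: a real system would render the body configuration graphically.
    print(angles)

def animate(plan, dt=0.5):
    # At each time step, gather the temporally overlapping goal conditions
    # that are currently active, solve for joint angles incrementally,
    # and display the resulting body configuration.
    t, horizon = 0.0, max(g.end for g in plan)
    while t <= horizon:
        active = [g for g in plan if g.start <= t <= g.end]
        display(solve_joint_angles(active))
        t += dt

animate(plan)
```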

Date Posted: 26 July 2007