Simulating Human Tasks Using Simple Natural Language Instructions
Abstract
We report a simple natural language interface to a human task simulation system that graphically displays the performance of goal-directed tasks by an agent in a workspace. The inputs to the system are simple natural language commands requiring the achievement of spatial relationships among objects in the workspace. To animate the behaviors denoted by the instructions, a semantics of action verbs and locative expressions is devised in terms of physically based components, in particular geometric or spatial relations among the relevant objects. To generate human body motions that achieve such geometric goals, motion strategies and a planner that uses them are devised. The basic idea behind the motion strategies is to use commonsensical geometric relationships to determine appropriate body motions. Motion strategies for a given goal specify possibly overlapping subgoals of the relevant body parts in such a way that achieving the subgoals achieves the goal without collision with objects in the workspace. A motion plan generated using the motion strategies is essentially a chart of temporally overlapping goal conditions on the relevant body parts. This motion plan is animated by sending it to a human motion controller, which incrementally finds joint angles of the agent's body that satisfy the goal conditions in the motion plan and displays the body configurations determined by those joint angles.
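To make the pipeline concrete, the following minimal sketch (in Python, which the paper does not use; names such as GoalCondition, active_goals, and ik_step are hypothetical) illustrates two of the ideas above: a motion plan represented as a chart of temporally overlapping goal conditions on body parts, and a controller that incrementally updates joint angles toward a goal position. A planar two-link arm and a Jacobian-transpose update stand in for the full human body model and for whatever solver the paper's human motion controller actually uses.

    from dataclasses import dataclass
    import math

    @dataclass
    class GoalCondition:
        body_part: str        # e.g. "right_hand"
        relation: str         # a geometric relation, e.g. "at", "above"
        target: tuple         # workspace coordinates of the reference point
        start: float          # time interval over which the condition holds
        end: float

    # A motion plan: goal conditions whose intervals may overlap, so that
    # different body parts move toward their subgoals concurrently.
    plan = [
        GoalCondition("torso",      "near",  (0.5, 0.0), 0.0, 1.0),
        GoalCondition("right_hand", "above", (0.8, 0.4), 0.5, 1.5),
        GoalCondition("right_hand", "at",    (0.8, 0.3), 1.5, 2.0),
    ]

    def active_goals(plan, t):
        """Goal conditions constraining the body at time t."""
        return [g for g in plan if g.start <= t <= g.end]

    def fk(t1, t2, l1=1.0, l2=1.0):
        """Forward kinematics of a planar two-link arm (body stand-in)."""
        return (l1 * math.cos(t1) + l2 * math.cos(t1 + t2),
                l1 * math.sin(t1) + l2 * math.sin(t1 + t2))

    def ik_step(theta, target, gain=0.1, l1=1.0, l2=1.0):
        """One incremental joint-angle update toward the goal position,
        via the Jacobian transpose (a generic stand-in solver)."""
        t1, t2 = theta
        x, y = fk(t1, t2, l1, l2)
        ex, ey = target[0] - x, target[1] - y
        # Jacobian of (x, y) with respect to (t1, t2)
        j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
        j12 = -l2 * math.sin(t1 + t2)
        j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        j22 = l2 * math.cos(t1 + t2)
        # dtheta = gain * J^T * error
        return (t1 + gain * (j11 * ex + j21 * ey),
                t2 + gain * (j12 * ex + j22 * ey))

    # Animation loop: at each frame, work toward whichever goal conditions
    # are active, then display the resulting body configuration.
    theta = (0.1, 0.1)
    for frame in range(61):
        t = frame / 30.0          # 30 frames per simulated second
        for g in active_goals(plan, t):
            if g.body_part == "right_hand":
                theta = ik_step(theta, g.target)
        # render(theta) would draw the configuration here

The temporal overlap between the torso and right-hand intervals in the plan is what would let such a controller animate several body parts concurrently, in the spirit of the chart of overlapping goal conditions the abstract describes.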