Understanding Natural Language Instructions: A Computational Approach to Purpose Clauses

Author
Di Eugenio, Barbara
Abstract

Human agents are extremely flexible in dealing with Natural Language instructions. I argue that most instructions do not exactly mirror the agent's knowledge, but are understood by accommodating them in the context of the general plan the agent is considering: the accommodation process is guided by the goal(s) that the agent is trying to achieve. Therefore, an NL system that interprets instructions must be able to recognize and/or hypothesize goals; it must also make use of a flexible knowledge representation system, one able to support the specialized inferences necessary to deal with input action descriptions that do not exactly match the stored knowledge. The data that support my claim are Purpose Clauses (PCs), infinitival constructions as in Do α to do β, and Negative Imperatives. I present a pragmatic analysis of both PCs and Negative Imperatives. Furthermore, I analyze the computational consequences of PCs, in terms of the relations between the actions that PCs express and of the inferences an agent has to perform to understand PCs. I propose an action representation formalism that provides the required flexibility. It has two components. The Terminological Box (TBox) encodes linguistic knowledge about actions and is expressed by means of the hybrid system CLASSIC [Brachman et al., 1991]. To guarantee that the primitives of the representation are linguistically motivated, I derive them from Jackendoff's work on Conceptual Structures [1983; 1990]. The Action Library encodes planning knowledge about actions; the action terms used in the plans are those defined in the TBox. Finally, I present an algorithm, supported by the formalism I propose, that implements the inferences necessary to understand Do α to do β. In particular, I show how the TBox classifier is used to infer whether α can be assumed to match one of the substeps in the plan for β, and how the expectations necessary for the match to hold are computed.
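
A minimal sketch of the matching inference described above, not drawn from the report itself: the Python below stands in for the TBox classifier with a toy subsumption test over flat action frames. It checks whether an input description α can be taken to specialize one of the stored substeps of the plan for β, and returns the surplus role fillers as the expectations the match requires. All names (ActionDescription, subsumes, match_substep) and the "cut" instance are hypothetical; CLASSIC itself is a description logic system, not a Python library.

    from dataclasses import dataclass, field

    @dataclass(frozen=True)
    class ActionDescription:
        # An action type plus a set of (role, filler) pairs,
        # e.g. ("object", "square").
        action_type: str
        roles: frozenset = field(default_factory=frozenset)

    def subsumes(general, specific):
        # 'general' subsumes 'specific' when the action types agree and
        # every role constraint in 'general' also holds in 'specific'.
        return (general.action_type == specific.action_type
                and general.roles <= specific.roles)

    def match_substep(alpha, plan_for_beta):
        # Try to accommodate alpha as one of the substeps of beta's plan.
        # On success, return the matched substep together with the
        # "expectations": role fillers alpha supplies over and above
        # what the stored substep specifies.
        for step in plan_for_beta:
            if subsumes(step, alpha):
                return step, alpha.roles - step.roles
        return None, None

    # Hypothetical instance: alpha refines a stored "cut" substep by
    # adding a manner role; that extra filler comes back as an expectation.
    step  = ActionDescription("cut", frozenset({("object", "square")}))
    alpha = ActionDescription("cut", frozenset({("object", "square"),
                                                ("manner", "in-half")}))
    matched, expectations = match_substep(alpha, [step])

In the actual formalism, subsumption is computed by CLASSIC over structured concept definitions rather than flat role sets; the sketch only mirrors the shape of the inference, not its machinery.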

Date of degree
1993-12-01
Comments
University of Pennsylvania Institute for Research in Cognitive Science Technical Report No. IRCS-93-52.