Joshi, Aravind K
Search Results (showing 1-10 of 21 publications)
Publication: Parsing With Lexicalized Tree Adjoining Grammar (1990-02-01)
Schabes, Yves; Joshi, Aravind K

Most current linguistic theories give lexical accounts of several phenomena that used to be considered purely syntactic. The information put in the lexicon is thereby increased in both amount and complexity: see, for example, lexical rules in LFG (Kaplan and Bresnan, 1983), GPSG (Gazdar, Klein, Pullum and Sag, 1985), HPSG (Pollard and Sag, 1987), Combinatory Categorial Grammars (Steedman, 1987), Karttunen's version of Categorial Grammar (Karttunen 1986, 1988), some versions of GB theory (Chomsky 1981), and Lexicon-Grammars (Gross 1984). We would like to take this fact into account when defining a formalism. We therefore explore the view that syntactic rules are not separated from lexical items. We say that a grammar is lexicalized (Schabes, Abeillé and Joshi, 1988) if it consists of: (1) a finite set of structures, each associated with lexical items; each lexical item will be called the anchor of the corresponding structure; the structures define the domain of locality over which constraints are specified; (2) an operation or operations for composing the structures. The notion of anchor is closely related to the word associated with a functor-argument category in Categorial Grammars. Categorial Grammars (as used, for example, by Steedman, 1987) are 'lexicalized' according to our definition, since each basic category has a lexical item associated with it.

Publication: Characterizing Structural Descriptions Produced by Various Grammatical Formalisms (1988-09-01)
Vijay-Shanker, K.; Weir, David; Joshi, Aravind K

We consider the structural descriptions produced by various grammatical formalisms in terms of the complexity of the paths and the relationship between paths in the sets of structural descriptions that each system can generate. In considering the relationships between formalisms, we show that it is useful to abstract away from the details of the formalism and examine the nature of their derivation process as reflected by properties of their derivation trees. We find that several of the formalisms considered can be seen as being closely related, since they have derivation tree sets with the same structure as those produced by Context-Free Grammars. On the basis of this observation, we describe a class of formalisms which we call Linear Context-Free Rewriting Systems, and show that they are recognizable in polynomial time and generate only semilinear languages.

Publication: Feature Structures Based Tree Adjoining Grammars (1988-10-01)
Joshi, Aravind K; Shanker, K. Vijay

We have embedded Tree Adjoining Grammars (TAG) in a feature structure based unification system. The resulting system, Feature Structure based Tree Adjoining Grammars (FTAG), captures the principle of factoring dependencies and recursion, fundamental to TAGs. We show that FTAG has an enhanced descriptive capacity compared to the TAG formalism. We consider some restricted versions of this system and some possible linguistic stipulations that can be made. We briefly describe a calculus to represent the structures used by this system, extending the work of Rounds and Kasper [Rounds et al. 1986; Kasper et al. 1986] involving the logical formulation of feature structures.
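The unification step at the core of a feature-structure system like FTAG is easy to picture in code. The following is a minimal sketch, assuming feature structures are plain nested dictionaries with string-valued atomic features; it omits re-entrancy and the Rounds-Kasper logical calculus the abstract refers to.

```python
def unify(fs1, fs2):
    """Unify two feature structures represented as nested dicts.

    Returns the merged structure, or None if the structures clash.
    Atomic values are plain strings; a clash on an atomic feature
    makes unification fail.
    """
    if isinstance(fs1, str) or isinstance(fs2, str):
        return fs1 if fs1 == fs2 else None
    result = dict(fs1)
    for feat, val in fs2.items():
        if feat in result:
            sub = unify(result[feat], val)
            if sub is None:           # feature clash: unification fails
                return None
            result[feat] = sub
        else:
            result[feat] = val        # feature only in fs2: copy it over
    return result

# Agreement succeeds; a number clash fails.
np = {"cat": "NP", "agr": {"num": "sing"}}
vp = {"agr": {"num": "sing", "per": "3"}}
print(unify(np, vp))   # {'cat': 'NP', 'agr': {'num': 'sing', 'per': '3'}}
print(unify(np, {"agr": {"num": "plur"}}))  # None
```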
Publication: Centering: A Framework for Modelling the Local Coherence of Discourse (1995)
Joshi, Aravind K.; Grosz, Barbara J.; Weinstein, Scott

This paper concerns relationships among focus of attention, choice of referring expression, and perceived coherence of utterances within a discourse segment. It presents a framework and initial theory of centering which are intended to model the local component of attentional state. The paper examines interactions between local coherence and choice of referring expressions; it argues that differences in coherence correspond in part to the inference demands made by different types of referring expressions given a particular attentional state. It demonstrates that the attentional state properties modelled by centering can account for these differences.

Publication: The Linguistic Relevance of Tree Adjoining Grammar (1985-04-01)
Kroch, Anthony S; Joshi, Aravind K

In this paper we apply a new notation for the writing of natural language grammars to some classical problems in the description of English. The formalism is the Tree Adjoining Grammar (TAG) of Joshi, Levy and Takahashi 1975, which was studied initially only for its mathematical properties but which now turns out to be an interesting candidate for the proper notation of meta-grammar, that is, for the universal grammar of contemporary linguistics. Interest in the application of the TAG formalism to the writing of natural language grammars arises out of recent work on the possibility of writing grammars for natural languages in a metatheory of restricted generative capacity (for example, Gazdar 1982 and Gazdar et al. 1985). There have also been several recent attempts to examine the linguistic metatheory of restricted grammatical formalisms, in particular context-free grammars. The inadequacies of context-free grammars have been discussed both from the point of view of strong generative capacity (Bresnan et al. 1982) and weak generative capacity (Shieber 1984, Postal and Langendoen 1984, Higginbotham 1984, the empirical claims of the last two having been disputed by Pullum (Pullum 1984)). At this point TAG becomes interesting because, while it is more powerful than context-free grammar, it is only "mildly" so. This extra power of TAG is a direct corollary of the way TAG factors recursion and dependencies, and it can provide reasonable structural descriptions for constructions like Dutch verb raising, where context-free grammar apparently fails. These properties of TAG and some of its mathematical properties were discussed by Joshi 1983.

Publication: A Default Temporal Logic for Regulatory Conformance Checking (2008-04-05)
Dinesh, Nikhil; Joshi, Aravind K; Lee, Insup; Sokolsky, Oleg

This paper considers the problem of checking whether an organization conforms to a body of regulation. Conformance is cast as a trace-checking question: the regulation is represented in a logic that is evaluated against an abstract trace or run representing the operations of an organization. We focus on a problem in designing a logic to represent regulation. A common phenomenon in regulatory texts is for sentences to refer to others for conditions or exceptions. We motivate the need for a formal representation of regulation to accommodate such references between statements. We then extend linear temporal logic to allow statements to refer to others. The semantics of the resulting logic is defined via a combination of techniques from Reiter's default logic and Kripke's theory of truth. This paper is an expanded version of [1].
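The trace-checking setting that the paper extends can be illustrated before any default-logic machinery is added. Below is a minimal sketch of evaluating ordinary LTL formulas over a finite trace; the tuple-based formula encoding, the operator set, and the finite-trace treatment are illustrative assumptions, not the paper's logic.

```python
def holds(formula, trace, i=0):
    """Evaluate an LTL formula at position i of a finite trace.

    A trace is a list of sets of atomic propositions; formulas are
    nested tuples, e.g. ("G", ("->", "request", ("F", "report"))).
    """
    if isinstance(formula, str):                      # atomic proposition
        return formula in trace[i]
    op = formula[0]
    if op == "!":
        return not holds(formula[1], trace, i)
    if op == "&":
        return holds(formula[1], trace, i) and holds(formula[2], trace, i)
    if op == "->":
        return (not holds(formula[1], trace, i)) or holds(formula[2], trace, i)
    if op == "X":                                     # next (false at trace end)
        return i + 1 < len(trace) and holds(formula[1], trace, i + 1)
    if op == "F":                                     # eventually
        return any(holds(formula[1], trace, j) for j in range(i, len(trace)))
    if op == "G":                                     # always
        return all(holds(formula[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator: {op}")

# Toy regulation: every audit request is eventually followed by a report.
trace = [{"request"}, set(), {"report"}, {"request"}]
print(holds(("G", ("->", "request", ("F", "report"))), trace))
# False: the last request is never answered
```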
Publication: A Processing Model for Free Word Order Languages (1995-04-01)
Rambow, Owen; Joshi, Aravind K

Like many verb-final languages, German displays considerable word-order freedom: there is no syntactic constraint on the ordering of the nominal arguments of a verb, as long as the verb remains in final position. This effect is referred to as "scrambling" and is interpreted in transformational frameworks as leftward movement of the arguments. Furthermore, arguments from an embedded clause may move out of their clause; this effect is referred to as "long-distance scrambling". While scrambling has recently received considerable attention in the syntactic literature, the status of long-distance scrambling has only rarely been addressed. The reason for this is the problematic status of the data: not only is long-distance scrambling highly dependent on pragmatic context, it is also strongly subject to degradation due to processing constraints. As in the case of center-embedding, it is not immediately clear whether the observed unacceptability of highly complex sentences is due to grammatical restrictions, or whether we should assume that the competence grammar places no restrictions on scrambling (so that all such sentences are in fact grammatical) and that the unacceptability of some (or most) of the grammatically possible word orders is due to processing limitations. In this paper, we argue for the second view by presenting a processing model for German.

Publication: Processing Crossed and Nested Dependencies: An Automaton Perspective on the Psycholinguistic Results (1989-09-01)
Joshi, Aravind K

The clause-final verbal clusters in Dutch and German (and, in general, in West Germanic languages) have been studied extensively in different syntactic theories. Standard Dutch prefers crossed dependencies (between verbs and their arguments), while Standard German prefers nested dependencies. Recently Bach, Brown, and Marslen-Wilson (1986) investigated the consequences of these differences between Dutch and German for the processing complexity of sentences containing either crossed or nested dependencies. Stated very simply, their results show that Dutch is 'easier' than German, thus showing that the push-down automaton (PDA) cannot be the universal basis for the human parsing mechanism. They provide an explanation for the inadequacy of the PDA in terms of the kinds of partial interpretations the dependencies allow the listener to construct. Motivated by their results and their discussion of these results, we introduce a principle of partial interpretation (PPI) and present an automaton, the embedded push-down automaton (EPDA), which permits processing of crossed and nested dependencies consistent with the PPI. We show that there are appropriate complexity measures (motivated by the discussion in Bach, Brown, and Marslen-Wilson (1986)) according to which the processing of crossed dependencies is easier than the processing of nested dependencies. We also discuss a case of mixed dependencies. This EPDA characterization of the processing of crossed and nested dependencies is significant because EPDAs are known to be exactly equivalent to Tree Adjoining Grammars (TAG), which are also capable of providing a linguistically motivated analysis for the crossed dependencies of Dutch (Kroch and Santorini 1988). This significance is further enhanced by the fact that two other grammatical formalisms, Head Grammars (Pollard 1984) and Combinatory Grammars (Steedman 1987), also capable of providing an analysis for the crossed dependencies of Dutch, have recently been shown to be equivalent to TAGs in their generative power. We also briefly discuss some issues concerning EPDAs and their associated grammars, and the relationship between these associated grammars and the corresponding 'linguistic' grammars.
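The stack-of-stacks idea behind the EPDA can be simulated in a few lines. The sketch below is a toy illustration rather than the paper's construction: a single pushdown stack can only pair verbs with arguments in nested (last-in, first-out) order, whereas allowing a new stack to be inserted just below the current top stack lets the stored arguments be redistributed and consumed in crossed (first-in, first-out) order, as in a Dutch NP1 NP2 NP3 V1 V2 V3 cluster.

```python
def match_crossed(nps, verbs):
    """Match verbs to NPs in crossed order (V1 with NP1, V2 with NP2, ...)
    using an EPDA-style stack of stacks; a plain PDA stack would pair
    V1 with NP3 instead (nested order)."""
    stacks = [[]]                       # the stack of stacks; work on stacks[-1]
    for np in nps:                      # phase 1: store NPs on the top stack
        stacks[-1].append(np)
    # Phase 2: redistribute. Pop each NP and insert it as a new singleton
    # stack just BELOW the current top stack (an EPDA move a PDA lacks),
    # so NP_n ends up deepest and NP_1 ends up on top.
    top = stacks.pop()
    while top:
        stacks.append([top.pop()])
        stacks.append(top)              # keep the source stack on top
        top = stacks.pop()
    # Phase 3: each verb pops the top stack, yielding NP_1, NP_2, ...
    return [(verb, stacks.pop().pop()) for verb in verbs]

print(match_crossed(["NP1", "NP2", "NP3"], ["V1", "V2", "V3"]))
# [('V1', 'NP1'), ('V2', 'NP2'), ('V3', 'NP3')]
```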
Publication: Unification-Based Tree Adjoining Grammars (1991-03-01)
Vijay-Shanker, K.; Joshi, Aravind K

Many current grammar formalisms used in computational linguistics take a unification-based approach that uses structures (called feature structures) containing sets of feature-value pairs. In this paper, we describe a unification-based approach to Tree Adjoining Grammars (TAG). The resulting formalism (UTAG) retains the principle of factoring dependencies and recursion that is fundamental to TAGs. We also extend the definition of UTAG to include the lexicalized approach to TAGs (see [Schabes et al., 1988]). We give some linguistic examples using UTAG and informally discuss the descriptive capacity of UTAG, comparing it with other unification-based formalisms. Finally, based on the linguistic theory underlying TAGs, we propose some stipulations that can be placed on UTAG grammars. In particular, we stipulate that the feature structures associated with the nodes in an elementary tree are bounded (there is an analogous stipulation in GPSG). Grammars that satisfy these stipulations are equivalent to TAG. Thus, even with these stipulations, UTAGs have more power than CFG-based unification grammars with the same stipulations.
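All of the TAG variants above (FTAG, UTAG, lexicalized TAG) share the adjoining operation, which splices an auxiliary tree into another tree so that the subtree at the adjunction site reattaches at the auxiliary tree's foot node. A minimal sketch, assuming trees encoded as nested lists and a made-up convention that a label ending in '*' marks the foot node:

```python
def adjoin(tree, label, aux):
    """Adjoin auxiliary tree `aux` at the first node of `tree` whose
    label matches `label`. Trees are [label, child, child, ...]; the
    foot node of `aux` is the leaf whose label ends in '*'."""
    if not isinstance(tree, list):
        return tree
    if tree[0] == label:
        # the excised subtree hangs at the foot of a copy of aux
        return _plug_foot(aux, tree)
    return [tree[0]] + [adjoin(child, label, aux) for child in tree[1:]]

def _plug_foot(aux, subtree):
    if not isinstance(aux, list):
        return subtree if aux.endswith("*") else aux
    return [aux[0]] + [_plug_foot(child, subtree) for child in aux[1:]]

# "John sleeps" plus an auxiliary tree for "apparently" adjoined at VP:
initial = ["S", ["NP", "John"], ["VP", ["V", "sleeps"]]]
aux_vp = ["VP", ["Adv", "apparently"], "VP*"]
print(adjoin(initial, "VP", aux_vp))
# ['S', ['NP', 'John'], ['VP', ['Adv', 'apparently'], ['VP', ['V', 'sleeps']]]]
```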
Publication: Parsing Strategies With 'Lexicalized' Grammars: Application to Tree Adjoining Grammars (1988-08-01)
Schabes, Yves; Abeillé, Anne; Joshi, Aravind K

In this paper, we present a parsing strategy that arose from the development of an Earley-type parsing algorithm for TAGs (Schabes and Joshi 1988) and from some recent linguistic work in TAGs (Abeillé 1988a). In our approach, each elementary structure is systematically associated with a lexical head. These structures specify extended domains of locality (as compared to a context-free grammar) over which constraints can be stated. These constraints either hold within the elementary structure itself or specify what other structures can be composed with a given elementary structure. The 'grammar' consists of a lexicon where each lexical item is associated with a finite number of structures for which that item is the head. There are no separate grammar rules. There are, of course, 'rules' which tell us how these structures are composed. A grammar of this form will be said to be 'lexicalized'. We show that in general context-free grammars cannot be 'lexicalized'. We then show how a 'lexicalized' grammar naturally follows from the extended domain of locality of TAGs and briefly examine some of the linguistic implications of our approach. A general parsing strategy for 'lexicalized' grammars is discussed. In the first stage, the parser selects a set of elementary structures associated with the lexical items in the input sentence, and in the second stage the sentence is parsed with respect to this set. The strategy is independent of the nature of the elementary structures in the underlying grammar; however, we focus our attention on TAGs. Since the set of trees selected at the end of the first stage is finite, the parser can in principle use any search strategy. Thus, in particular, a top-down strategy can be used, since problems due to recursive structures are eliminated. We then explain how the Earley-type parser for TAGs can be modified to take advantage of this approach.
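The two-stage strategy lends itself to a compact illustration. The sketch below uses an invented toy lexicon in which each word anchors a finite set of elementary structures (shown only as descriptive labels): stage one filters the grammar down to the trees anchored by the words actually present, and stage two would hand that finite set to any parser, top-down included.

```python
# A toy lexicalized grammar: each lexical item anchors a finite set of
# elementary structures (represented here only by descriptive names).
LEXICON = {
    "John":   ["NP(John)"],
    "saw":    ["S(NP saw NP)", "S(NP saw S)"],   # transitive / sentential complement
    "Mary":   ["NP(Mary)"],
    "really": ["VP(really VP)"],                 # auxiliary (adjoinable) structure
}

def select_structures(sentence):
    """Stage 1: select the elementary structures anchored by the input words.

    The result is a finite set of trees, so stage 2 can parse with any
    search strategy, including top-down, without looping on recursion.
    """
    selected = []
    for word in sentence.split():
        selected.extend(LEXICON.get(word, []))   # unknown words select nothing
    return selected

print(select_structures("John really saw Mary"))
# ['NP(John)', 'VP(really VP)', 'S(NP saw NP)', 'S(NP saw S)', 'NP(Mary)']
```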