Shalom Lappin - Profile on Academia.edu

Papers by Shalom Lappin

How Furiously Can Colorless Green Ideas Sleep? Sentence Acceptability in Context

Transactions of the Association for Computational Linguistics

We study the influence of context on sentence acceptability. First we compare the acceptability ratings of sentences judged in isolation, with a relevant context, and with an irrelevant context. Our results show that context induces a cognitive load for humans, which compresses the distribution of ratings. Moreover, in relevant contexts we observe a discourse coherence effect that uniformly raises acceptability. Next, we test unidirectional and bidirectional language models in their ability to predict acceptability ratings. The bidirectional models show very promising results, with the best model achieving a new state-of-the-art for unsupervised acceptability prediction. The two sets of experiments provide insights into the cognitive aspects of sentence processing and central issues in the computational modeling of text and discourse.
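
The abstract does not spell out the scoring method, but a common way to use a bidirectional (masked) language model for unsupervised acceptability prediction is to score each sentence by its length-normalized pseudo-log-likelihood. The sketch below illustrates that recipe; the choice of model (bert-base-cased) and the normalization are assumptions for illustration, not details taken from the paper.

```python
# Sketch: length-normalized pseudo-log-likelihood under a masked LM, used as
# an unsupervised acceptability score. Assumes the HuggingFace `transformers`
# and `torch` packages; bert-base-cased is illustrative, not the paper's model.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-cased")
model.eval()

def acceptability_score(sentence: str) -> float:
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total_logprob = 0.0
    n_tokens = 0
    # Mask each non-special token in turn and sum its log-probability.
    for i in range(1, len(ids) - 1):
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total_logprob += log_probs[ids[i]].item()
        n_tokens += 1
    return total_logprob / max(n_tokens, 1)  # normalize for sentence length

print(acceptability_score("Colorless green ideas sleep furiously."))
print(acceptability_score("Furiously sleep ideas green colorless."))
```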

The Stimulus: Determining the Nature of Primary Linguistic Data

Linguistic Nativism and the Poverty of the Stimulus

Automatic Bare Sluice Disambiguation in Dialogue

The capacity to recognise and interpret sluices (bare wh-phrases that exhibit a sentential meaning) is essential to maintaining cohesive interaction between human users and a machine interlocutor in a dialogue system. In this paper we present a machine learning approach to sluice disambiguation in dialogue. Our experiments, based on solid theoretical considerations, show that applying machine learning techniques using
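
The abstract is cut off before the experimental details, so the following is only a schematic sketch of the general setup it describes: each sluice occurrence is encoded as a bundle of contextual features and fed to an off-the-shelf classifier. The feature names and the choice of learner here are hypothetical placeholders, not the ones used in the paper.

```python
# Sketch: a feature-based classifier for bare-sluice readings in dialogue.
# Features and learner are hypothetical stand-ins for the paper's own choices.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier
from sklearn.pipeline import make_pipeline

# Each training instance: simple contextual features of the sluice occurrence,
# labelled with its interpretation (e.g. direct, reprise, clarification).
train_X = [
    {"wh_word": "why",  "antecedent_is_polar": True,  "same_speaker": False},
    {"wh_word": "who",  "antecedent_is_polar": False, "same_speaker": True},
    {"wh_word": "what", "antecedent_is_polar": False, "same_speaker": False},
]
train_y = ["direct", "reprise", "clarification"]

clf = make_pipeline(DictVectorizer(sparse=False), DecisionTreeClassifier())
clf.fit(train_X, train_y)

print(clf.predict([{"wh_word": "why", "antecedent_is_polar": True,
                    "same_speaker": False}]))
```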

Local Constraints vs Local Economy

Goodman and Katz on synonymy

Philosophical Studies, 1976

Zeramim Hadishim Be-Filosofyah: Ma'amarim Be-Filosofyah Uve-Toldoteha (New Currents in Philosophy: Essays in Philosophy and Its History)

The Revolution Confused: A Response To Our Critics

Natural Language and Linguistic Theory, 2000

Analytical philosophy knowledge

Philosophia, 1971

Introduction to the special issue on computational anaphora resolution

Computational Linguistics, Dec 1, 2001

Anaphora accounts for cohesion in texts and is a phenomenon under active study in formal and computational linguistics alike. The correct interpretation of anaphora is vital for natural language processing (NLP). For example, anaphora resolution is a key task in natural language interfaces, machine translation, text summarization, information extraction, question answering, and a number of other NLP applications. After considerable initial research, followed by years of relative silence in the early 1980s, anaphora resolution has attracted the attention of many researchers in the last 10 years and a great deal of successful work on the topic has been carried out. Discourse-oriented theories and formalisms such as Discourse Representation Theory and Centering Theory inspired new research on the computational treatment of anaphora. The drive toward corpus-based robust NLP solutions further stimulated interest in alternative and/or data-enriched approaches. Last, but not least, application-driven research in areas such as automatic abstracting and information extraction independently highlighted the importance of anaphora and coreference resolution, boosting research in this area. Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge (Sidner 1979; Carter 1987; Rich and LuperFoy 1988; Carbonell and Brown 1988), which was difficult both to represent and to process, and which required considerable human input. However, the pressing need for the development of robust and inexpensive solutions to meet the demands of practical NLP systems encouraged many researchers to move away from extensive domain and linguistic knowledge and to embark instead upon knowledge-poor anaphora resolution strategies. A number of proposals in the 1990s deliberately limited the extent to which they relied on domain and/or linguistic knowledge and reported promising results in knowledge-poor operational environments (

Statistical Representation of Grammaticality Judgements: the Limits of N-Gram Models

A Critique of the Minimalist Program

Linguistics and Philosophy, 1997

In the past several years Chomsky has published a series of papers in which he proposes what he terms a minimalist view of syntax. The model of grammar that emerges from this and related work represents a significant departure from the Government Binding (GB) theory of syntax, which dominated the Principles and Parameters (P&P) approach throughout the 1980s. On the GB view, the syntactic well-formedness of a sentence depends upon the satisfaction of constraints that apply at one or more levels of representation in a derivation. The standard GB account recognizes a sequence of three syntactic levels of

Much of the research for this paper was done during the summer of 1995, when the alphabetically second author visited the first as a guest academic researcher in the

The feature structures of the lexical heads are specified in the lexicon. Notice that there are no functional heads corresponding to T or light v. Verbs appear in the lexicon with values assigned to their tense, person, and number features. Similarly, the number, person, and gender features of nouns are given in the lexicon. As saw is the head of the VP, its features are inherited by the VP by the Head Feature Principle, which states that the head features of a headed phrase are identical to those of its head daughter. The entire sentence inherits the features of the VP by the same principle. The SUBCAT feature of a head is a list of its complements. The SUBCAT feature for saw specifies a nominative NP subject and an accusative NP object. The feature structure of the complement daughter Mary unifies with that of the second element of saw's SUBCAT list, by the Subcategorization Principle. This results in accusative Case marking on Mary. The VP inherits this partially saturated SUBCAT feature, and the feature structure of the COMP-DTR John unifies with the subject element of the list at this level, yielding a fully saturated SUBCAT feature and nominative case on John. The tense and agreement features of the sentence are inherited from the verb through the VP that it heads. The Case features of the NP arguments are obtained from the SUBCAT feature of the verb through unification. Subject-verb agreement (as well as object-verb agreement) is also enforced through the SUBCAT feature of the verb.
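
The toy sketch below restates this saturation story in executable form: feature structures are plain dictionaries, a naive unification routine merges them, and combining saw first with its object and then with its subject discharges the SUBCAT list and assigns accusative and nominative Case. It is an illustration of the mechanism described above, not an implementation of the HPSG formalism itself.

```python
# Sketch: SUBCAT saturation with feature structures as nested dicts and a
# naive unification routine.

def unify(a, b):
    """Naively unify two feature structures (dicts); raise on a value clash."""
    result = dict(a)
    for key, val in b.items():
        if key not in result:
            result[key] = val
        elif isinstance(result[key], dict) and isinstance(val, dict):
            result[key] = unify(result[key], val)
        elif result[key] != val:
            raise ValueError(f"Clash on {key}: {result[key]} vs {val}")
    return result

# Lexical entries: the verb's SUBCAT lists its subject and object with the
# Case values it imposes; the nouns carry their own agreement features.
saw = {
    "HEAD": {"tense": "past", "agr": {"per": 3}},
    "SUBCAT": [{"cat": "NP", "case": "nom"},      # subject
               {"cat": "NP", "case": "acc"}],     # object
}
john = {"cat": "NP", "agr": {"per": 3, "num": "sg"}}
mary = {"cat": "NP", "agr": {"per": 3, "num": "sg"}}

# Subcategorization Principle: the object daughter unifies with the last
# SUBCAT element, so Mary receives accusative Case; the VP keeps the rest
# of the list and, by the Head Feature Principle, the verb's HEAD features.
mary_acc = unify(mary, saw["SUBCAT"][1])
vp = {"HEAD": saw["HEAD"], "SUBCAT": saw["SUBCAT"][:1]}

# The subject daughter unifies with the remaining SUBCAT element, so John
# receives nominative Case and the sentence's SUBCAT is fully saturated.
john_nom = unify(john, vp["SUBCAT"][0])
s = {"HEAD": vp["HEAD"], "SUBCAT": []}

print(mary_acc["case"], john_nom["case"], s["HEAD"]["tense"])  # acc nom past
```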

Expressive Power and Formal Strength

Foundations of Intensional Semantics, 2000

Property Theory with Curry Typing

Foundations of Intensional Semantics, 2000

Alternative Approaches to Fine-Grained Intensionality

Foundations of Intensional Semantics, 2000

Number Theory and Cardinality

Foundations of Intensional Semantics, 2000

A Higher-Order, Fine-Grained Intensional Logic

Foundations of Intensional Semantics, 2000

Doing Natural Language Semantics in an Expressive First-Order Logic with Flexible Typing

We present Property Theory with Curry Typing (PTCT), an intensional first-order logic for natural language semantics. PTCT permits fine-grained specifications of meaning. It also supports polymorphic types and separation types. We develop an intensional number theory within PTCT in order to represent proportional generalized quantifiers like most. We use the type system and our treatment of generalized quantifiers in natural language to construct a type-theoretic approach to pronominal anaphora that avoids some of the difficulties that undermine previous type-theoretic analyses of this phenomenon.

7.1 PTCT: Syntax of the basic theory

The core language of PTCT consists of the following sub-languages:
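
For readers unfamiliar with proportional quantifiers, the sketch below gives the standard extensional truth conditions that any treatment of most has to capture (|A ∩ B| > |A − B|). PTCT's contribution is to encode the required counting within its own intensional number theory; the toy code does not attempt to reproduce that, it only shows what the counting has to deliver.

```python
# Sketch: extensional truth conditions of the proportional quantifier "most".

def most(restrictor: set, scope: set) -> bool:
    """'Most As are Bs' holds iff the As that are Bs outnumber the As that are not."""
    return len(restrictor & scope) > len(restrictor - scope)

dogs = {"fido", "rex", "spot", "lassie"}
barkers = {"fido", "rex", "spot", "mittens"}

print(most(dogs, barkers))   # True: 3 of the 4 dogs bark
print(most(barkers, dogs))   # True: 3 of the 4 barkers are dogs
```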

Topic ... Comment

Natural Language and Linguistic Theory

A higher-order intensional logic for fine-grained semantic representation

Achieving Expressive Completeness and Computational Efficiency for Underspecified Semantic Representations

The tension between expressive power and computational tractability poses an acute problem for theories of underspecified semantic representation. In previous work we have presented an account of underspecified scope representations within Property Theory with Curry Typing (PTCT), an intensional first-order theory for natural language semantics. Here we show how filters applied to the underspecified-scope terms of PTCT permit both expressive completeness and the reduction of computational complexity in a significant class of non-worst case scenarios.
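
As a rough illustration of the filtering idea (not of PTCT's underspecified-scope terms themselves), the sketch below lazily enumerates quantifier scope orderings and prunes them with filter predicates, so that excluded readings never have to be generated in full. The quantifiers and the example filter are hypothetical placeholders.

```python
# Sketch: filtering an underspecified scope representation. Scope readings are
# generated lazily as permutations of the quantifiers and pruned by filters.
from itertools import permutations

quantifiers = ["every student", "some exam", "most questions"]

def outscopes(wide, narrow):
    """Filter: require `wide` to take scope over `narrow`."""
    def check(reading):
        return reading.index(wide) < reading.index(narrow)
    return check

filters = [outscopes("every student", "some exam")]

def readings(quants, filters):
    # Yield only those scope orderings that satisfy every filter.
    for reading in permutations(quants):
        if all(f(reading) for f in filters):
            yield reading

for r in readings(quantifiers, filters):
    print(" > ".join(r))
```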
