ANOTHER WAY TO MARK SYNTACTIC DEPENDENCIES: THE CASE FOR RIGHT-PERIPHERAL SPECIFIERS IN SIGN LANGUAGES
Related papers
One grammar or two? Sign Languages and the Nature of Human Language
Linguistic research has identified abstract properties that seem to be shared by all languages; such properties may be considered defining characteristics. In recent decades, the recognition that human language is found not only in the spoken modality but also in the form of sign languages has led to a reconsideration of some of these potential linguistic universals. In large part, the linguistic analysis of sign languages has led to the conclusion that universal characteristics of language can be stated at an abstract enough level to include languages in both spoken and signed modalities. For example, languages in both modalities display hierarchical structure at the sub-lexical and phrasal levels, as well as recursive rule application. However, this does not mean that modality-based differences between signed and spoken languages are trivial. In this article, we consider several candidate domains for modality effects, in light of the overarching question: are signed and spoken languages subject to the same abstract grammatical constraints, or is a substantially different conception of grammar needed for the sign language case? We look at differences between language types based on the use of space, iconicity, and the possibility of simultaneity in linguistic expression. The inclusion of sign languages does support some broadening of the conception of human language, in ways that are applicable to spoken languages as well. Still, the overall conclusion is that one grammar applies to human language, no matter the modality of expression.
Body-anchored verbs and argument omission in two sign languages
Glossa 4(1), 42, 2019
Using quantitative methods, we analyze naturalistic corpus data in two sign languages, German Sign Language and Russian Sign Language, to study subject-omission patterns. We find that, in both languages, the interpretation of null subjects depends on the type of verb. With verbs signed on the signer's body (body-anchored verbs), null subjects are interpreted only as first person. With verbs signed in neutral space in front of the signer (neutral verbs), this restriction does not apply. We argue that this is an effect of iconicity: for body-anchored verbs, the signer's body is a part of the iconic representation of the verbal event, and by default the body is interpreted as referring to the signer, that is, as first person. We develop a formal analysis using a mechanism of mixed agreement, taking inspiration from Matushansky's (2013) account of mixed gender agreement in Russian. Specifically, we argue that body-anchored verbs bear an inherent feature that gives a first-person interpretation to null subjects. When a body-anchored verb is combined with an overt third-person subject, a feature mismatch occurs, which is resolved in favor of the third person. Neutral verbs do not come with inherent feature-value specifications, thus allowing all person interpretations. We also explain how our analysis predicts the interpretation of null subjects in the context of role shift. With our account, we demonstrate that iconicity plays an active role in the grammar of sign languages, and we pin down the locus of the iconicity effect. While no iconic or modality-specific syntactic mechanisms are needed to account for the data, iconicity is argued to determine feature specification on a subset of sign language verbs.
Sign-speaking: The structure of simultaneous bimodal utterances
Applied Linguistics Review, 2017
We present data from a bimodal trilingual situation involving Indian Sign Language (ISL), Hindi and English. Signers co-use these languages in group conversations with deaf people and hearing non-signers. The data show that, in this context, English is an embedded language that does not affect the grammar of the utterances, while both ISL and Hindi structures are realised throughout. The data show mismatches between the simultaneously expressed ISL and Hindi, such that the semantic content and/or syntactic structure differs between the two languages, yet is produced at the same time. The data also include instances of different propositions expressed simultaneously in the two languages. We call this under-documented behaviour "sign-speaking" and explore its implications for theories of multilingualism, code-switching, and bilingual language production.
Special Nature of Verbs in Sign Languages
Teanga, 2020
This paper is concerned with the special nature of sign language verbs, and in particular, for this research, Irish Sign Language verbs. We use Role and Reference Grammar to provide a definition of the structure of lexical entries that are sufficiently rich and robust to represent Irish Sign Language verbs. Role and Reference Grammar takes language to be a system of communicative social action, and accordingly, analysing the communicative functions of grammatical structures plays a vital role in grammatical description and theory from this perspective. This work is part of research on the development of a linguistically motivated computational framework for Irish Sign Language. In providing a definition of a linguistically motivated computational model for Irish Sign Language, we must be able to refer to the various articulators (hands, fingers, eyes, eyebrows, etc.), as these are what we use to articulate the various phonemes, morphemes and lexemes of an utterance. Irish Sign Language is a visual gestural language. Because Irish Sign Language has no written or oral form, to represent an Irish Sign Language utterance in computational terms we must use a humanoid avatar capable of movement within three-dimensional space. Here, we provide an account of the grammatical information that can be found within Irish Sign Language verbs. We use the Signs of Ireland corpus to access the relevant linguistic data, and we use the ELAN software to view the corpus and collate the relevant linguistic phenomena. We utilise the Event Visibility Hypothesis in the development of our proposed lexicon architecture. The computational phonological parameters for Irish Sign Language manual and non-manual features are defined within a framework, which we refer to as the Sign_A framework, where the "A" within this
The relationship between verbal form and event structure in sign languages
Glossa, 2019
Whether predicates describe events as inherently bounded (telic) or unbounded (atelic) is usually understood to be an emergent property that depends on several factors; few, if any, spoken languages have dedicated morphology to mark the distinction. It is thus surprising that sign languages have been proposed to have dedicated morphology for telicity, and moreover that it takes a form which iconically reflects the underlying event structure; this is known as the "Event Visibility Hypothesis" (EVH) (Wilbur 2008). The EVH has been extended with claims about its universality in sign languages (Wilbur 2008; Malaia & Wilbur 2012), its gradient nature (Kuhn 2017), and its iconic transparency (Strickland et al. 2015). However, in this paper we argue that the status of this relationship between form and meaning remains an open question due to (a) the lack of independent tests for telicity, (b) the lack of lexical coverage, (c) the lack of demonstration that formal expressions of telicity are morphological in nature rather than a lexical property, and (d) the inability to sufficiently dissociate telicity from perfectivity. We present new data from ASL verbs that alternate in both form and meaning, in line with the EVH, and conclude that while there is evidence supporting a morphological marker, the proposed form and telicity are not isomorphic in their distribution, significantly limiting the "visibility" of event structure. We further propose that much of the related iconicity is the result of several independent factors also found in spoken languages, so that sign languages may be more similar to spoken languages than typically implied in this domain.