Signed Language Research Papers - Academia.edu

The identity of the self in the poet's vision served as our ladder for fathoming his poetic world, understood as a product that reflects two images, symmetrical in form yet divergent in content. The first lies in the image of the falseness of reality, while the second is concerned with what lies beyond reality in its possible worlds. Together, the two images drive the poet to seek a different means of expression in his revelatory work, drawing on his creative originality and spiritual identity in a way that reveals his pure humane nature, his distinctive piety, and his creative vision as disclosure, whether motivated by lifting things out of what conceals them within the patterns of reality, or by seeking knowledge of what lies beyond perception through the attributes of the Divine Essence in existence and witness.

Using two hands as two articulators follows from the nature of sign languages as visual-gestural languages; the modality of a language is a key factor in this regard. I discussed different examples of manual simultaneity in Persian (Iranian) Sign Language based on limited data from this language. I showed that different types of simultaneous constructions exist in Iranian SL, some of which have been shown before in other sign languages and some of which have not.
Perseveration has been shown to have functions in the prosody and discourse of Iranian SL, as in other studied sign languages. The duration for which the non-dominant hand is held in a sign indicates the different roles of non-dominant hand activity as the second articulator in the language: it is sometimes an indicator of a phonological phrase in the prosodic structure of the language, while in other cases it marks the topic under discussion in the discourse. The spreading of the sign hold on the non-dominant hand is an important factor in prosodic construction, as it can delimit the phonological phrase within the prosodic structure of the language.
The meaningful perseverations, or sign fragments, may indicate the topic of the sentence or of the discourse, and may represent background information as opposed to the foreground information produced by the dominant hand. The non-dominant hand may also represent a classifier construction, which has a morphemic role and is independent of the dominant hand (Sandler & Lillo-Martin 2006). This classifier construction can function as a carrier of background information while the other hand produces foreground information.

Signed languages are natural human languages used by deaf people around the world as their primary language. This chapter explores the linguistic study of signed languages, their linguistic properties, and aspects of their genetic and historical relationships. The chapter focuses on historical change that has occurred in signed languages, showing that the same linguistic processes that contribute to historical change in spoken languages, such as lexicalization, grammaticization, and semantic change, contribute to historical change in signed languages. Historical influences unique to signed languages are also discussed, such as the educational practice of borrowing and adapting signs in an effort to create a system for representing the surrounding spoken/written language, and the incorporation of lexicalized fingerspelling.

Structuralism has pervaded the history of signed language linguistics. This paper presents the case for adopting a usage-based approach to describe the grammars of signed languages. Two usage-based models are presented: an exemplar model as described by Bybee (2006) and cognitive grammar as developed by Langacker (2008). Data are presented to demonstrate the process of grammaticization along two routes. The implications of a usage-based approach for understanding the relationship between language and gesture are explored.

Abstract: Until recently, the teaching aids used for learning the signs of Polish Sign Language were various kinds of dictionaries. These took the form of text or graphics, or various combinations of the two. Attempts have also been made to present signs...

A renewed interest in understanding the role of iconicity in the structure and processing of signed languages is hampered by the conflation of iconicity and transparency in the definition and operationalization of iconicity as a variable. We hypothesize that iconicity is fundamentally different from transparency, since it arises from individuals' experience with the world and their language, and is subjectively mediated by the signers' construal of form and meaning. We test this hypothesis by asking American Sign Language (ASL) signers and German Sign Language (DGS) signers to rate the iconicity of ASL and DGS signs. Native signers consistently rate signs in their own language as more iconic than foreign-language signs. The results demonstrate that the perception of iconicity is intimately related to language-specific experience. Discovering the full ramifications of iconicity for the structure and processing of signed languages requires operationalizing this construct in a manner that is sensitive to language experience.
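As a rough illustration of the kind of comparison such a rating study involves (a sketch with invented signs and ratings, not the authors' materials or analysis), the snippet below computes mean iconicity ratings separately for signs in a rater's own language and signs in the foreign language.

```python
# Hypothetical sketch: compare iconicity ratings of signs by raters whose
# language does or does not match the rated sign. All data are invented.
from statistics import mean

# Each record: (sign gloss, rater's language, language of the rated sign, rating 1-7)
ratings = [
    ("TREE", "ASL", "ASL", 6), ("TREE", "ASL", "DGS", 4),
    ("TREE", "DGS", "DGS", 7), ("TREE", "DGS", "ASL", 5),
    ("HOUSE", "ASL", "ASL", 5), ("HOUSE", "ASL", "DGS", 3),
    ("HOUSE", "DGS", "DGS", 6), ("HOUSE", "DGS", "ASL", 4),
]

def mean_rating(own_language: bool) -> float:
    """Mean rating over trials where the rater's language matches
    (or does not match) the language of the rated sign."""
    vals = [r for (_, rater, lang, r) in ratings if (rater == lang) == own_language]
    return mean(vals)

print("own-language signs:    ", mean_rating(own_language=True))
print("foreign-language signs:", mean_rating(own_language=False))
```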

Studies of the history of sign languages (SL) concentrate on the nineteenth century, when schooling became widespread in the West and the education of deaf people was developing. The various specialized institutions, which brought young deaf people together, fostered the development of sign languages. The questioning of these institutions, likened to ghettos that do not prepare for social integration and do not fulfill their expected educational mission, amplified the questioning of sign languages themselves, targeted from the outset by the proponents of oral methods. In turn, these oral methods, which prevailed from the end of the nineteenth century to the 1980s, were accused of preventing deaf people from naturally acquiring a language, of turning them into illiterates struggling with social integration, and of feeding their distress. The recurring echo, more than one hundred and fifty years apart, of an unending debate between sign language and oralism, between diversity of teaching methods and exclusivism, between specialized institutes and integration-inclusion, shows that the subject is more complex than a focus on any one of these parameters would suggest. A diversity and a flexibility claimed as early as the nineteenth century.

This paper presents a usage-based, Cognitive Grammar analysis of Place as a symbolic structure in signed languages. We suggest that many signs are better viewed as constructions in which schematic or specific formal properties are extracted from usage events alongside specific or schematic meaning. We argue that pointing signs are complex constructions composed of a pointing device and a Place, each of which are symbolic structures having form and meaning. We extend our analysis to antecedent-anaphora constructions and directional verb constructions. Finally, we discuss how the usage-based approach suggests a new way of understanding the relationship between language and gesture.

This essay considers the acquisition of sign languages as first languages. Most deaf children are born to hearing parents, but a minority have deaf parents. Deaf children of deaf parents receive early access to a conventional sign language. The time course of acquisition in these children is compared to the developmental milestones in children learning spoken languages. The two language modalities, the oral-aural modality of speech and the visual-gestural modality of sign, place differing constraints on languages and offer differing resources to languages. Possible modality effects on first-language acquisition are considered. Historically, many deaf infants born to hearing parents have had little access to a conventional language. However, these children sometimes elaborate "home sign" systems. Lastly, the role of early experience in language acquisition is considered. Deaf children of hearing parents are immersed in a first language at varying ages, enabling a test of the critical-period hypothesis.

This paper presents a study of modality in Iranian Sign Language (ZEI) from a cognitive perspective, aimed at analyzing two linguistic channels: facial and manual. While facial markers and their grammatical functions have been studied in some sign languages, we have few detailed analyses of the facial channel in comparison with the manual channel in conveying modal concepts. This study focuses on the interaction between manual and facial markers. A description of manual modal signs is offered. Three facial markers and their modality values are also examined: squinted eyes, brow furrow, and downward movement of lip corners (horseshoe mouth). In addition to offering this first descriptive analysis of modality in ZEI, this paper also applies the Cognitive Grammar model of modality, the Control Cycle, and the Reality Model, classifying modals into two kinds, effective and epistemic. It is suggested that effective control, including effective modality, tends to be expressed on the hands, while facial markers play an important role in marking epistemic assessment, one manifestation of which is epistemic modality. ZEI, like some other sign languages, exhibits an asymmetry between the number of manual signs and facial markers expressing epistemic modality: while the face can be active in the expression of effective modality, it is commonly the only means of expressing epistemic modality. By positing an epistemic core in effective modality, Cognitive Grammar provides a theoretical basis for these findings.

A receptive, multiple-choice test of ASL synonyms was administered to Deaf children in order to determine both their vocabulary development and the metalinguistic skills necessary for them to identify synonyms. A total of 572 Deaf children who were 4;0–18;0 years of age were tested: 449 Deaf children of hearing parents (DCHP) and 123 Deaf children of Deaf parents (DCDP). The performance of both groups improved with age, with DCDP scoring higher than DCHP from 8–9 years old and up. An error analysis showed a decrease of phonological foil choices with increasing age in both groups. Learners in both groups relied more on semantic knowledge and less on phonological knowledge for this semantic task as they became older, which is the same pattern observed for typically developing hearing children acquiring a spoken language. This indicates that DCHP and DCDP resemble hearing children in the strategies they use to identify synonyms. In addition, DCHP follow the same developmental trajectory as DCDP but are delayed, which is consistent with the less than ideal levels of language input they receive.
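To make the error analysis concrete, here is a minimal sketch (with invented trial records, not the study's data) of how one might tally phonological versus semantic foil choices within age bands and check whether the proportion of phonological errors drops with age.

```python
# Hypothetical sketch of the foil-choice error analysis.
# Each incorrect response records the child's age in years and the type of
# foil chosen ("phonological" or "semantic"). Data are invented.
from collections import Counter

errors = [
    (5, "phonological"), (5, "phonological"), (6, "semantic"),
    (9, "phonological"), (10, "semantic"), (11, "semantic"),
    (14, "semantic"), (15, "semantic"), (16, "phonological"),
]

def foil_proportions(min_age: int, max_age: int) -> dict:
    """Proportion of each foil type among errors made within an age band."""
    band = [foil for age, foil in errors if min_age <= age <= max_age]
    counts = Counter(band)
    total = sum(counts.values())
    return {foil: n / total for foil, n in counts.items()}

for band in [(4, 8), (9, 12), (13, 18)]:
    print(band, foil_proportions(*band))
```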

Research on hearing bilinguals has established that bilinguals never fully suppress the non-target language during lexical processing (Dijkstra & Van Heuven, 2002; Marian & Spivey, 2003; Van Hell & Dijkstra, 2002). Of various between-language factors that may influence word recognition in bilinguals, cognate status appears to account for the most variability (Lemhöfer et al., 2008). But suppose a bilingual knows two languages that do not share cognates? Current theorizing would predict that in the absence of cognates, lexical processing should not be influenced by the non-target language.
We were able to test this hypothesis by investigating effects of sign language knowledge on written word recognition. In spite of a lack of cognates in American Sign Language (ASL) and English, cross-language activation effects were recently documented in deaf (Morford et al., 2011) and hearing (Shook et al., 2012) ASL-English bilinguals. This panel explores the nature of cross-language influences in different populations of deaf and hearing bilinguals who are fluent in a signed language.
The first paper in our panel clarifies the basic finding of cross-language activation effects in deaf ASL-English bilinguals, and extends them to two new populations of signing bilinguals: deaf ASL-dominant bilinguals and hearing English-dominant bilinguals. These effects were found using a monolingual English task in which participants decided whether two words were semantically related. Unbeknownst to participants, half of the stimuli had phonologically related translation equivalents in ASL, and half had unrelated translation equivalents. Because the task does not require translating the stimuli into ASL, effects of the ASL manipulation are a strong indication that bilinguals access the ASL translations during English word recognition.
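The logic of this design can be pictured with a minimal sketch (hypothetical items and reaction times, not the study's materials): English word pairs are crossed by whether their ASL translations are phonologically related, and responses are compared across that hidden factor.

```python
# Hypothetical sketch of the covert cross-language manipulation.
# Each item: an English word pair, whether the pair is semantically related
# (the overt task), whether the ASL translations are phonologically related
# (the covert manipulation), and a made-up mean reaction time.
from statistics import mean

items = [
    # (word1, word2, semantically_related, asl_phon_related, mean_rt_ms)
    ("paper",  "write",  True,  True,  612),
    ("movie",  "watch",  True,  False, 655),
    ("apple",  "chair",  False, True,  731),
    ("candle", "river",  False, False, 704),
]

def mean_rt(asl_related: bool, sem_related: bool) -> float:
    rts = [rt for (_, _, sem, asl, rt) in items
           if asl == asl_related and sem == sem_related]
    return mean(rts)

# Any RT difference driven by the ASL factor, which is invisible in the
# English-only task, points to activation of ASL translations.
for sem in (True, False):
    diff = mean_rt(True, sem) - mean_rt(False, sem)
    print(f"semantically related={sem}: ASL-related minus unrelated = {diff:.0f} ms")
```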
The second paper in our panel reports the results of a study that investigates whether deaf bilinguals in Germany also exhibit cross-language activation effects. The study modified the semantic judgment task for use with deaf German Sign Language (DGS)-German bilinguals. Results indicate that DGS-German bilinguals activate DGS signs during German word recognition. Implications of these results for reading development in deaf German bilinguals are discussed.
The third paper in our panel explores the time course of cross-language activation in deaf ASL-English bilinguals. When deaf bilinguals see a written word, does activation spread directly to ASL phonological forms, or are ASL forms only activated after the semantics of the English word are activated? The paradigm used by Morford et al. (2011) included a 1 second stimulus onset asynchrony (SOA), allowing ample time for activation from the English word to spread to semantics, and then from semantics to ASL phonological forms. We present results from a replication study in which SOA was manipulated such that participants had a 750 ms SOA in one condition, but only a 250 ms SOA in a second condition. We replicated the cross-language activation effect at both SOAs, strongly indicating that activation spreads directly from English words to ASL phonological forms.
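The SOA manipulation amounts to a simple trial loop in which the prime is shown and the target appears either 250 ms or 750 ms after prime onset. The sketch below is a schematic skeleton with a placeholder display routine, not the experiment software used in the study.

```python
# Schematic sketch of a prime-target trial with a variable SOA.
# show_word() stands in for a real display call in presentation software;
# timings and items are illustrative only.
import time

def show_word(word: str) -> None:
    # Placeholder: a real experiment would draw the word on screen.
    print(f"display: {word}")

def run_trial(prime: str, target: str, soa_ms: int) -> None:
    """Present the prime, wait until the stimulus onset asynchrony has
    elapsed, then present the target."""
    show_word(prime)
    time.sleep(soa_ms / 1000.0)   # SOA: prime onset to target onset
    show_word(target)

# One condition uses a 750 ms SOA, the other only 250 ms.
run_trial("paper", "write", soa_ms=750)
run_trial("movie", "watch", soa_ms=250)
```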

In humans the two cerebral hemispheres of the brain are functionally specialized with the left hemisphere predominantly mediating language skills. The basis of this lateralization has been proposed to be differential localization of the linguistic, the motoric, and the symbolic properties of language. To distinguish among these possibilities, lateralization of spoken language, signed language, and nonlinguistic gesture have been compared in deaf and hearing individuals. This analysis, plus additional clinical findings, support a linguistic basis of left hemisphere specialization.

Infant vocal babbling has been assumed to be a speech-based phenomenon that reflects the maturation of the articulatory apparatus responsible for spoken language production. Manual babbling has now been reported to occur in deaf children exposed to signed languages from birth. The similarities between manual and vocal babbling suggest that babbling is a product of an amodal, brain-based language capacity under maturational control, in which phonetic and syllabic units are produced by the infant as a first step toward building a mature linguistic system. Contrary to prevailing accounts of the neurological basis of babbling in language ontogeny, the speech modality is not critical in babbling. Rather, babbling is tied to the abstract linguistic structure of language and to an expressive capacity capable of processing different types of signals (signed or spoken).

The article presents the operating principles of two multimedia programs, SITex and SITur, which incorporate the Polish Sign Language translator Thetos, a system that translates texts into gestures using animation techniques. The work was carried out as part of a research project in the years 2008–2010. The article outlines the specific character of the programs' intended audience, the community of culturally deaf people for whom sign language is the primary means of communication. Based on a review of the literature as well as the results of the authors' own research, the article presents factors limiting the participation of deaf people in tourism, along with examples of principles and good practices for adapting tourist services to the needs of sign language users. Keywords: deaf, deaf people, culturally deaf people, tourism for people with disabilities, tourism for people with special needs, multimedia tourist information system, sign language, Polish Sign Language translator Thetos.

Introduction The CEFR offers a framework for language teaching, learning and assessment for L2 learners. Importantly, the CEFR draws on a learner's communicative language competence rather than linguistic competence (e.g. vocabulary, grammar). As such, the implementation of the CEFR in our four-year bachelor's program Teacher of Sign Language of the Netherlands (NGT) caused a shift in didactic approach from grammar-based to communication-centered. It has been acknowledged that didactic approaches associated with the CEFR are scarcely documented (Figueras, 2012), and their effectiveness on learner outcomes has not been investigated systematically. Moreover, for many languages the levels of the CEFR are not supported by empirical evidence from L2 learner data (Hulstijn, 2007). Purpose We will i) describe our communication-centered approach in detail and ii) present some preliminary findings on the effectiveness of this approach on students' outcomes. Method We followed four student cohorts...

Evidence for a Developmental Language Disorder (DLD) can surface in language processing/comprehension, language production, or a combination of both. While various studies have described cases of DLD in signing deaf children, there exist few detailed examples of deaf children who exhibit production issues in the absence of processing or comprehension challenges or motor deficits. We describe such a situation by detailing a case study of "Gregory", a deaf native signer of American Sign Language (ASL). We adopt a detailed case-study methodology for obtaining information from Gregory's family and school, which we combine with linguistic and non-linguistic data that we collected through one-on-one sessions with Gregory. The results provide evidence of persistent issues with language production (in particular, atypical articulation of some phonological aspects of signs), yet typical comprehension skills and unremarkable fine motor skills. We also provide a snapshot of Gregory's rich linguistic environment, which, we speculate, may serve to attenuate his production deficit. The results of this study have implications for the provision of language services for signing deaf children in schools and also for language therapists. We propose that language therapists who are fluent in signed language be trained to work with signing children.

Invited talk to be delivered at the Gallaudet University Spring 2016 Deaf History Lecture Series.

The complete list of author information appears at the end of the article. ABSTRACT: This study investigated the performance of 67 deaf students in the first three years of elementary school, users of Libras (Brazilian Sign Language), in carrying out addition and subtraction calculations, in order to identify whether the use of a language in the visuospatial modality affects the way such calculations are performed. The aim was to describe the main strategies used by the students and to examine frequently made errors, relating them to the strategies employed. The quantitative and qualitative analyses revealed academic progress with schooling and greater difficulty in solving subtraction problems. The study identified particular ways of solving calculations that can be attributed to the visuospatial modality of sign language: signed algorithms. These strategies stem from a way of representing, counting, and operating with numbers by deaf users of sign languages, and carry a distinctive character that sets them apart from the finger-counting procedures of hearing people.

This dissertation uses corpus data from ASL and Libras (Brazilian Sign Language) to investigate the distribution of a series of static and dynamic handshapes across the two languages. While traditional phonological frameworks argue handshape distribution to be a facet of well-formedness constraints and articulatory ease (Brentari, 1998), the data analyzed here suggest that the majority of handshapes cluster around schematic form-meaning mappings. Furthermore, these schematic mappings are shown to be motivated by both language-internal and language-external construals of formal articulatory properties and embodied experiential gestalts.
Usage-based approaches to phonology (Bybee, 2001) and cognitively oriented constructional approaches (Langacker, 1987) have recognized that phonology is not modular. Instead, phonology is expected to interact with all levels of grammar, including semantic association. In this dissertation I begin to develop a cognitive model of phonology which views phonological content as similar in kind to other constructional units of language. I argue that, because formal units of linguistic structure emerge from the extraction of commonalities across usage events, phonological form is not immune from an accumulation of semantic associations. Finally, I demonstrate that appealing to such approaches allows one to account for both idiosyncratic, unconventionalized mappings seen in creative language use, as well as motivation in highly conventionalized form-meaning associations.
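As a rough sketch of the kind of distributional analysis the dissertation describes (hypothetical annotation format and toy token lists, not the actual ASL or Libras corpora), the code below counts handshape tokens per language and reports their relative frequencies.

```python
# Hypothetical sketch: relative frequency of handshape types in two
# annotated corpora. The token lists are invented stand-ins for real
# corpus annotations.
from collections import Counter

corpora = {
    "ASL":    ["B", "1", "5", "B", "S", "1", "5", "5", "C", "B"],
    "Libras": ["B", "5", "5", "1", "L", "B", "5", "C", "C", "1"],
}

for language, tokens in corpora.items():
    counts = Counter(tokens)
    total = sum(counts.values())
    freqs = {hs: round(n / total, 2) for hs, n in counts.most_common()}
    print(language, freqs)
```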