Asli Ozyurek - Academia.edu

Papers by Asli Ozyurek

Put on a Secure Face: Do Infant Facial Features Affect Women's Mental Representation of Men's Attachment Style?

Multimodality and the origin of a novel communication system in face-to-face interaction

Royal Society Open Science

Face-to-face communication is multimodal at its core: it consists of a combination of vocal and visual signalling. However, current evidence suggests that, in the absence of an established communication system, visual signalling, especially in the form of visible gesture, is a more powerful form of communication than vocalization and therefore likely to have played a primary role in the emergence of human language. This argument is based on experimental evidence of how vocal and visual modalities (i.e. gesture) are employed to communicate about familiar concepts when participants cannot use their existing languages. To investigate this further, we introduce an experiment where pairs of participants performed a referential communication task in which they described unfamiliar stimuli in order to reduce reliance on conventional signals. Visual and auditory stimuli were described in three conditions: using visible gestures only, using non-linguistic vocalizations only, and given the option to use both modalities.

Visual Context Enhanced: The Joint Contribution of Iconic Gestures and Visible Speech to Degraded Speech Comprehension

Journal of Speech, Language, and Hearing Research (JSLHR), 2017

This study investigated whether and to what extent iconic co-speech gestures contribute to information from visible speech to enhance degraded speech comprehension at different levels of noise-vocoding. Previous studies of the contributions of these 2 visual articulators to speech comprehension have only been performed separately. Twenty participants watched videos of an actress uttering an action verb and completed a free-recall task. The videos were presented in 3 speech conditions (2-band noise-vocoding, 6-band noise-vocoding, clear), 3 multimodal conditions (speech + lips blurred, speech + visible speech, speech + visible speech + gesture), and 2 visual-only conditions (visible speech, visible speech + gesture). Accuracy levels were higher when both visual articulators were present compared with 1 or none. The enhancement effects of (a) visible speech, (b) gestural information on top of visible speech, and (c) both visible speech and iconic gestures were larger in 6-band than 2-band noise-vocoding.
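For readers unfamiliar with the manipulation: noise-vocoding degrades speech by discarding fine spectral detail while preserving the amplitude envelopes of a small number of frequency bands, so intelligibility drops as the band count falls. The sketch below (Python with NumPy/SciPy; the band spacing, filter settings, and function name are illustrative assumptions, not the authors' procedure) shows the standard pipeline:

```python
# Minimal noise-vocoder sketch (illustrative only, not the study's materials).
# Speech is split into n logarithmically spaced frequency bands, the amplitude
# envelope of each band is extracted, and each envelope modulates noise
# filtered to the same band; summing the bands yields the vocoded signal.
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(speech, sr, n_bands, f_lo=100.0, f_hi=8000.0):
    """Return an n_bands noise-vocoded version of a mono `speech` array."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)  # band edges (assumed spacing)
    noise = np.random.randn(len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="band", fs=sr, output="sos")
        band = sosfilt(sos, speech)
        envelope = np.abs(hilbert(band))   # band's amplitude envelope
        carrier = sosfilt(sos, noise)      # noise restricted to the same band
        out += envelope * carrier
    return out / (np.max(np.abs(out)) + 1e-9)  # normalize to avoid clipping

# With 2 bands almost no spectral detail survives; 6 bands preserve enough
# envelope structure for partial intelligibility, as in the study's conditions.
```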

Linking language to the visual world: Neural correlates of comprehending verbal reference to objects through pointing and visual cues

Neuropsychologia, 2017

In everyday communication speakers often refer in speech and/or gesture to objects in their immediate environment, thereby shifting their addressee's attention to an intended referent. The neurobiological infrastructure involved in the comprehension of such basic multimodal communicative acts remains unclear. In an event-related fMRI study, we presented participants with pictures of a speaker and two objects while they concurrently listened to her speech. In each picture, one of the objects was singled out, either through the speaker's index-finger pointing gesture or through a visual cue that made the object perceptually more salient in the absence of gesture. A mismatch (compared to a match) between speech and the object singled out by the speaker's pointing gesture led to enhanced activation in left IFG and bilateral pMTG, showing the importance of these areas in conceptual matching between speech and referent. Moreover, a match (compared to a mismatch) between speech and the object made salient through a visual cue led to enhanced activation in the mentalizing system, arguably reflecting an attempt to converge on a jointly attended referent in the absence of pointing. These findings shed new light on the neurobiological underpinnings of the core communicative process of comprehending a speaker's multimodal referential act and stress the power of pointing as an important natural device to link speech to objects.

Type of Iconicity Matters in the Vocabulary Development of Signing Children

Developmental Psychology, 2016

Recent research on signed as well as spoken language shows that the iconic features of the target language might play a role in language development. Here, we ask further whether different types of iconic depictions modulate children's preferences for certain types of sign-referent links during vocabulary development in sign language. Results from a picture description task indicate that lexical signs with 2 possible variants are used in different proportions by deaf signers from different age groups. While preschool and school-age children favored variants representing actions associated with their referent (e.g., a writing hand for the sign PEN), adults preferred variants representing the perceptual features of those objects (e.g., an upward index finger representing a thin, elongated object for the sign PEN). Deaf parents interacting with their children, however, used action- and perceptual-based variants in equal proportion and favored action variants more than adults signing to other adults. We propose that when children are confronted with 2 variants for the same concept, they initially prefer action-based variants because these give them the opportunity to link a linguistic label to familiar schemas grounded in their action/motor experiences. Our results echo findings showing a bias for action-based depictions in the development of iconic co-speech gestures, suggesting a modality bias for such representations during development.

The Interplay between Joint Attention, Physical Proximity, and Pointing Gesture in Demonstrative Choice

Early language-specificity in Turkish children's caused motion event expressions in speech and gesture

Gestural viewpoint signals referent accessibility

Discourse Processes, 2013

Report on the Nijmegen Lectures 2004: Susan Goldin-Meadow, 'The Many Faces of Gesture'

The comprehension of gesture and speech

When gestures catch the eye: The influence of gaze direction on co-speech gesture comprehension in triadic communication

Processing of multi-modal semantic information: Insights from cross-linguistic comparisons and neurophysiological recordings

Event representations in signed languages

Gesture, language and brain

Representations of action, motion and location in sign space: A comparison of German (DGS) and Turkish (TID) Sign Language narratives

How spoken language shapes iconic gestures

Seeing and hearing meaning: Neural correlates for word versus picture integration into a sentence context

Getting to the Point: The Influence of Communicative Intent on the Kinematics of Pointing Gestures

Early language-specificity of children's event encoding in speech and gesture: evidence from caused motion in Turkish

http://dx.doi.org/10.1080/01690965.2013.824993, Mar 26, 2014

Turkish- and English-speaking children display sensitivity to perceptual context in referring expressions they produce in speech and gesture

Language and Cognitive Processes, 2011

Speakers choose a particular expression based on many factors, including availability of the referent in the perceptual context. We examined whether, when expressing referents, monolingual English- and Turkish-speaking children: (1) are sensitive to perceptual context, (2) express this sensitivity in language-specific ways, and (3) use co-speech gestures to specify referents that are underspecified. We also explored the mechanisms underlying children's sensitivity to perceptual context. Children described short vignettes to an experimenter under two conditions: The characters in the vignettes were present in the perceptual context (perceptual context); the characters were absent (no perceptual context). Children routinely used nouns in the no perceptual context condition, but shifted to pronouns (English-speaking children) or omitted arguments (Turkish-speaking children) in the perceptual context condition. Turkish-speaking children used underspecified referents more frequently than English-speaking children in the perceptual context condition; however, they compensated for the difference by using gesture to specify the forms. Gesture thus gives children learning structurally different languages a way to achieve comparable levels of specification while at the same time adhering to the referential expressions dictated by their language.
