Perceptual optimization of language: Evidence from American Sign Language

Cognition, 2022

Abstract

If language has evolved for communication, languages should be structured to maximize the efficiency of processing. What is efficient for communication in the visual-gestural modality differs from what is efficient in the auditory-oral modality, and we ask here whether sign languages have adapted to the affordances and constraints of the signed modality. During sign perception, perceivers look almost exclusively at the lower face, rarely looking down at the hands. This means that signs articulated far from the lower face must be perceived through peripheral vision, which has lower acuity than central vision. We tested the hypothesis that more predictable signs (high-frequency signs, signs with common handshapes) can be produced farther from the face because precise visual resolution is not necessary for their recognition. Using pose estimation algorithms, we examined the structure of over 2000 American Sign Language lexical signs to determine whether lexical frequency and handshape probability affect the position of the wrist in 2D space. We found that frequent signs with rare handshapes tended to occur closer to the signer's face than frequent signs with common handshapes, and that frequent signs were generally more likely to be articulated farther from the face than infrequent signs. Together, these results provide empirical support for anecdotal assertions that the phonological structure of sign language is shaped by the properties of the human visual and motor systems.
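The measurement step described in the abstract can be illustrated with a short sketch. The abstract does not name a specific pose estimation toolkit, so the example below assumes MediaPipe Pose and a hypothetical input frame `sign_frame.png`; it extracts normalized 2D wrist and nose landmarks from a single frame and computes their Euclidean distance as a rough proxy for how far the articulating hand is from the face. Aggregating such a distance over many signs is one way to test whether sign frequency and handshape probability predict articulation distance.

```python
# A minimal sketch, assuming MediaPipe Pose; the paper does not specify
# its pose estimation toolkit, and "sign_frame.png" is a hypothetical input.
import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose

def wrist_to_face_distance(image_path: str) -> float | None:
    """Return the normalized 2D distance between the right wrist and the nose."""
    image = cv2.imread(image_path)
    if image is None:
        raise FileNotFoundError(image_path)
    with mp_pose.Pose(static_image_mode=True) as pose:
        # MediaPipe expects RGB input; OpenCV loads images as BGR.
        results = pose.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks is None:
        return None  # no body detected in this frame
    lm = results.pose_landmarks.landmark
    wrist = lm[mp_pose.PoseLandmark.RIGHT_WRIST]
    nose = lm[mp_pose.PoseLandmark.NOSE]
    # Landmark coordinates are normalized to [0, 1] in image space (2D: x, y).
    return ((wrist.x - nose.x) ** 2 + (wrist.y - nose.y) ** 2) ** 0.5

if __name__ == "__main__":
    print(wrist_to_face_distance("sign_frame.png"))
```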
