Infants' Long-Term Memory for the Sound Patterns of Words and Voices

The Distribution of Talker Variability Impacts Infants' Word Learning

Infants struggle to apply earlier-demonstrated sound-discrimination abilities to later word learning, attending to non-contrastive acoustic dimensions (e.g., Hay et al., 2015), and not always to contrastive dimensions (e.g., Stager & Werker, 1997). One hint about the nature of infants' difficulties comes from the observation that input from multiple talkers can improve word learning (Rost & McMurray, 2009). This may be because, when a single talker says both of the to-be-learned words, consistent talker's-voice characteristics make the acoustics of the two words more overlapping (Apfelbaum & McMurray, 2011). Here, we test that notion. We taught 14-month-old infants two similar-sounding words in the Switch habituation paradigm. The same amount of overall talker variability was present as in prior multiple-talker experiments, but male and female talkers said different words, creating a gender-word correlation. Under an acoustic-similarity account, correlated talker gender should help to separate words acoustically and facilitate learning. Instead, we found that correlated talker gender impaired learning of word-object pairings compared with uncorrelated talker gender, even when gender-word pairings were always maintained in test, casting doubt on one account of the beneficial effects of talker variability. We discuss several alternate potential explanations for this effect.

11-Month-Olds' Knowledge of How Familiar Words Sound

Developmental Science, 2005

During the first year of life, infants' perception of speech becomes tuned to the phonology of the native language, as revealed in laboratory discrimination and categorization tasks using syllable stimuli. However, the implications of these results for the development of the early vocabulary remain controversial, with some results suggesting that infants retain only vague, sketchy phonological representations of words. Five experiments using a preferential listening procedure tested Dutch 11-month-olds' responses to word, nonword and mispronounced-word stimuli. Infants listened longer to words than nonwords, but did not exhibit this response when words were mispronounced at onset or at offset. In addition, infants preferred correct pronunciations to onset mispronunciations. The results suggest that infants' encoding of familiar words includes substantial phonological detail.

The effect of talker variability on word recognition in preschool children

Developmental Psychology, 1997

In a series of experiments, the authors investigated the effects of talker variability on children's word recognition. In Experiment 1, when stimuli were presented in the clear, 3- and 5-year-olds were less accurate at identifying words spoken by multiple talkers than those spoken by a single talker when the multiple-talker list was presented first. In Experiment 2, when words were presented in noise, 3-, 4-, and 5-year-olds again performed worse in the multiple-talker condition than in the single-talker condition, this time regardless of order; processing multiple talkers became easier with age. Experiment 3 showed that both children and adults were slower to repeat words from multiple-talker lists than those from single-talker lists. More important, children (but not adults) matched acoustic properties of the stimuli (specifically, duration). These results provide important new information about the development of talker normalization in speech perception and spoken word recognition.

Effects of the acoustic properties of infant-directed speech on infant word recognition

The Journal of the Acoustical Society of America, 2010

A number of studies have examined the acoustic differences between infant-directed speech (IDS) and adult-directed speech, suggesting that the exaggerated acoustic properties of IDS might facilitate infants' language development. However, there has been little empirical investigation of the acoustic properties that infants use for word learning. The goal of this study was thus to examine how 19-month-olds' word recognition is affected by three acoustic properties of IDS: slow speaking rate, vowel hyper-articulation, and wide pitch range. Using the intermodal preferential looking procedure, infants were exposed to half of the test stimuli (e.g., Where's the book?) in typical IDS style. The other half of the stimuli were digitally altered to remove one of the three properties under investigation. After the target word (e.g., book) was spoken, infants' gaze toward target and distractor referents was measured frame by frame to examine the time course of word recognition. The results showed that slow speaking rate and vowel hyper-articulation significantly improved infants' ability to recognize words, whereas wide pitch range did not. These findings suggest that 19-month-olds' word recognition may be affected only by the linguistically relevant acoustic properties in IDS.

Who's Talking Now? Infants' Perception of Vowels With Infant Vocal Properties

Psychological Science, 2014

Little is known about infants' abilities to perceive and categorize their own speech sounds or vocalizations produced by other infants. In the present study, prebabbling infants were habituated to /i/ ("ee") or /a/ ("ah") vowels synthesized to simulate men, women, and children, and then were presented with new instances of the habituation vowel and a contrasting vowel on different trials, with all vowels simulating infant talkers. Infants showed greater recovery of interest to the contrasting vowel than to the habituation vowel, which demonstrates recognition of the habituation-vowel category when it was produced by an infant. A second experiment showed that encoding the vowel category and detecting the novel vowel required additional processing when infant vowels were included in the habituation set. Despite these added cognitive demands, infants demonstrated the ability to track vowel categories in a multitalker array that included infant talkers. These findings raise the possibility that young infants can categorize their own vocalizations, which has important implications for early vocal learning.

Resolving the (Apparent) Talker Recognition Paradox in Developmental Speech Perception

Infancy, 2019

The infant literature suggests that humans enter the world with impressive built-in talker processing abilities. For example, newborns prefer the sound of their mother's voice over the sound of another woman's voice, and well before their first birthday, infants tune in to language-specific speech cues for distinguishing between unfamiliar talkers. The early childhood literature, however, suggests that preschoolers are unable to learn to identify the voices of two unfamiliar talkers unless these voices are highly distinct from one another, and that adult-level talker recognition does not emerge until children near adolescence. How can we reconcile these apparently paradoxical messages conveyed by the infant and early childhood literatures? Here, we address this question by testing 16.5-month-old infants (N = 80) in three talker recognition experiments. Our results demonstrate that infants at this age have difficulty recognizing unfamiliar talkers, suggesting that talker recognition (associating voices with people) is mastered later in life than talker discrimination (telling voices apart). We conclude that methodological differences across the infant and early childhood literatures, rather than a true developmental discontinuity, account for the performance differences in talker processing between these two age groups. Related findings in other areas of developmental psychology are discussed.

Learning words’ sounds before learning how words sound: 9-Month-olds use distinct objects as cues to categorize speech information

Cognition, 2009

One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is in speech perception: specifically, learning the sound system of one's native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in wordlabeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

When do infants begin recognizing familiar words in sentences?

Journal of Child Language, 2014

This study compared the preference of 27 British English- and 26 Welsh-learning infants for nonwords featuring consonants that occur with equal frequency in the input but that are produced either with equal frequency (Welsh) or with differing frequency (British English) in infant vocalizations. For the English infants a significant difference in looking times was related to the extent of production of the nonword consonants. The Welsh infants, who showed no production preference for either consonant, exhibited no such influence of production patterns on their response to the nonwords. The results are consistent with a previous study that suggested that pre-linguistic babbling helps shape the processing of input speech, serving as an articulatory filter that selectively makes production patterns more salient in the input.