Consonants and vowels: different roles in early language acquisition
Related papers
Better Processing of Consonantal Over Vocalic Information in Word Learning at 16 Months of Age
Infancy, 2009
Previous research using the name-based categorization task has shown that 20-month-old infants can simultaneously learn two words that differ by only one consonantal feature, but fail to do so when the words differ by only one vocalic feature. This asymmetry was taken as evidence for the proposal that consonants are more important than vowels at the lexical level. This study explores this consonant-vowel asymmetry in 16-month-old infants, using an interactive word learning task. It shows that the pattern of the 16-month-olds is the same as that of the 20-month-olds: infants succeeded with one-feature consonantal contrasts (either place or voicing) but were at chance level with one-feature vocalic contrasts (either place or height). These results thus contribute to a growing body of evidence establishing, from early infancy to adulthood, that consonants and vowels have different roles in lexical acquisition and processing. Numerous facts support the proposal by Nespor, Peña, and Mehler (2003) of a different role for consonants (more involved at the lexical level) and vowels (more involved at the prosodic, syntactic, and rule-learning levels) in language acquisition and speech processing. This is particularly true for the proposed processing bias in favor of consonants over vowels at the lexical level (but see Toro, Bonatti, Nespor, & Mehler, 2008, for evidence of a vocalic bias in rule learning). For example, typological data show differences between these segments in terms …
Structural generalizations over consonants and vowels in 11-month-old infants
Cognition, 2010
Recent research has suggested that consonants and vowels serve different roles during language processing: statistical computations are preferentially made over consonants but not over vowels, whereas simple structural generalizations are easily made over vowels but not over consonants. Nevertheless, the origins of this asymmetry are unknown. Here we tested whether lifelong experience with language is necessary for vowels to become the preferred target for structural generalizations. We presented 11-month-old infants with a series of CVCVCV nonsense words in which all vowels were arranged according to an AAB rule (the first and second vowels were the same, while the third vowel was different). During the test, we presented infants with new words whose vowels either followed or violated the aforementioned rule. We found that infants readily generalized this rule when it was implemented over the vowels. However, when the same rule was implemented over the consonants, infants could not generalize it to new instances. These results parallel those found with adult participants and demonstrate that several years of experience learning a language are not necessary for functional asymmetries between consonants and vowels to appear.
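As a concrete rendering of the stimulus structure, here is a minimal check of the AAB vowel pattern described above; the function and example words are invented for illustration and are not the study's actual items.

```python
# Illustrative check of the AAB vowel rule in CVCVCV nonsense words
# (invented examples, not the study's stimuli).
VOWELS = set("aeiou")

def follows_aab(word: str) -> bool:
    """True if the word's three vowels match the AAB pattern."""
    v = [ch for ch in word if ch in VOWELS]
    return len(v) == 3 and v[0] == v[1] and v[0] != v[2]

print(follows_aab("dadano"))  # True:  a-a-o (first two match, third differs)
print(follows_aab("dalovu"))  # False: a-o-u (first two differ)
print(follows_aab("dododo"))  # False: o-o-o (third vowel does not differ)
```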
Semantics guide infants’ vowel learning: Computational and experimental evidence
Infant Behavior & Development, 2016
In their first year, infants' perceptual abilities zoom in on only those speech sound contrasts that are relevant for their language. Infants' lexicons do not yet contain sufficient minimal pairs to explain this phonetic categorization process. Therefore, researchers suggested a bottom-up learning mechanism: infants create categories aligned with the frequency distributions of sounds in their input. Recent evidence shows that this bottom-up mechanism may be complemented by the semantic context in which speech sounds occur, such as simultaneously present objects. To test this hypothesis, we investigated whether discrimination of a non-native vowel contrast improves when sounds from the contrast were paired consistently or randomly with two distinct visually presented objects, while the distribution of speech tokens suggested a single broad category. This was assessed in two ways: computationally, namely in a neural network simulation, and experimentally, namely in a group of 8-month-old infants. The neural network, trained with a large set of sound-meaning pairs, revealed that two categories emerge only if sounds are consistently paired with objects. A group of 49 real 8-month-old infants did not immediately show sensitivity to the pairing condition; a later test at 18 months with some of the same infants, however, showed that this sensitivity at 8 months interacted with their vocabulary size at 18 months. This interaction can be explained by the idea that infants with larger future vocabularies are more positively influenced by consistent training (and/or more negatively influenced by inconsistent training) than infants with smaller future vocabularies. This suggests that consistent pairing with distinct visual objects can help infants to discriminate speech sounds even when the auditory information does not signal a distinction. Together our results give computational as well as experimental support for the idea that semantic context plays a role in disambiguating phonetic auditory input.
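The neural network result lends itself to a compact illustration. The sketch below is not the authors' model; it is a minimal toy with invented distribution parameters, showing why consistent object pairing can pull two prototypes out of an acoustic distribution that looks like a single broad category, while random pairing cannot.

```python
# Minimal toy (not the paper's neural network): consistent vs. random
# object pairing over a unimodal-looking acoustic cue distribution.
# All names and parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Two hidden vowel variants whose cue (e.g., F1 in Hz) distributions overlap
# so heavily that the pooled distribution suggests one broad category.
tokens_a = rng.normal(loc=480.0, scale=60.0, size=200)
tokens_b = rng.normal(loc=540.0, scale=60.0, size=200)
tokens = np.concatenate([tokens_a, tokens_b])
variant = np.array([0] * 200 + [1] * 200)  # which variant produced each token

def prototype_separation(object_labels):
    """Distance between mean cue values of tokens grouped by paired object."""
    m0 = tokens[object_labels == 0].mean()
    m1 = tokens[object_labels == 1].mean()
    return abs(m0 - m1)

consistent = variant                      # each variant always gets "its" object
inconsistent = rng.permutation(variant)   # objects assigned at random

print(f"consistent pairing:   {prototype_separation(consistent):5.1f} Hz apart")
print(f"inconsistent pairing: {prototype_separation(inconsistent):5.1f} Hz apart")
# Consistent pairing separates the prototypes by roughly the true 60 Hz gap;
# random pairing collapses them to nearly identical values.
```

The point of the toy is that the category boundary is carried entirely by the cross-modal label: averaging within object groups recovers the two hidden variants only when the objects correlate with them, which is the intuition behind the semantic-context hypothesis tested here.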
Learning words' sounds before learning how words sound: 9-month-olds use distinct objects as cues to categorize speech information
Cognition, 2009
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities in the course of development to exemplify this problem is speech perception: specifically, learning the sound system of one's native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant number of words is acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues paired inconsistently with the two phonetic categories. At an age before memory for word labels has been demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.
Learning words and learning sounds: Advances in language development
British Journal of Psychology, 2016
Phonological development is sometimes seen as a process of learning sounds, or forming phonological categories, and then combining sounds to build words, with the evidence taken largely from studies demonstrating 'perceptual narrowing' in infant speech perception over the first year of life. In contrast, studies of early word production have long provided evidence that holistic word learning may precede the formation of phonological categories. In that account, children begin by matching their existing vocal patterns to adult words, with knowledge of the phonological system emerging from the network of related word forms. Here I review evidence from production and then consider how the implicit and explicit learning mechanisms assumed by the complementary memory systems model might be understood as reconciling the two approaches.
Infant-directed speech supports phonetic category learning in English and Japanese
Cognition, 2007
Across the first year of life, infants show decreased sensitivity to phonetic differences not used in the native language [Werker, J. F., & Tees, R. C. (1984). Cross-language speech perception: Evidence for perceptual reorganization during the first year of life. Infant Behavior and Development, 7, 49–63]. In an artificial language learning manipulation, Maye, Werker, and Gerken [Maye, J., Werker, J. F., & Gerken, L. (2002). Infant sensitivity to distributional information can affect phonetic discrimination. Cognition, 82(3), B101–B111] found that infants change their speech sound categories as a function of the distributional properties of the input. For such a distributional learning mechanism to be functional, however, it is essential that the input speech contain distributional cues to support such perceptual learning. To test this, we recorded Japanese and English mothers teaching words to their infants. Acoustic analyses revealed language-specific differences in the distributions of the cues used by mothers (or cues present in the input) to distinguish the vowels. The robust availability of these cues in maternal speech adds support to the hypothesis that distributional learning is an important mechanism whereby infants establish native-language phonetic categories.
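To make "distributional cues to support perceptual learning" concrete, the toy model below (my illustration, not the paper's acoustic analysis) fits Gaussian mixtures to a single cue dimension and asks how many categories the data justify; only bimodal input licenses two. The cue values and distribution parameters are invented for the example.

```python
# Toy distributional learning (illustrative, not the paper's analysis):
# a mixture-model learner posits two categories only when the input
# cue distribution is bimodal. Parameters below are invented.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Bimodal input: two well-separated VOT-like cue clusters (in ms).
bimodal = np.concatenate([rng.normal(15, 5, 300), rng.normal(60, 5, 300)])
# Unimodal input: one broad cluster spanning the same range.
unimodal = rng.normal(37.5, 15, 600)

for name, data in [("bimodal", bimodal), ("unimodal", unimodal)]:
    X = data.reshape(-1, 1)
    bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
           for k in (1, 2)}
    best = min(bic, key=bic.get)  # lower BIC = better-justified model
    print(f"{name} input: BIC favors {best} categor{'ies' if best == 2 else 'y'}")
# Bimodal input supports a two-category solution; unimodal input does not,
# which is why the cue distributions in maternal speech matter for this mechanism.
```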