The syllable's role in speech segmentation
Related papers
Role of the syllable in the processing of spoken English: Evidence from a nonword comparison task
1995
Previous research using monitoring tasks suggests that syllables do not play a role in the initial processing of speech by English listeners. The role of syllables in a different task, one involving the speeded comparison of 2 nonwords, was investigated. In 2 experiments, responses to nonword pairs that shared a complete syllable were significantly faster than responses to pairs that shared only part of a syllable when the shared unit was at the beginning or in the middle of the nonwords. Results were mixed when the shared unit was at the end of the nonwords, possibly reflecting a confounding effect of rhyme. Findings suggest that syllabified representations of the nonwords may be used in a comparison task, even in English. Results are interpreted relative to the different demands of the nonword comparison and monitoring tasks.
Syllable processing in English
We describe a reaction time study in which listeners detected word or nonword syllable targets (e.g. zoo, trel) in sequences consisting of the target plus a consonant or syllable residue (trelsh, trelshek). The pattern of responses differed from an earlier word-spotting study with the same material, in which words were always harder to find if only a consonant residue remained. The earlier results should thus be viewed not in terms of syllabic parsing, but in terms of a universal constraint on speech segmentation: words which are accidentally present in spoken input (e.g. sell in self) can be rejected when they leave a residue of the input which could not itself be a word.
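For concreteness, here is a minimal Python sketch of the viability idea described in this abstract. It is my own simplification, not the authors' model: residues are rough orthographic strings rather than phonemic transcriptions, and "could be a word" is approximated as "contains a vowel".

```python
# Minimal sketch, assuming orthographic forms and a vowel-based viability test.
# The example strings are illustrative, not the authors' experimental materials.

VOWELS = set("aeiou")

def residue_is_viable(residue: str) -> bool:
    """Treat a residue as a possible word only if it contains a vowel."""
    return any(ch in VOWELS for ch in residue)

def spotting_is_hard(carrier: str, target: str) -> bool:
    """True if every residue left after removing the target lacks a vowel."""
    if target not in carrier:
        return False
    start = carrier.index(target)
    residues = [r for r in (carrier[:start], carrier[start + len(target):]) if r]
    return bool(residues) and not any(residue_is_viable(r) for r in residues)

if __name__ == "__main__":
    print(spotting_is_hard("trelsh", "trel"))    # True: residue "sh" is not a possible word
    print(spotting_is_hard("trelshek", "trel"))  # False: residue "shek" could be a word
    print(spotting_is_hard("self", "sel"))       # True: residue "f" is not a possible word
```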
The Role of the Syllable in Lexical Segmentation in French: Word-Spotting Data
Brain and Language, 2002
Three word-spotting experiments assessed the role of syllable onsets and offsets in lexical segmentation. Participants detected CVC words embedded initially or finally in bisyllabic nonwords with aligned (CVC.CVC) or misaligned (CV.CCVC) syllabic structure. A misalignment between word and syllable onsets (Experiment 1) produced a greater perceptual cost than a misalignment between word and syllable offsets (Experiments 2 and 3). These results suggest that listeners rely on syllable onsets to locate the beginning of words. The implications for theories of lexical access in continuous speech are discussed.
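The alignment manipulation can be made concrete with a small sketch. The carrier items below are hypothetical, and marking syllable boundaries with a dot is only a notational convenience, not the paper's method.

```python
# Sketch: given a carrier nonword with dots marking syllable boundaries, check
# whether an embedded word's onset and offset coincide with syllable edges.

def boundary_positions(syllabified: str) -> set[int]:
    """Positions in the unsyllabified string that fall at syllable edges."""
    positions, index = {0}, 0
    for ch in syllabified:
        if ch == ".":
            positions.add(index)
        else:
            index += 1
    positions.add(index)  # the end of the string is also an edge
    return positions

def alignment(syllabified_carrier: str, word: str) -> tuple[bool, bool]:
    """Return (onset_aligned, offset_aligned) for the embedded word."""
    flat = syllabified_carrier.replace(".", "")
    start = flat.index(word)
    edges = boundary_positions(syllabified_carrier)
    return (start in edges, start + len(word) in edges)

if __name__ == "__main__":
    # Hypothetical CVC word "tag" embedded initially:
    print(alignment("tag.lu", "tag"))  # (True, True): CVC.CV, onset and offset aligned
    print(alignment("ta.glu", "tag"))  # (True, False): CV.CCV, offset misaligned
```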
Units of speech perception: Phoneme and syllable
Journal of Verbal Learning and Verbal Behavior, 1975
Two detection experiments were conducted with short lists of synthetic speech stimuli in which phoneme targets were compared to syllable targets. Unlike previous experiments, heterogeneous lists of syllables and phonemes were used to remove the possible bias created by homogeneous lists. In Experiment I, targets that matched the response items in linguistic level were recognized faster than those that mismatched, whether the targets were syllables or phonemes. In Experiment II, where all targets and response items matched in level, phonemes were recognized faster than syllables when the phonemes were relatively easy to identify, but the reverse held when the phonemes were harder to identify. These results suggest that phonemes and syllables are equally basic to speech perception.
Boundaries versus Onsets in Syllabic Segmentation
Journal of Memory and Language, 2001
This study investigated the explicit syllabification of CVCV words in French. In a first syllable-reversal experiment, most responses corresponded to the expected canonical CV.CV segmentation, but a small proportion included the intervocalic consonant in both the first and second syllables, a result previously interpreted for English as indicating ambisyllabicity. Two further partial-repetition experiments showed that listeners systematically include the consonant in the onset of the second syllable, but also often include it in the offset of the first syllable. In addition, the assignment of the intervocalic consonant to the first and second syllables was differentially sensitive to the sonority of the consonant and to its spelling. We argue that the findings are inconsistent with the traditionally held boundary conception and instead support the view that distinct processes are involved in locating the onsets and the offsets of syllables. Onset determination is both more reliable and more dominant. Finally, we propose that syllable onsets serve as alignment points for the lexical search process in continuous spoken word recognition.
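A rough sketch of the onset-alignment proposal, as I read it: lexical lookups are launched only at positions aligned with syllable onsets, not at every segment position. The toy lexicon and input below are invented for illustration and are not the authors' materials.

```python
# Sketch: restrict lexical search to syllable-onset positions in the input.
# Syllable onsets are marked with dots purely for notational convenience.

LEXICON = {"bala", "lade", "alad"}  # hypothetical mini-lexicon

def candidates_at_onsets(syllabified_input: str, lexicon: set[str]) -> list[str]:
    """Return lexical candidates whose onsets align with syllable onsets."""
    flat = syllabified_input.replace(".", "")
    onsets, index = [0], 0
    for ch in syllabified_input:
        if ch == ".":
            onsets.append(index)
        else:
            index += 1
    found = []
    for start in onsets:
        for word in sorted(lexicon):
            if flat.startswith(word, start):
                found.append(word)
    return found

if __name__ == "__main__":
    # "bala" and "lade" begin at syllable onsets and are considered;
    # "alad" begins mid-syllable (position 1) and is never looked up.
    print(candidates_at_onsets("ba.la.de", LEXICON))  # ['bala', 'lade']
```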
The Roles of Word Stress and Vowel Harmony in Speech Segmentation
Journal of Memory and Language, 1998
Three experiments investigated the role of word stress and vowel harmony in speech segmentation. Finnish has fixed word stress on the initial syllable, and vowels from the front and back harmony sets cannot co-occur within a word. In Experiment 1, we replicated the results of Suomi, McQueen, and Cutler (1997) showing that Finns use a mismatch in vowel harmony as a word boundary cue when the target-initial syllable is unstressed. Listeners found it easier to detect words such as HYmy in PUhymy (harmony mismatch) than in PYhymy (no harmony mismatch). In Experiment 2, words had stressed target-initial syllables (HYmy as in pyHYmy or puHYmy). Reaction times were now faster and the vowel harmony effect was greatly reduced. In Experiment 3, Finnish, Dutch, and French listeners learned to segment an artificial language. Performance was best when the phonological properties of the artificial language matched those of the native language: Finns profited, as in the previous experiments, from vowel harmony and word-initial stress; Dutch listeners profited from word-initial stress; and French listeners profited from neither vowel harmony nor word-initial stress. Vowel disharmony and word-initial stress are thus language-specific cues to word boundaries.
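The harmony-mismatch cue lends itself to a small illustration. The sketch below is my own simplification, not the authors' model: syllables are classified by their vowels, and a boundary is flagged wherever a front-harmony syllable abuts a back-harmony syllable; the vowel sets follow standard descriptions of Finnish.

```python
# Sketch: flag a likely word boundary at a front/back vowel-harmony mismatch
# between adjacent syllables. Neutral vowels (e, i) carry no harmony value.

FRONT = set("äöy")
BACK = set("aou")

def vowel_class(syllable: str) -> str | None:
    """Classify a syllable as 'front', 'back', or None (only neutral vowels)."""
    if any(v in FRONT for v in syllable):
        return "front"
    if any(v in BACK for v in syllable):
        return "back"
    return None

def boundary_cues(syllables: list[str]) -> list[int]:
    """Indices between syllables where a harmony mismatch suggests a boundary."""
    cues = []
    for i in range(len(syllables) - 1):
        left, right = vowel_class(syllables[i]), vowel_class(syllables[i + 1])
        if left and right and left != right:
            cues.append(i + 1)
    return cues

if __name__ == "__main__":
    # From the abstract: "hymy" is easier to find in "puhymy" (back "pu" followed
    # by front "hy": a mismatch) than in "pyhymy" (all front vowels: no cue).
    print(boundary_cues(["pu", "hy", "my"]))  # [1]: cue before "hy"
    print(boundary_cues(["py", "hy", "my"]))  # []: no cue
```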
Cues to speech segmentation: Evidence from juncture misperceptions and word spotting
1996
Understanding spoken language requires that listeners segment a spoken utterance into words or into some smaller unit from which the lexicon can be accessed. A major difficulty in speech segmentation is that speakers do not provide stable acoustic cues to the boundaries between words or segments, so it is unclear how a lexical access attempt should be initiated when there is no reliable cue about where to start. Several decades of speech research have not yet led to a widely accepted solution to the speech segmentation problem. So far, three proposals of direct relevance have appeared in the literature. The first is that the continuous speech stream is categorized into discrete segments which then mediate between the acoustic signal and the lexicon. The second is that an explicit mechanism targets locations in the speech stream where word boundaries are likely to occur. The third is that word segmentation is a by-product of lexical competition. In the present study, these alternatives are considered.
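The third proposal, segmentation as a by-product of lexical competition, can be illustrated with a toy parser in which candidate words compete to cover the input and the boundaries simply fall out of the winning parse. The mini-lexicon, the example string, and the preference for fewer, longer words are assumptions for illustration only, not a model from this work.

```python
# Toy sketch: exhaustively parse an unsegmented input into lexical words and
# let a crude competition rule (prefer fewer, longer words) pick the winner.

from functools import lru_cache

LEXICON = {"ship", "in", "inquiry", "quiry", "she", "sh"}  # hypothetical mini-lexicon

def best_parse(utterance: str) -> list[str] | None:
    """Return one exhaustive parse of the utterance into lexical words, if any."""

    @lru_cache(maxsize=None)
    def parse(start: int) -> tuple[str, ...] | None:
        if start == len(utterance):
            return ()
        best = None
        for end in range(start + 1, len(utterance) + 1):
            word = utterance[start:end]
            if word in LEXICON:
                rest = parse(end)
                if rest is not None:
                    candidate = (word,) + rest
                    if best is None or len(candidate) < len(best):
                        best = candidate
        return best

    result = parse(0)
    return list(result) if result is not None else None

if __name__ == "__main__":
    # The embedded parse ['ship', 'in', 'quiry'] loses to the shorter parse
    # ['ship', 'inquiry'], so the word boundary emerges without any explicit
    # boundary-detection mechanism.
    print(best_parse("shipinquiry"))  # ['ship', 'inquiry']
```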