A Computational Model of Infant Speech Development
Almost all theories of child speech development assume that an infant learns speech sounds by direct imitation, performing an acoustic match of adult output to his own speech. Some theories also postulate an innate link between perception and production. We present a computer model that requires no acoustic matching on the part of the infant and that treats speech production and perception as separate processes with no innate link. Instead, we propose that the infant initially explores his speech apparatus and reinforces his own actions on the basis of sensory salience, developing vocal motor schemes [1]. As his production develops, the infant begins to generate utterances that are sufficiently speechlike to provoke a linguistic response from his mother. Such interactions are particularly important because she is better qualified than he is to judge the quality of his speech. Her response to his vocal output is beneficial in several ways. Because she is an experienced speaker, her mature perceptual system can evaluate the infant's output within the phonological system of the ambient language L1. Simply generating a salient response will tend to encourage the infant's production of a given utterance. More significantly, during imitative exchanges in which the mother reformulates the infant's speech, the infant can learn equivalence relations between his motor activity and his mother's acoustic output using simple associative mechanisms, and can thus solve the correspondence problem. Note that the infant does not learn equivalence relations between his own acoustic output and his mother's on the basis of acoustic similarity: any similarity-based matching need only be performed by the mother.
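The abstract describes two distinct learning mechanisms: salience-driven reinforcement of vocal motor schemes, and associative learning between the infant's motor activity and the mother's reformulations. The sketch below illustrates how these two mechanisms might be wired together in a minimal simulation. It is an assumption-laden illustration, not the authors' implementation: the class name `InfantModel`, the `salience_of` callback, and the tabular association store are all hypothetical, introduced here purely to make the division of labour concrete.

```python
import random
from collections import defaultdict

# Hypothetical sketch of the model described in the abstract.
# All names and parameters here are illustrative assumptions,
# not the paper's actual implementation.

class InfantModel:
    def __init__(self, motor_space, salience_threshold=0.5):
        self.motor_space = list(motor_space)        # candidate articulatory gestures
        self.scheme_strength = defaultdict(float)   # vocal motor schemes [1]
        self.associations = defaultdict(lambda: defaultdict(float))
        self.salience_threshold = salience_threshold

    def explore(self, salience_of):
        """Stage 1: babble and reinforce actions by sensory salience alone.
        Note that no acoustic matching against adult speech is involved."""
        gesture = random.choice(self.motor_space)
        if salience_of(gesture) > self.salience_threshold:
            self.scheme_strength[gesture] += 1.0    # scheme is reinforced
        return gesture

    def hear_reformulation(self, own_gesture, mothers_sound):
        """Stage 2: during an imitative exchange, associate the infant's own
        motor activity with the mother's acoustic reformulation. The
        similarity judgement is made by the mother, not the infant."""
        self.associations[mothers_sound][own_gesture] += 1.0

    def produce(self, heard_sound):
        """Solve the correspondence problem: retrieve the motor scheme most
        strongly associated with an adult sound heard later."""
        candidates = self.associations[heard_sound]
        return max(candidates, key=candidates.get) if candidates else None
```

The key design point the sketch tries to capture is the one the abstract emphasises: `explore` never compares the infant's acoustics to adult speech, and `hear_reformulation` records only a co-occurrence between the infant's own gesture and the mother's output. The infant's side of the loop is purely associative; any acoustic similarity judgement lives outside the model, in the mother.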