The effects of delayed reinforcement and a response-produced auditory stimulus on the acquisition of operant behavior in rats
1994, The Psychological Record
The present experiment examined the effects of different delays of food delivery, with and without a response-produced auditory stimulus, on the acquisition of a spatially defined operant in rats. The operant was breaking a photoelectric beam located near the ceiling at the rear of the experimental chamber. In five groups of experimentally naive rats, the effects on photobeam-break responses of two different reinforcement delays (4 s and 10 s), with and without a response-produced auditory stimulus, were compared during eight 1-hr sessions. In one control group (0-s delay), an immediate (i.e., 0.25-s) reinforcement contingency was in effect; in another control group (no food), responses were measured in the absence of any reinforcement contingencies. Results showed that rates of acquisition and responding were higher with shorter reinforcement delays and when there was a response-produced auditory stimulus. These results extend previous findings showing that neither direct shaping nor immediate reinforcement is necessary for operant conditioning. However, the present results demonstrate that the speed and extent of conditioning depend on the temporal relation between the response and the reinforcer. The findings are discussed in terms of a conditioned reinforcement analysis of the stimuli produced by operant responses.

In his book, The Behavior of Organisms, B. F. Skinner (1938) described an experimental manipulation in which he compared the effects of 1-, 2-, 3-, and 4-s delays of reinforcement on the acquisition of lever pressing in rats. Although he reported that "the rates of acceleration are all comparable with those obtained with simultaneous reinforcement" (p. 73), the cumulative records showed that, with only one exception at the 4-s delay, the rates of acquisition at the 2-, 3-, and 4-s delays were retarded when compared with simultaneous reinforcement.
Skinner attributed these "slight irregularities" to procedural difficulties, namely, the problems inherent in what researchers today would describe as resetting versus nonresetting delays (see Wilkenfield, Nickel, Blakely, & Poling, 1992). Until recently, only a few experiments (e.g., Harker, 1956; Logan, 1952; Seward & Weldon, 1953) had investigated the effects of delayed reinforcement on discrete responding such as lever pressing. Although these experiments reportedly showed that even short delays can retard or prevent acquisition, recent researchers (e.g., Critchfield & Lattal, 1993; Lattal & Gleeson, 1990) have criticized these early studies for their vague descriptions of training procedures and for not controlling for the possibility of immediate conditioned reinforcement. Interestingly, systematic examination of the effects of delayed reinforcement on the acquisition of discrete responding had not been carried out until very recently (e.g., Critchfield & Lattal, 1993;