Intertrial-interval effects on sensitivity (A') and response bias (B") in a temporal discrimination by rats
Related papers
Biasing temporal judgments in rats, pigeons, and humans
Models of interval timing typically include a response threshold to account for temporal production. The present study sought to evaluate the dependent concurrent fixed-interval fixed-interval schedule of reinforcement as a tool for selectively isolating the response threshold in rats, pigeons, and humans. In this task, reinforcement is available either at one location after a short delay or at another location after a longer delay. Because the reinforced location is not signaled, subjects normally respond on the first location and, if reinforcement is not delivered, then switch to the second location. The latency to switch between locations served as the primary dependent measure. After training rats, pigeons, and humans with equal reinforcement magnitudes at the short- and long-delay locations, the magnitude of reinforcement was increased threefold at the long-delay location. Consistent with model predictions, this biasing procedure decreased estimates of the response threshold in rats and pigeons, but it also reduced temporal control in these species and increased response-threshold estimates in humans. Human and pigeon performance also suggested a magnitude-induced increase in the speed of the internal clock. Collectively, these results suggest that differences in reinforcement magnitude between response alternatives modulate the response threshold, but not selectively, and they may provide guidance for better isolating response-threshold effects in humans.
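The abstract above describes the switch procedure only verbally; as an illustrative aside (not the authors' model), the sketch below simulates switch latencies under a simple scalar-timing rule in which the subject leaves the short-FI location once a noisy threshold placed between the two intervals is exceeded, and a larger reward at the long-delay location is modeled simply as a lowered threshold. All function names and parameter values are hypothetical.

    # Illustrative sketch only, not the authors' model: switch latencies under a
    # simple scalar-timing rule. The subject leaves the short-FI location once a
    # noisy threshold placed between the two intervals is exceeded; a larger
    # reward at the long-delay location is modeled as a lower threshold.
    # All names and values are hypothetical.
    import random

    def simulate_switch_latencies(fi_short=10.0, fi_long=30.0, threshold=0.6,
                                  cv=0.2, n_trials=1000, seed=1):
        """Return simulated switch latencies (s) for one condition.

        threshold : mean switch point, as a fraction of the way from fi_short
                    to fi_long (lower = earlier switching).
        cv        : coefficient of variation of the scalar timing noise.
        """
        rng = random.Random(seed)
        mean_switch = fi_short + threshold * (fi_long - fi_short)
        return [max(0.0, rng.gauss(mean_switch, cv * mean_switch))
                for _ in range(n_trials)]

    baseline = simulate_switch_latencies(threshold=0.6)   # equal reward magnitudes
    biased = simulate_switch_latencies(threshold=0.4)     # larger long-delay reward
    print(sum(baseline) / len(baseline), sum(biased) / len(biased))

Under these assumptions the biased condition yields shorter mean switch latencies, which is the direction of the threshold effect reported for rats and pigeons.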
A comparison of empirical and theoretical explanations of temporal discrimination
Journal of Experimental Psychology: Animal Behavior Processes, 2009
The empirical goals were to describe the behavior of rats trained on multiple temporal discriminations and to use these descriptions to predict behavior observed under new training conditions. The theoretical goals were to fit a quantitative theory to behavior from one training condition, estimate parameters for the intervening perception, memory, and decision processes, and use these parameters to predict behavior observed under new conditions. Twenty-four rats were trained on a multiple peak-interval procedure with two stimuli that were presented individually (Stimulus A and Stimulus B) or in compound (Compound AB); either different responses (Experiment 1) or the same response (Experiment 2) were reinforced during the presentations of Stimulus A, Stimulus B, and Compound AB. The patterns of correct and stimulus-error responses during Stimulus A and Stimulus B (Experiment 1) were used as elements that, with summation rules, predicted behavior under new conditions (Compound AB in Experiment 1, and Stimulus A, Stimulus B, and Compound AB in Experiment 2). A comparison of the success of the empirical and theoretical goals supported the use of a quantitative theory of behavior to explain the data.

The empirical goal was to describe the behavior of rats trained on multiple temporal discriminations (Experiment 1) and to use this description to predict behavior observed under new training conditions (Experiments 1 and 2). The theoretical goal was to fit a quantitative theory to behavior from one training condition, estimate parameters for the intervening perception, memory, and decision processes, and use these parameters to predict behavior observed under new conditions. A comparison of the success of the empirical and theoretical goals is provided in the General Discussion. During one widely used temporal discrimination procedure, the peak procedure (Catania, 1970; Roberts, 1981), rats are presented with food following the first response after a fixed interval since the onset of a stimulus on some occasions; on others, no food is delivered and the stimulus remains on for a duration that is longer than the fixed interval. Rats may be trained on different intervals (e.g., Church, Meck, & Gibbon, 1994), or the same rats may be trained on multiple intervals that are signaled by different stimuli (e.g., Roberts, 1981; Yi, 2007). The standard results during the nonfood stimulus presentations are that response rate increases as a function of time, reaches a maximum at approximately the usual time of reinforcement, and then slowly decreases asymmetrically. When performance on multiple intervals in the nonfood cycles of the peak procedure is compared, it is often observed that (1) the time at which response rate is at its maximum (peak time) is linearly related to the fixed interval (proportionality result); (2) the spread of the response-rate gradient is also linearly related to the fixed interval (scalar-property result); (3) because both the peak time and the spread of the response-rate gradient are linearly related to the fixed interval, the coefficient of variation, defined as the spread (standard deviation) divided by the peak time (mean), is constant (Weber's law for timing result); and (4) ...
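As an editorial aside (symbols ours, not the paper's), results (1)-(3) above can be written compactly. If peak time and gradient spread are approximately proportional to the fixed interval,

    \[ t_{\text{peak}} \approx a \cdot \mathrm{FI}, \qquad \sigma \approx b \cdot \mathrm{FI}, \]

then the coefficient of variation

    \[ \mathrm{CV} = \frac{\sigma}{t_{\text{peak}}} \approx \frac{b}{a} \]

is roughly constant across intervals, which is the Weber's-law-for-timing result.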
Waiting in pigeons: the effects of daily intercalation on temporal discrimination
1992
Pigeons trained on cyclic-interval schedules adjust their postfood pause from interval to interval within each experimental session. But on regular fixed-interval schedules, many sessions at a given parameter value are usually necessary before the typical fixed-interval "scallop" appears. In the first case, temporal control appears to act from one interfood interval to the next; in the second, it appears to act over hundreds of interfood intervals. The present experiments look at the intermediate case: daily variation in schedule parameters. In Experiments 1 and 2 we show that pauses proportional to interfood interval develop on short-valued response-initiated-delay schedules when parameters are changed daily, that additional experience under this regimen leads to little further improvement, and that pauses usually change as soon as the schedule parameter is changed. Experiment 3 demonstrates identical waiting behavior on fixed-interval and response-initiated-delay schedules when the food delays are short (<20 s) and conditions are changed daily. In Experiment 4 we show that daily intercalation prevents temporal control when interfood intervals are longer (25 to 60 s). The results of Experiment 5 suggest that downshifts in interfood interval produce more rapid waiting-time adjustments than upshifts. These and other results suggest that the effects of short interfood intervals are more persistent than those of long intervals.

Key words: linear waiting, timing, fixed-interval schedules, response-initiated delay schedules, key peck, pigeons

One of the most reliable aspects of performance on any reinforcement schedule is the postreinforcement pausing observed when reinforcers are delivered at regular time intervals. Independent of any response-reinforcer contingency, birds and mammals (including humans, under some conditions) learn to postpone food-related responses after each food delivery for a time proportional to the typical interfood interval (temporal control: Chung & ...)
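As an illustration of the linear-waiting idea referred to above, the sketch below applies a one-back rule in which the postfood pause is a constant plus a fixed fraction of the immediately preceding interfood interval; the parameter values are hypothetical, not estimates from the paper.

    # Minimal illustration of one-back linear waiting: the postfood pause is a
    # constant plus a fixed fraction of the immediately preceding interfood
    # interval (IFI). Parameter values are hypothetical.

    def predicted_pause(previous_ifi, intercept=0.5, fraction=0.25):
        """Predicted postfood pause (s) given the preceding IFI (s)."""
        return intercept + fraction * previous_ifi

    # Under daily intercalation the schedule value changes from session to
    # session, but a one-back rule adjusts the pause as soon as the IFI changes.
    for ifi in (5, 10, 20, 10, 5):
        print(f"IFI {ifi:>2} s -> predicted pause {predicted_pause(ifi):.2f} s")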
Discrimination of variable schedules is controlled by interresponse times proximal to reinforcement
Journal of the Experimental Analysis of Behavior, 2012
In Experiment 1, food-deprived rats responded to one of two schedules that were, with equal probability, associated with a sample lever. One schedule was always variable ratio, while the other schedule, depending on the trial within a session, was: (a) a variable-interval schedule; (b) a tandem variable-interval, differential-reinforcement-of-low-rate schedule; or (c) a tandem variable-interval, differential-reinforcement-of-high-rate schedule. Completion of a sample-lever schedule, which took approximately the same time regardless of schedule, presented two comparison levers, one associated with each sample-lever schedule. Pressing the comparison lever associated with the schedule just presented produced food, while pressing the other produced a blackout. Conditional-discrimination accuracy was related to the size of the difference between the variable-ratio schedule and the other comparison schedules in reinforced interresponse times and in the interresponse times that preceded them (predecessor interresponse times)...
Rapid timing of a single transition in interfood interval duration by rats
Animal Learning & Behavior, 1997
The present experiment examined temporal control of wait-time responses by interfood interval (IFI) duration. We exposed rats to a sequence of intervals that changed in duration at an unpredictable point within a session. In Phase 1, intervals changed from 15 to 5 sec (step-down) or from 15 to 45 sec (step-up). In Phase 2, we increased the intervals by a factor of four. We observed rapid timing effects during a transition in both phases of the experiment: a step-down and a step-up transition significantly decreased and increased wait time in the next interval, respectively. Furthermore, adjustment of wait times during step-down was largely complete by the third transition IFI. In contrast, wait times gradually increased across several transition IFIs during step-up. The results reveal dynamic properties of temporal control that depend on the direction in which IFIs change.

Organization of behavior by the time between food presentations has been demonstrated in a variety of animals ranging from rats and pigeons (see, e.g., Richelle & Lejeune, 1980) to captive starlings (e.g., Brunner, Kacelnik, & Gibbon, 1992) to fish and turtles (Lejeune & Wearden, 1991). For example, animals given extended exposure to fixed-interval (FI) reinforcement schedules come under the control of the time between reinforcer deliveries (interfood interval, IFI). A hallmark of responding during FI schedules is a postreinforcement wait time that is proportional to the IFI duration (Lowe & Harzem, 1977; Shull, 1970; Zeiler & Powell, 1994). FI schedules and other timing procedures (e.g., the peak procedure; Catania, 1970; Roberts, 1981) are usually studied for the steady-state behavior they generate. Many quantitative properties have been discovered (e.g., scalar timing; Gibbon, 1977) that have been useful in testing and developing models of timing. Leading models in this area are scalar expectancy theory (SET; Church, 1984; Gibbon, 1977; Gibbon & Church, 1984) and the behavioral theory of timing (BeT; Killeen & Fetterman, 1988). Both are essentially molar models. SET's assumption about memory for time intervals, for example, is based on statistical distributions derived from molar features of a pacemaker system and reinforcement schedule (e.g., Gibbon, 1991, 1995; Gibbon & Church, 1984). BeT, too, is based on molar properties. According to BeT, adjunctive responses mediate time discrimination, and these responses are assumed to be associated with transitions ...
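As a rough sketch (not the authors' analysis), the snippet below generates step-down and step-up IFI sequences like those in Phase 1 and predicts wait times from a running estimate of the IFI updated by an exponential moving average. With a single learning rate the adjustment is symmetric, so this simple rule does not by itself reproduce the reported step-down/step-up asymmetry; all values are hypothetical.

    # Editorial sketch: wait time as a fixed fraction of a running IFI estimate,
    # updated by an exponential moving average after each interval.
    # All values are hypothetical.

    def wait_times(ifis, fraction=0.3, learning_rate=0.5):
        """Return predicted wait times (s) for a sequence of IFIs (s)."""
        estimate = float(ifis[0])
        waits = []
        for ifi in ifis:
            waits.append(fraction * estimate)
            estimate += learning_rate * (ifi - estimate)  # update after the interval
        return waits

    # Phase-1-like sequences: baseline 15-s IFIs, then an unsignaled transition.
    step_down = [15] * 5 + [5] * 5
    step_up = [15] * 5 + [45] * 5
    print([round(w, 1) for w in wait_times(step_down)])
    print([round(w, 1) for w in wait_times(step_up)])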
2004
This study assessed the effect of different signal conditions on response acquisition by naive rats. It also compared response acquisition on schedules that produce different response-rate requirements. Thirty-six naive rats were exposed to a repetitive 32-s time cycle. The cycle was divided into two portions, t and tΔ. A response during t produced reinforcement at the end of the cycle; responses that occurred during tΔ had no programmed consequences. For some experimental groups t was signaled by a response-produced signal; in other groups a non-contingent signal occurred during t; in still other groups t was unsignaled. The placement of t was varied to produce two different response-reinforcer temporal relations, and the duration of t was varied to produce two different response-rate requirements. Results showed that response rates decreased when a non-contingent signal occurred during t. Local response rates suggest that the low response rates observed under ...
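As a sketch of the cycle contingency described above (the t boundaries and example response times below are hypothetical): a response during the t portion of the 32-s cycle earns food at the end of the cycle, while responses during tΔ have no programmed consequence.

    # Editorial sketch of the cycle contingency: a response during the t portion
    # of the 32-s cycle earns food at the end of the cycle; responses during the
    # t-delta portion have no programmed consequence. Boundaries are hypothetical.

    def food_at_cycle_end(response_times, t_start, t_end):
        """True if any response fell within the t portion of this cycle.

        response_times : lever-press times (s from cycle onset).
        t_start, t_end : boundaries of the t portion within the 32-s cycle.
        """
        return any(t_start <= rt < t_end for rt in response_times)

    # Example: t occupies the first 4 s of the cycle.
    print(food_at_cycle_end([2.5, 20.0], t_start=0.0, t_end=4.0))  # True: press at 2.5 s
    print(food_at_cycle_end([20.0], t_start=0.0, t_end=4.0))       # False: no press during t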
Single-Sample Discrimination of Different Schedules' Reinforced Interresponse Times
Journal of the Experimental Analysis of Behavior, 2009
Food-deprived rats in Experiment 1 responded to one of two tandem schedules that were, with equal probability, associated with a sample lever. The tandem schedules' initial links were different random-interval schedules. Their values were adjusted so that the time to complete each tandem schedule's response requirements was approximately equal. The tandem schedules differed in their terminal links: one reinforced short interresponse times; the other reinforced long ones. Tandem-schedule completion presented two comparison levers, one associated with each tandem schedule. Pressing the lever associated with the sample-lever tandem schedule produced a food pellet. Pressing the other produced a blackout. The difference between terminal-link reinforced interresponse times varied across 10-trial blocks within a session. Conditional-discrimination accuracy increased with the size of the temporal difference between terminal-link reinforced interresponse times. In Experiment 2, one tandem schedule was replaced by a random-ratio schedule, while the comparison schedule was either a tandem schedule that reinforced only long interresponse times or a random-interval schedule. Again, conditional-discrimination accuracy increased with the temporal difference between the two schedules' reinforced interresponse times. Most rats mastered the discrimination between random ratio and random interval, showing that the interresponse times these schedules reinforce can serve as a basis for discriminating between them.
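As a sketch of the tandem arrangement described above (the RI value, IRT criterion, and response model are hypothetical, not the paper's), the snippet below simulates an initial random-interval link followed by a terminal link that reinforces only long, or only short, interresponse times.

    # Editorial sketch of a tandem schedule: an initial random-interval (RI) link,
    # then a terminal link that reinforces only long (or only short) interresponse
    # times (IRTs). RI value, IRT criterion, and response model are hypothetical.
    import random

    def run_tandem(mean_ri=20.0, irt_criterion=1.5, reinforce_long_irts=True,
                   mean_irt=1.0, seed=0):
        """Simulate one trial; return the time (s) at which food is earned."""
        rng = random.Random(seed)
        ri_requirement = rng.expovariate(1.0 / mean_ri)  # RI initial link
        t = 0.0
        initial_link_done = False
        while True:
            irt = rng.expovariate(1.0 / mean_irt)  # time since the previous response
            t += irt
            if not initial_link_done:
                if t >= ri_requirement:
                    initial_link_done = True  # this response completes the RI link
                continue
            # Terminal link: reinforce the first response whose IRT meets the criterion.
            if (irt >= irt_criterion) == reinforce_long_irts:
                return t

    print(run_tandem(reinforce_long_irts=True))   # "long-IRT" tandem schedule
    print(run_tandem(reinforce_long_irts=False))  # "short-IRT" tandem schedule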
Journal of the Experimental Analysis of Behavior, 1967
Pigeons were exposed to a cyclic schedule in which each cycle was composed of twelve 1-min fixed intervals followed by four 3-min fixed intervals; four such cycles comprised an experimental session. The pigeons responded at a much higher average rate during the 3-min intervals than during the 1-min intervals. Other effects were a depression of responding during the first short interval of each cycle and a shortening of the postreinforcement pause during the second short interval. The main effect is attributable to a relatively fixed pattern of responding after reinforcement; this pattern consisted of a pause of approximately constant duration followed by responding at an approximately constant rate until the next reinforcement, resulting in much higher average response rates during the longer interreinforcement intervals. The other effects seem attributable to relatively slight differences between the pattern of responding characteristic of later long intervals and the pattern during later short intervals of each cycle. A major implication is that the pigeon is largely insensitive to the sequential properties of many interval-reinforcement schedules. A description of interval-schedule "frustration" phenomena in terms of the inhibitory effects of reinforcement is discussed in relation to these results.
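As an editorial gloss on the main effect (symbols ours, not the paper's): if each interval of length T begins with a pause of roughly constant duration c and is then filled with responding at a roughly constant rate r, the average response rate over the interval is

    \[ \bar{R}(T) = \frac{r\,(T - c)}{T} = r\left(1 - \frac{c}{T}\right), \]

which increases with T, so the same fixed postreinforcement pattern yields a much higher average rate in the 3-min intervals than in the 1-min intervals.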
Behavioural Processes, 2000
We attempted to demonstrate that timing performance on a temporal discrimination would be better when rats were required to fill a duration with behavior than when they were not required to respond. Six rats were trained to discriminate between a 3- and a 9-s stimulus in a symbolic matching-to-sample task. In two conditions, a tone was used to signal the sample, and in the other two conditions, a light was used to signal the sample. In two conditions, the rats were required to respond on a lever mounted on the rear wall of the experimental chamber before making their discriminative response on one of the two levers mounted on the front wall of the chamber. In the other two conditions, the rear lever was not presented during sample presentation, and no response was required. Consistent with our predictions, timing performance was significantly better when a lever response was required during sample presentation than when no response was required.