Concurrent Schedules: Short- and Long-Term Effects of Reinforcers
Related papers
Preference for fixed-interval schedules of reinforcement
1970
Pigeons were trained on a two-link concurrent-chain schedule in which responding on either of two keys in the initial link occasionally produced a terminal link, signaled by a change in the color of that key and a darkening of the other. Further responding on the lighted key was reinforced with food according to a fixed-interval schedule. For one of the keys, this fixed interval was always 20 sec, while for the other it was held at values of 5, 14, 30, or 60 sec for several weeks.
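As a rough illustration of the choice relation at issue (an account assumed here, not quoted from the abstract), if initial-link preference simply matched the relative immediacy of terminal-link reinforcement, then with FI 5 s on one key and FI 20 s on the other the predicted allocation to the FI 5-s key would be

$$\frac{B_{5}}{B_{5}+B_{20}} = \frac{1/5}{1/5 + 1/20} = 0.80,$$

that is, about 80% of initial-link responses on the key leading to the shorter fixed interval. Holding one terminal link at 20 s while varying the other across 5, 14, 30, and 60 s lets obtained preference be compared against such a function over a range of relative immediacies.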
Rapid acquisition of preference in concurrent schedules: Effects of reinforcement amount
Behavioural Processes, 2007
Pigeons responded in a concurrent-chains procedure in which terminal-link reinforcer variables were changed unpredictably across sessions. In Experiment 1, the terminal-link schedules were fixed-interval (FI) 8 s and FI 16 s, and the reinforcer magnitudes were 2 s and 4 s. In Experiment 2, the probability of reinforcement (100% or 50%) was varied together with immediacy and magnitude. Multiple-regression analyses showed that pigeons' initial-link response allocation was determined by current-session reinforcer variables, similar to previous studies that varied only immediacy. Sensitivity coefficients were positive and statistically significant for all reinforcer variables in both experiments. Analyses of responding within individual sessions showed that final levels of preference in dominated sessions, in which all reinforcer variables favored the same terminal link, were more extreme than in tradeoff sessions, in which at least one reinforcer variable favored each alternative. This result implies that response allocation was determined by multiple reinforcer variables within individual sessions, consistent with the concatenated matching law. However, in Experiment 2 there was a nonlinear (sigmoidal) relationship between response allocation and relative value, which suggests that reinforcer variables may interact during acquisition, contrary to the matching law.
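For reference, the concatenated generalized matching law invoked above is commonly written as (generic notation, not the paper's own):

$$\log\frac{B_{1}}{B_{2}} = a_{I}\log\frac{I_{1}}{I_{2}} + a_{M}\log\frac{M_{1}}{M_{2}} + a_{P}\log\frac{P_{1}}{P_{2}} + \log b,$$

where B1 and B2 are initial-link responses, I, M, and P are terminal-link immediacy, magnitude, and probability of reinforcement, the a terms are the sensitivity coefficients estimated by the multiple regressions, and b is response bias.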
The Response-Reinforcement Dependency in Fixed-Interval Schedules of Reinforcement
Journal of the Experimental Analysis of Behavior, 1970
Pigeons were exposed to four different schedules of food reinforcement that arranged a fixed minimum time interval between reinforcements (60 sec or 300 sec). The first was a standard fixed‐interval schedule. The second was a schedule in which food was presented automatically at the end of the fixed time interval as long as a response had occurred earlier. The third and fourth schedules were identical to the first two except that the first response after reinforcement changed the color on the key. When the schedule required a peck after the interval elapsed, the response pattern consisted of a pause after reinforcement followed by responding at a high rate until reinforcement. When a response was not required after the termination of the interval, the pattern consisted of a pause after reinforcement, followed by responses and then by a subsequent pause until reinforcement. Having the first response after reinforcement change the color on the key had little effect on performance. Pos...
Short-term and long-term effects of reinforcers on choice
Journal of the Experimental Analysis of Behavior, 1993
The relation between molar and molecular aspects of time allocation was studied in pigeons on concurrent variable-time variable-time schedules of reinforcement. Fifteen-minute reinforcer-free periods were inserted in the middle of every third session. Generalized molar matching of time ratios to reinforcer ratios was observed during concurrent reinforcement. Contrary to melioration theory, preference was unchanged during the reinforcer-free periods as well as in extinction. In addition to this long-term effect of reinforcement, short-term effects were observed: Reinforcers increased the duration of the stays during which they were delivered but had little consistent effect either on the immediately following stay in the same schedule or on the immediately following stay in the alternative schedule. Thus, an orderly effect of reinforcer delivery on molecular aspects of time allocation was observed, but because of its short-term nature, this effect cannot account for the matching observed at the molar level.
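The molar relation reported here is usually expressed in generalized-matching form (notation assumed, not quoted from the paper):

$$\log\frac{T_{1}}{T_{2}} = a\log\frac{R_{1}}{R_{2}} + \log b,$$

where T1/T2 is the ratio of times allocated to the two schedules, R1/R2 the obtained reinforcer ratio, a the sensitivity of time allocation to reinforcement, and b bias. Melioration, by contrast, predicts that preference should shift once reinforcers are withheld, which the reinforcer-free probes and extinction tests did not show.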
Choice in a variable environment: every reinforcer counts
Journal of The Experimental Analysis of Behavior, 2000
Six pigeons were trained in sessions composed of seven components, each arranged with a different concurrent-schedule reinforcer ratio. These components occurred in an irregular order with equal frequency, separated by 10-s blackouts. No signals differentiated the different reinforcer ratios. Conditions lasted 50 sessions, and data were collected from the last 35 sessions. In Part 1, the arranged overall reinforcer
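A minimal sketch of how sensitivity to the component reinforcer ratios might be estimated from such data, assuming seven log-spaced ratios and hypothetical response ratios (none of the numbers below come from the paper):

```python
import numpy as np

# Hypothetical component reinforcer ratios (left:right) and the log response
# ratios a bird might produce in each component -- illustrative values only.
reinforcer_ratios = np.array([27, 9, 3, 1, 1/3, 1/9, 1/27])
log_r = np.log10(reinforcer_ratios)
log_b = 0.8 * log_r + 0.05        # assumed sensitivity of about 0.8, small bias

# Fit the generalized matching law, log(B1/B2) = a*log(R1/R2) + log b,
# by ordinary least squares; the slope is the sensitivity estimate.
a, log_bias = np.polyfit(log_r, log_b, 1)
print(f"estimated sensitivity a = {a:.2f}, log bias = {log_bias:.2f}")
```

The same fit can be run on successive reinforcer-by-reinforcer subsets within components to track how quickly sensitivity develops after each unsignaled change in the reinforcer ratio.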
Choice and the Initial Delay to a Reinforcer
The Psychological Record, 2008
Pigeons were trained in two experiments that used the concurrent-chains procedure. These experiments sought to identify the variables controlling the preference of pigeons for a constant duration over a variable duration of exposure to an aperiodic, time-based, terminal-link schedule. The results indicated that two variables correlated with the constant-duration terminal link combined to control preference: (a) a shorter initial delay to a reinforcer; and (b) the probabilistic occurrence of multiple reinforcers. Grace and Nevin (2000) trained pigeons on a concurrent-chains procedure with equal variable-interval (VI) schedules in the initial links and equal VI schedules in the terminal links. The terminal links differed in that one ended after a single reinforcer, which they called the "variable-duration" terminal link, whereas the other ended after a fixed period of exposure equal to the average interreinforcement interval (IRI) of the schedule, which they called the "constant-duration" terminal link. As Grace and Nevin identified, and as discussed at some length below, an important feature of the constant-duration terminal link is that it probabilistically yielded 0, 1, or multiple reinforcers per entry, although it provided the same average rate of reinforcement overall as the variable-duration terminal link. Grace and Nevin (2000) found that three of four pigeons clearly preferred the constant-duration terminal link. In their words, the data of a fourth pigeon "demonstrated a consistent right-key bias" (p. 178), and the present conclusion is that its data are more difficult to interpret. In any case, an important question is what variables caused the preference. Ordinarily, one would have expected the pigeons to be indifferent, since the schedules in effect during the alternatives were identical, and each alternative yielded the same overall rate of reinforcement. Grace and Nevin (2000) initially pondered the role of multiple reinforcers in the constant-duration terminal link, because research has shown that subjects may well prefer a choice alternative associated with multiple reinforcers rather than a single reinforcer per terminal-link entry (e.g.,
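One way to see why the constant-duration terminal link "probabilistically yielded 0, 1, or multiple reinforcers per entry": if the aperiodic, time-based schedule is approximated as a Poisson process and the fixed exposure equals the mean interreinforcement interval, the number of reinforcers per entry is approximately Poisson with mean 1 (an approximation for illustration, not a calculation from the paper):

$$P(0) = e^{-1} \approx 0.37, \qquad P(1) = e^{-1} \approx 0.37, \qquad P(\geq 2) = 1 - 2e^{-1} \approx 0.26.$$

So roughly a quarter of constant-duration entries would deliver more than one reinforcer, even though the mean per entry equals the single reinforcer guaranteed by the variable-duration alternative.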
Journal of the Experimental Analysis of Behavior, 2013
Six pigeons worked on concurrent exponential variable-interval schedules in which the relative frequency of food deliveries for responding on the two alternatives reversed at a fixed time after each food delivery. Across conditions, the point of food-ratio reversal was varied from 10 s to 30 s, and the overall reinforcer rate was varied from 1.33 to 4 per minute. The effect of rate of food delivery and food-ratio-reversal time on choice and response rates was small. In all conditions, postfood choice was toward the locally richer key, regardless of the last-food location. Unlike the local food ratio which changed in a stepwise fashion, local choice changed according to a decelerating monotonic function, becoming substantially less extreme than the local food ratio soon after food delivery. This deviation in choice appeared to result from the birds' inaccurate discrimination of the time of food deliveries; local choice was described well by a model that assumed that log response ratios matched food ratios that were redistributed across surrounding time bins with mean time t and a constant coefficient of variation. We suggest that local choice is controlled by the likely availability of food in time, and that choice matches the discriminated log of the ratio of food rates across time since the last food delivery.
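A minimal sketch of the kind of redistribution model described here, assuming Gaussian smearing of local food counts with a standard deviation proportional to elapsed time (constant coefficient of variation); the bin width, the 0.5 coefficient of variation, and the food counts are illustrative assumptions, not the paper's fitted values:

```python
import numpy as np

def redistribute(food_counts, bin_centers, cv=0.5):
    """Spread the food obtained in each time bin across surrounding bins with
    a Gaussian centered on that bin's time and SD = cv * time (constant
    coefficient of variation), mimicking imprecise timing of food deliveries."""
    smeared = np.zeros_like(food_counts, dtype=float)
    for t, count in zip(bin_centers, food_counts):
        sd = max(cv * t, 1e-6)
        weights = np.exp(-0.5 * ((bin_centers - t) / sd) ** 2)
        smeared += count * weights / weights.sum()
    return smeared

# Hypothetical local food counts per 2-s bin on the two keys, with the
# arranged food ratio reversing 15 s after food (illustrative numbers only).
bins = np.arange(1.0, 31.0, 2.0)                 # bin centers, s since food
left = np.where(bins < 15, 4.0, 1.0)
right = np.where(bins < 15, 1.0, 4.0)

pred_log_ratio = np.log10(redistribute(left, bins) / redistribute(right, bins))
print(np.round(pred_log_ratio, 2))               # smooth transition, not a step
```

The arranged stepwise food ratio comes out of the redistribution as a smooth, decelerating change in the predicted log response ratio, qualitatively like the local choice functions the abstract describes.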