Fix and Sample with Rats in the Dynamics of Choice
Related papers
Choice and number of reinforcers
Journal of the Experimental Analysis of Behavior, 1979
Pigeons were exposed to the concurrent-chains procedure in two experiments designed to investigate the effects of unequal numbers of reinforcers on choice. In Experiment 1, the pigeons were indifferent between long and short durations of access to variable-interval schedules of equal reinforcement density, but preferred a short high-density terminal link over a longer, lower density terminal link, even though in both sets of comparisons there were many more reinforcers per cycle in the longer terminal link. In Experiment 2, the pigeons preferred five reinforcers, the first of which was available after 30 sec, over a single reinforcer available at 30 sec, but only when the local interval between successive reinforcers was short. The pigeons were indifferent when this local interval was sufficiently long. The pigeons' behavior appeared to be under the control of local terminal-link variables, such as the intervals to the first reinforcer and between successive reinforcers, and was not well described in terms of transformed delays of reinforcement or reductions in average delay to reinforcement.
Short-term and long-term effects of reinforcers on choice
Journal of the Experimental Analysis of Behavior, 1993
The relation between molar and molecular aspects of time allocation was studied in pigeons on concurrent variable-time variable-time schedules of reinforcement. Fifteen-minute reinforcer-free periods were inserted in the middle of every third session. Generalized molar matching of time ratios to reinforcer ratios was observed during concurrent reinforcement. Contrary to melioration theory, preference was unchanged during the reinforcer-free periods as well as in extinction. In addition to this long-term effect of reinforcement, short-term effects were observed: Reinforcers increased the duration of the stays during which they were delivered but had little consistent effect either on the immediately following stay in the same schedule or on the immediately following stay in the alternative schedule. Thus, an orderly effect of reinforcer delivery on molecular aspects of time allocation was observed, but because of its short-term nature, this effect cannot account for the matching observed at the molar level.
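The generalized molar matching described above has a standard quantitative form (Baum, 1974); as a sketch, using the conventional symbols rather than any notation from this abstract, with T1 and T2 the times allocated to the two schedules and r1 and r2 the obtained reinforcer rates:

```latex
% Generalized matching law: time (or response) ratios track
% reinforcer ratios with sensitivity a and bias c.
\log\frac{T_1}{T_2} = a\,\log\frac{r_1}{r_2} + \log c
% Strict matching is the special case a = 1, c = 1:
%   T_1 / T_2 = r_1 / r_2
```

Sensitivity a below 1 (undermatching) is the typical empirical outcome; the bias term c absorbs constant side or key preferences of the kind reported for individual pigeons in several of these studies.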
Choice and the Initial Delay to a Reinforcer
The Psychological Record, 2008
Pigeons were trained in two experiments that used the concurrent-chains procedure. These experiments sought to identify the variables controlling the preference of pigeons for a constant duration over a variable duration of exposure to an aperiodic, time-based, terminal-link schedule. The results indicated that two variables correlated with the constant-duration terminal link combined to control preference: (a) a shorter initial delay to a reinforcer; and (b) the probabilistic occurrence of multiple reinforcers.

Grace and Nevin (2000) trained pigeons on a concurrent-chains procedure with equal variable-interval (VI) schedules in the initial links and equal VI schedules in the terminal links. The terminal links differed in that one ended after a single reinforcer, which they called the "variable-duration" terminal link, whereas the other ended after a fixed period of exposure equal to the average interreinforcement interval (IRI) of the schedule, which they called the "constant-duration" terminal link. As Grace and Nevin identified, and as discussed at some length below, an important feature of the constant-duration terminal link is that it probabilistically yielded 0, 1, or multiple reinforcers per entry, although it provided the same average rate of reinforcement overall as the variable-duration terminal link. Grace and Nevin (2000) found that three of four pigeons clearly preferred the constant-duration terminal link. In their words, the data of the fourth pigeon "demonstrated a consistent right-key bias" (p. 178), and the present conclusion is that its data are more difficult to interpret. In any case, an important question is what variables caused the preference. Ordinarily, one would have expected the pigeons to be indifferent, since the schedules in effect during the alternatives were identical and each alternative yielded the same overall rate of reinforcement.
Grace and Nevin (2000) initially pondered the role of multiple reinforcers in the constant-duration terminal link, because research has shown that subjects may well prefer a choice alternative associated with multiple reinforcers rather than a single reinforcer per terminal-link entry (e.g.,
Choice and multiple reinforcers
Journal of the Experimental Analysis of Behavior, 1982
Pigeons chose between equivalent two-component mixed and multiple terminal-link schedules of reinforcement in the concurrent-chains procedure. The pigeons preferred the multiple schedule over the mixed when the components of the compound schedules were differentiated in terms of density of reinforcement, but the pigeons were indifferent when the components were differentiated in terms of number of reinforcers per cycle. Taken together, these results indicate that a local variable, the interval to the first reinforcer, but not a molar variable, the number of reinforcers, was sufficient to differentiate the components and thereby evoke preference.
Every reinforcer counts: reinforcer magnitude and local preference
Journal of the Experimental Analysis of Behavior, 2003
Six pigeons were trained on concurrent variable-interval schedules. Sessions consisted of seven components, each lasting 10 reinforcers, with the conditions of reinforcement differing between components. The component sequence was randomly selected without replacement. In Experiment 1, the concurrent-schedule reinforcer ratios in components were all equal to 1.0, but across components reinforcer-magnitude ratios varied from 1:7 through 7:1. Three different overall reinforcer rates were arranged across conditions. In Experiment 2, the reinforcer-rate ratios varied across components from 27:1 to 1:27, and the reinforcer-magnitude ratios for each alternative were changed across conditions from 1:7 to 7:1. The results of Experiment 1 replicated the results for changing reinforcer-rate ratios across components reported by Baum (2000, 2002): Sensitivity to reinforcer-magnitude ratios increased with increasing numbers of reinforcers in components. Sensitivity to magnitude ratio, however, fell short of sensitivity to reinforcer-rate ratio. The degree of carryover from component to component depended on the reinforcer rate. Larger reinforcers produced larger and longer postreinforcer preference pulses than did smaller reinforcers. Similar results were found in Experiment 2, except that sensitivity to reinforcer magnitude was considerably higher and was greater for magnitudes that differed more from one another. Visit durations following reinforcers measured either as number of responses emitted or time spent responding before a changeover were longer following larger than following smaller reinforcers, and were longer following sequences of same reinforcers than following other sequences. The results add to the growing body of research that informs model building at local levels.
On the joint control of preference by time and reinforcer-ratio variation
Behavioural Processes, 2013
Five pigeons were trained in a procedure in which, with a specified probability, food was either available on a fixed-interval schedule on the left key, or on a variable-interval schedule on the right key. In Phase 1, we arranged, with a probability of 0.5, either a left-key fixed-interval schedule or a right-key variable-interval 30-s schedule, and varied the value of the fixed-interval schedule from 5 s to 50 s across 5 conditions. In Phase 2, we arranged either a left-key fixed-interval 20-s schedule or a right-key variable-interval 30-s schedule, and varied the probability of the fixed-interval schedule from 0.05 to 1.0 across 8 conditions. Phase 3 always arranged a fixed-interval schedule on the left key, and its value was varied over the same range as in Phase 1. In Phase 1, overall preference was generally toward the variable-interval schedule, preference following reinforcers was initially toward the variable-interval schedule, and maximum preference for the fixed-interval schedule generally occurred close to the arranged fixed-interval time, becoming relatively constant thereafter. In Phase 2, overall left-key preference followed the probability of the fixed-interval schedule, and maximum fixed-interval choice again occurred close to the fixed-interval time, except when the fixed-interval probability was 0.1 or less. The pattern of choice following reinforcers was similar to that in Phase 1, but the fixed-interval choice function became more sharply peaked with higher probabilities of the fixed interval. Phase 3 produced typical fixed-interval schedule responding. The results are discussed in terms of reinforcement effects, timing in the context of alternative reinforcers, and generalized matching.
These results can be described by a quantitative model in which reinforcer rates obtained at times since the last reinforcer are distributed across time according to a Gaussian distribution with constant coefficient of variation before the fixed-interval schedule time, changing to extended choice controlled by extended reinforcer ratios beyond the fixed-interval time. The same model provides a good description of response rates on single fixed-interval schedules.
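One plausible reading of the model described above, written out under assumed notation (the abstract gives the model's ingredients but not its equations, so every symbol here is an assumption): reinforcers obtained at times since the last reinforcer are smeared across time by a Gaussian kernel whose standard deviation grows in proportion to its mean, i.e., a constant coefficient of variation, consistent with scalar timing; beyond the fixed-interval time, choice is assumed to follow extended reinforcer ratios instead.

```latex
% Hypothetical notation: r(\tau) is the reinforcer rate obtained at time
% \tau since the last reinforcer; \tilde{r}(t) is the effective
% (redistributed) rate; \gamma is the constant coefficient of variation;
% T is the fixed-interval schedule time.
\tilde{r}(t) \;=\; \int_{0}^{T} r(\tau)\,
  \frac{1}{\sqrt{2\pi}\,\gamma\tau}
  \exp\!\left[-\frac{(t-\tau)^{2}}{2(\gamma\tau)^{2}}\right]\,d\tau,
\qquad t < T .
% For t beyond T, local choice is taken to be governed by the extended
% reinforcer ratio between the alternatives (generalized matching),
% rather than by this timing kernel.
```

This is a sketch of the general scalar-timing construction the abstract names, not the authors' fitted equations; the paper itself should be consulted for the exact functional form and parameter values.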
Molar versus local reinforcement probability as determinants of stimulus value
Journal of the Experimental Analysis of Behavior, 1993
During one component of a multiple schedule, pigeons were trained on a discrete-trial concurrent variable-interval variable-interval schedule in which one alternative had a high scheduled rate of reinforcement and the other a low scheduled rate of reinforcement. When the choice proportion between the alternatives matched their respective relative reinforcement frequencies, the obtained probabilities of reinforcement (reinforcer per peck) were approximately equal. In alternate components of the multiple schedule, a single response alternative was presented with an intermediate scheduled rate of reinforcement. During probe trials, each alternative of the concurrent schedule was paired with the constant alternative. The stimulus correlated with the high reinforcement rate was preferred over that with the intermediate rate, whereas the stimulus correlated with the intermediate rate of reinforcement was preferred over that correlated with the low rate of reinforcement. Preference on probe tests was thus determined by the scheduled rate of reinforcement. Other subjects were presented all three alternatives individually, but with a distribution of trial frequency and reinforcement probability similar to that produced by the choice patterns of the original subjects. Here, preferences on probe tests were determined by the obtained probabilities of reinforcement. Comparison of the two sets of results indicates that the availability of a choice alternative, even when not responded to, affects the preference for that alternative. The results imply that models of choice that invoke only obtained probability of reinforcement as the controlling variable (e.g., melioration) are inadequate.