Relative Reinforcer Rates and Magnitudes Do Not Control Concurrent Choice Independently

On the joint control of preference by time and reinforcer-ratio variation

Behavioural Processes, 2013

Five pigeons were trained in a procedure in which, with a specified probability, food was either available on a fixed-interval schedule on the left key, or on a variable-interval schedule on the right key. In Phase 1, we arranged, with a probability of 0.5, either a left-key fixed-interval schedule or a right-key variable-interval 30-s schedule, and varied the value of the fixed-interval schedule from 5 s to 50 s across 5 conditions. In Phase 2, we arranged either a left-key fixed-interval 20-s schedule or a right-key variable-interval 30-s schedule, and varied the probability of the fixed-interval schedule from 0.05 to 1.0 across 8 conditions. Phase 3 always arranged a fixed-interval schedule on the left key, and its value was varied over the same range as in Phase 1. In Phase 1, overall preference was generally toward the variable-interval schedule, preference following reinforcers was initially toward the variable-interval schedule, and maximum preference for the fixed-interval schedule generally occurred close to the arranged fixed-interval time, becoming relatively constant thereafter. In Phase 2, overall left-key preference followed the probability of the fixed-interval schedule, and maximum fixed-interval choice again occurred close to the fixed-interval time, except when the fixed-interval probability was 0.1 or less. The pattern of choice following reinforcers was similar to that in Phase 1, but the peak in fixed-interval choice became sharper at higher fixed-interval probabilities. Phase 3 produced typical fixed-interval schedule responding. The results are discussed in terms of reinforcement effects, timing in the context of alternative reinforcers, and generalized matching. These results can be described by a quantitative model in which reinforcer rates obtained at times since the last reinforcer are distributed across time according to a Gaussian distribution with a constant coefficient of variation before the fixed-interval schedule time, changing to extended choice controlled by extended reinforcer ratios beyond the fixed-interval time. The same model provides a good description of response rates on single fixed-interval schedules.
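A rough illustration of the timing component of that model: writing T for the arranged fixed-interval time and gamma for the constant coefficient of variation (these symbols are for exposition only, not the authors' notation), reinforcer rates obtained at time t since the last reinforcer can be weighted by a Gaussian centered on T,

w(t) = \exp\left[ -\frac{(t - T)^2}{2 (\gamma T)^2} \right],

so that the spread of the weighting grows in proportion to T. This is the scalar-timing property implied by a constant coefficient of variation; beyond T, choice would instead track the extended reinforcer ratios, as the abstract describes.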

Effect of Relative Reinforcement Duration in Concurrent Schedules with Different Reinforcement Densities: A Replication of Davison (1988)

2018

Previous studies have challenged the prediction of the Generalized Matching Law that relative, but not absolute, values of reinforcement parameters affect relative choice measures. Six pigeons were run in an experiment involving concurrent variable-interval schedules with unequal reinforcer durations associated with the response alternatives (10 s versus 3 s), a systematic replication of Davison (1988). Programmed reinforcement frequency was kept equal for the competing responses while its absolute value was varied. Measures of both response ratios and time ratios showed preference for the larger-duration alternative, and that preference did not change systematically with changes in absolute reinforcer frequency. The present results support the relativity assumption of the Matching Law. It is suggested that Davison's results were due to uncontrolled variations in obtained reinforcement frequency. Keywords: choice, preference, overall reinforcer frequency, reinforcer magnitude...
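For reference, the relativity assumption being tested can be written in generalized-matching form. With B_1 and B_2 the response (or time) allocations and M_1 and M_2 the reinforcer durations (conventional notation, used here for illustration),

\log(B_1/B_2) = a \log(M_1/M_2) + \log c.

On this account preference depends only on the ratio M_1/M_2 (here 10 s : 3 s), so varying absolute reinforcer frequency while holding that ratio constant should leave choice unchanged, which is the pattern reported above.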

Multiple determinants of the effects of reinforcement magnitude on free-operant response rates

Journal of the Experimental Analysis of Behavior, 1991

Four experiments examined the effects of increasing the number of food pellets given to hungry rats for a lever-press response. On a simple variable-interval 60-s schedule, an increased number of pellets depressed response rates (Experiment 1). In Experiment 2, the decrease in response rate as a function of increased reinforcement magnitude was demonstrated on a variable-interval 30-s schedule, but enhanced response rates were obtained with the same increase in reinforcement magnitude on a variable-ratio 30 schedule. In Experiment 3, higher rates of responding were maintained by the component of a concurrent variable-interval 60-s variable-interval 60-s schedule associated with the higher reinforcement magnitude. In Experiment 4, higher response rates were produced in the component of a multiple variable-interval 60-s variable-interval 60-s schedule associated with the higher reinforcement magnitude. It is suggested that on simple schedules greater reinforcement magnitudes shape the reinforced pattern of responding more effectively than do smaller reinforcement magnitudes. This effect is, however, overridden by another process, such as contrast, when two magnitudes are presented within a single session on two-component schedules.

Choice and number of reinforcers

Journal of the Experimental Analysis of Behavior, 1979

Pigeons were exposed to the concurrent-chains procedure in two experiments designed to investigate the effects of unequal numbers of reinforcers on choice. In Experiment 1, the pigeons were indifferent between long and short durations of access to variable-interval schedules of equal reinforcement density, but preferred a short high-density terminal link over a longer, lower density terminal link, even though in both sets of comparisons there were many more reinforcers per cycle in the longer terminal link. In Experiment 2, the pigeons preferred five reinforcers, the first of which was available after 30 sec, over a single reinforcer available at 30 sec, but only when the local interval between successive reinforcers was short. The pigeons were indifferent when this local interval was sufficiently long. The pigeons' behavior appeared to be under the control of local terminal-link variables, such as the intervals to the first reinforcer and between successive reinforcers, and was not well described in terms of transformed delays of reinforcement or reductions in average delay to reinforcement.
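The "reduction in average delay to reinforcement" account mentioned above is usually formalized as delay-reduction theory. In its standard form (stated here as background, not as the authors' analysis), with T the average overall time to reinforcement and t_L and t_R the terminal-link delays,

B_L / (B_L + B_R) = (T - t_L) / [(T - t_L) + (T - t_R)].

The abstract's point is that molar transformations of this kind described the data less well than local terminal-link variables such as the interval to the first reinforcer and the spacing between successive reinforcers.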

Short-term and long-term effects of reinforcers on choice

Journal of the Experimental Analysis of Behavior, 1993

The relation between molar and molecular aspects of time allocation was studied in pigeons on concurrent variable-time variable-time schedules of reinforcement. Fifteen-minute reinforcer-free periods were inserted in the middle of every third session. Generalized molar matching of time ratios to reinforcer ratios was observed during concurrent reinforcement. Contrary to melioration theory, preference was unchanged during the reinforcer-free periods as well as in extinction. In addition to this long-term effect of reinforcement, short-term effects were observed: Reinforcers increased the duration of the stays during which they were delivered but had little consistent effect either on the immediately following stay in the same schedule or on the immediately following stay in the alternative schedule. Thus, an orderly effect of reinforcer delivery on molecular aspects of time allocation was observed, but because of its short-term nature, this effect cannot account for the matching observed at the molar level.

Testing the linearity and independence assumptions of the generalized matching law for reinforcer magnitude: A residual meta-analysis

Behavioural Processes, 2011

We conducted a residual meta-analysis (Sutton, Grace, McLean & Baum, 2008) to test the assumptions of the generalized matching law that the effects of relative reinforcer magnitude on response allocation in concurrent schedules can be described as a power function of the magnitude ratio and are independent of the effects of relative reinforcer rate. We identified five studies that varied relative reinforcer magnitude over at least four levels and six studies in which relative reinforcer magnitude and rate were varied factorially. The generalized matching law provided a reasonably good description of the data, accounting for 77.1% and 90.1% of the variance in the two sets of studies. Results of polynomial regressions showed that there were no systematic patterns in pooled residuals as a function of predicted response ratios for data sets in which relative magnitude was varied. For data sets in which relative rate and magnitude were varied factorially, there was a significant negative cubic pattern in the pooled residuals, suggesting that obtained response allocation was less extreme than predicted for conditions with extreme predicted values. However, subsequent analyses showed that this result was associated with conditions in one study (Elliffe, Davison & Landon, 2008) in which the product of the rate and magnitude ratios was 63:1 and in which response allocation may not have fully stabilized. When data from these conditions were omitted, there were no significant components in the residuals. Although the number of available studies was limited, the results support the assumptions of the generalized matching law that the effects of relative reinforcer magnitude on choice can be described by a power function and are independent of relative reinforcer rate.
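The two assumptions under test can be made explicit in the concatenated generalized matching law, in which reinforcer-rate and reinforcer-magnitude ratios contribute separate log terms (conventional notation, shown for illustration):

\log(B_1/B_2) = a_r \log(R_1/R_2) + a_m \log(M_1/M_2) + \log c.

Linearity means each term is a power function (a straight line in log-log coordinates), and independence means that a_r and a_m do not change when the other ratio is varied; systematic structure in the residuals would signal a violation of one or both assumptions.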

Tests of Behavioral-Economic Assessments of Relative Reinforcer Efficacy II: Economic Complements

Journal of the Experimental Analysis of Behavior, 2007

This experiment was conducted to test the predictions of two behavioral-economic approaches to quantifying relative reinforcer efficacy. The normalized demand analysis suggests that characteristics of averaged normalized demand curves may be used to predict progressive-ratio breakpoints and peak responding. By contrast, the demand analysis holds that traditional measures of relative reinforcer efficacy (breakpoint, peak response rate, and choice) correspond to specific characteristics of non-normalized demand curves. The accuracy of these predictions was evaluated in rats' responding for food or water, two reinforcers known to function as complements. Consistent with the first approach, predicted peak normalized response output values obtained under single-schedule conditions ordinally predicted progressive-ratio breakpoints and peak response rates obtained in a separate condition. Combining the minimum-needs hypothesis with the normalized demand analysis helped to interpret prior findings, but was less useful in predicting choice between food and water, two strongly complementary reinforcers. Predictions of the demand analysis had mixed success. Peak response outputs predicted from the non-normalized water demand curves were significantly correlated with obtained peak responding for water in a separate condition, but none of the remaining three predicted correlations was statistically significant. The demand analysis fared better in predicting choice: relative consumption of food and water under single schedules of reinforcement predicted preference under concurrent schedules significantly better than chance.
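For readers unfamiliar with the demand framework: analyses of this kind fit consumption Q as a function of price P (for example, the fixed-ratio requirement) with a nonlinear demand equation. One widely used form (given here only as an example; it may not be the equation fitted in this study) is

\ln Q = \ln L + b \ln P - a P,

where L approximates consumption at minimal price and a and b govern how elasticity changes with price. Breakpoints and peak response output are then derived from the fitted curve, and the normalized analysis rescales consumption and price relative to baseline consumption so that demand curves for different reinforcers can be averaged and compared.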