Concurrent-Schedule Performance: Effects of Relative and Overall Reinforcer Rate

The effects of changeover delays of fixed or variable duration on concurrent variable-interval performance in pigeons

Animal Learning & Behavior, 1981

The effects of changeover delays of fixed or variable duration on concurrent variable-interval performance in pigeons were investigated in a series of three experiments. Experiment 1 compared the effects of a fixed, variable, or variable signaled changeover delay on interchangeover times and on responding during and after the changeover delay. The duration of the changeover delays was systematically varied in Experiment 2, and the relative reinforcement frequencies were manipulated in Experiment 3. Interchangeover times were shorter with changeover delays of variable duration than with those of fixed duration. Changeover delays of fixed duration produced higher response rates during the changeover delay than after the changeover delay had elapsed; changeover delays of variable duration produced such differences to a lesser extent. It was concluded that the changeover delay in concurrent variable-interval schedules of reinforcement functionally acts as a delay period to the next opportunity for reinforcement, possibly serving as a conditioned reinforcer for the behavior preceding it (the interchangeover time) and as a discriminative stimulus for the behavior in its presence (response rates during the delay).
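The changeover-delay (COD) contingency described above can be sketched as a simple controller: after each switch between alternatives, reinforcement is withheld until a delay has elapsed, where the delay is either a constant (fixed COD) or sampled from a set of values at each changeover (variable COD). The class and parameter names below are illustrative, not from the paper:

```python
import random

class ChangeoverDelay:
    """Withhold reinforcement for a period after each changeover.

    `duration` is either a single number of seconds (fixed COD) or a
    list/tuple of durations from which one is sampled at each
    changeover (variable COD).
    """

    def __init__(self, duration):
        self.duration = duration
        self.switch_time = None   # time of the most recent changeover
        self.current_delay = 0.0

    def changeover(self, now):
        """Record a switch between alternatives at time `now`."""
        self.switch_time = now
        if isinstance(self.duration, (list, tuple)):
            self.current_delay = random.choice(self.duration)  # variable COD
        else:
            self.current_delay = self.duration                  # fixed COD

    def reinforcement_allowed(self, now):
        """True once the current delay has elapsed since the last switch."""
        if self.switch_time is None:
            return True
        return now - self.switch_time >= self.current_delay

cod = ChangeoverDelay(2.0)               # fixed 2-s COD
cod.changeover(now=10.0)
print(cod.reinforcement_allowed(11.0))   # still inside the delay: False
print(cod.reinforcement_allowed(12.5))   # delay has elapsed: True
```

The paper's discriminative-stimulus interpretation falls out naturally here: responses emitted while `reinforcement_allowed` is false can never be reinforced, so the delay period and the post-delay period set different contingencies.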

Sensitivity of time allocation to an overall reinforcer rate feedback function in concurrent interval schedules

Journal of the Experimental Analysis of Behavior, 1989

Six pigeons were trained on concurrent variable-interval schedules in which feedback functions arranged that the overall reinforcer rate either (a) was independent of preference, (b) decreased with increasing absolute preference, or (c) increased with increasing absolute preference. In Experiment 1, the reinforcer rate in an interreinforcement interval was determined by the absolute time-allocation ratio in the previous interval. When arranged reinforcer ratios were varied, there was no evidence of control over preference by overall reinforcer rate. In Experiment 2, the feedback function arranged that reinforcer rates were an inverse function of absolute preference, and window durations were fixed times. In Phase 1, using schedules that provided a four-to-one reinforcer ratio, the window duration was decreased from 20 s to 5 s over four conditions. Then, in Phases 2 and 3, the arranged reinforcer ratios were varied. In Phase 2, the reinforcer rate in the current 5-s time window was determined by preference in the previous 5-s window, and in Phase 3, the window durations were 20 s. Again, there was no indication of control by obtained overall reinforcer rate. These data call into question theories that suggest that the process underlying matching is one of maximizing overall reinforcer rates, or that preference in concurrent aperiodic schedules is controlled to any extent by overall reinforcer rate. They also question the notion that concurrent-schedule preference is controlled by molecular maximizing.
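The feedback arrangement described above makes the overall reinforcer rate in the current time window a function of absolute preference in the previous window. A minimal sketch of such an inverse feedback function (the functional form and parameter values are illustrative, not the paper's actual schedule parameters):

```python
import math

def overall_rate(prev_time_left, prev_time_right,
                 base_rate=2.0, slope=1.0):
    """Overall reinforcers/min arranged for the current window as a
    decreasing function of *absolute* preference in the previous window.

    Absolute preference is |log(t_left / t_right)|, so the arranged
    rate is highest at indifference and falls off symmetrically as
    preference becomes more extreme in either direction.
    """
    pref = abs(math.log(prev_time_left / prev_time_right))
    return base_rate / (1.0 + slope * pref)

print(overall_rate(10.0, 10.0))  # indifference: full base rate, 2.0
print(overall_rate(16.0, 4.0))   # strong preference: reduced rate
```

Under this arrangement a rate-maximizing animal should hold preference near indifference; the paper's finding that preference instead tracked the arranged reinforcer ratios is what argues against control by overall reinforcer rate.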

Effect of Relative Reinforcement Duration in Concurrent Schedules with Different Reinforcement Densities: A Replication of Davison (1988)

2018

Previous studies have challenged the prediction of the Generalized Matching Law that relative, but not absolute, values of reinforcement parameters affect relative choice measures. Six pigeons were run in an experiment involving concurrent variable-interval schedules with unequal reinforcer durations associated with the response alternatives (10 s versus 3 s), a systematic replication of Davison (1988). Programmed reinforcement frequency was kept equal for the competing responses while their absolute value was varied. Measures of both response ratios and time ratios showed preference for the larger-duration alternative, and that preference did not change systematically with changes in absolute reinforcer frequency. The present results support the relativity assumption of the Matching Law. It is suggested that Davison's results were due to uncontrolled variations in obtained reinforcement frequency. Keywords: choice, preference, overall reinforcer frequency, reinforcer magnitude...
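For reference, the Generalized Matching Law invoked here is usually written in logarithmic ratio form:

```latex
\log\frac{B_1}{B_2} = a\,\log\frac{r_1}{r_2} + \log c
```

where $B_1, B_2$ are response (or time) allocations to the two alternatives, $r_1, r_2$ the obtained reinforcer rates (or, in a concatenated form, another reinforcer parameter such as duration), $a$ is sensitivity, and $c$ is bias. Strict matching is the special case $a = 1$, $c = 1$. The "relativity assumption" tested above is visible directly in the equation: only the ratio $r_1/r_2$ appears, so scaling both absolute rates equally should leave preference unchanged.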

Rates of responding in the pigeon generated by simple and complex schedules which provide the same rates of reinforcement

Animal Learning & Behavior, 1976

Four pigeons pecked for food reinforcement on variable-interval 1-min schedules and on the variable-interval 1-min components of multiple, concurrent, and pseudoconcurrent schedules. The pseudoconcurrent schedule provided only one schedule of reinforcement, but any reinforcer could be collected by responding on either of two keys. The rate of responding generated by the variable-interval schedule was not greater than the rates of responding generated by the components of the complex schedules. But the rate of reinforcement obtained from the variable-interval schedule was greater than the rates of reinforcement obtained from the components of the multiple schedule. These results may contradict the equation proposed by Herrnstein (1970). The equation predicts that the rate of responding generated by a schedule of reinforcement will be greater when the schedule appears alone than when it appears as one component of a complex schedule.
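The equation proposed by Herrnstein (1970) referred to here is the single-alternative hyperbola:

```latex
B = \frac{kR}{R + R_e}
```

where $B$ is the response rate, $R$ the reinforcement rate obtained for that response, $R_e$ the reinforcement from all other sources, and $k$ the asymptotic response rate. Embedding the schedule as one component of a complex schedule adds the other components' reinforcers to the "other sources" term in the denominator, which is why the equation predicts a lower response rate in a component than when the same schedule appears alone, the prediction the results above call into question.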

Rapid acquisition of preference in concurrent schedules: Effects of reinforcement amount

Behavioural Processes, 2007

Pigeons responded in a concurrent-chains procedure in which terminal-link reinforcer variables were changed unpredictably across sessions. In Experiment 1, the terminal-link schedules were fixed-interval (FI) 8 s and FI 16 s, and the reinforcer magnitudes were 2 s and 4 s. In Experiment 2, the probability of reinforcement (100% or 50%) was varied with immediacy and magnitude. Multiple-regression analyses showed that pigeons' initial-link response allocation was determined by current-session reinforcer variables, similar to previous studies which have varied only immediacy. Sensitivity coefficients were positive and statistically significant for all reinforcer variables in both experiments. Analyses of responding within individual sessions showed that final levels of preference for dominated sessions, in which all reinforcer variables favored the same terminal link, were more extreme than for tradeoff sessions, in which at least one reinforcer variable favored each alternative. This result implies that response allocation was determined by multiple reinforcer variables within individual sessions, consistent with the concatenated matching law. However, in Experiment 2, there was a nonlinear (sigmoidal) relationship between response allocation and relative value, which suggests the possibility that reinforcer variables may interact during acquisition, contrary to the matching law.
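The concatenated matching law mentioned above extends the generalized matching law by summing a separate ratio term for each reinforcer variable; with immediacy $I$ (the reciprocal of delay), magnitude $M$, and probability $P$ as in the experiments above, it can be written (subscript labels on the sensitivity exponents are ours):

```latex
\log\frac{B_1}{B_2}
  = a_I \log\frac{I_1}{I_2}
  + a_M \log\frac{M_1}{M_2}
  + a_P \log\frac{P_1}{P_2}
  + \log c
```

Because the terms combine additively, the model predicts that each reinforcer variable contributes independently to preference; the sigmoidal relationship found in Experiment 2 is a departure from exactly this additivity.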

The Relation Between Response Rates and Reinforcement Rates in a Multiple Schedule

Journal of the Experimental Analysis of Behavior, 1968

In a multiple schedule, exteroceptive stimuli change when the reinforcement schedule is changed. Each performance in a multiple schedule may be considered concurrent with other behavior. Accordingly, two variable‐interval schedules of reinforcement were arranged in a multiple schedule, and a third, common variable‐interval schedule was programmed concurrently with each of the first two. A quantitative statement was derived that relates as a ratio the response rates for the first two (multiple) variable‐interval schedules. The value of the ratio depends on the rates of reinforcement provided by those schedules and the reinforcement rate provided by the common variable‐interval schedule. The following implications of the expression were evaluated in an experiment with pigeons: (a) if the reinforcement rates for the multiple variable‐interval schedules are equal, then the ratio of response rates is unity at all reinforcement rates of the common schedule; (b) if the reinforcement rates ...
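One way to arrive at a ratio expression of the kind described (whether this is the paper's exact statement is an assumption on our part) is to apply Herrnstein's (1970) hyperbola to each multiple-schedule component, counting the common concurrent schedule's reinforcer rate $r_c$ as part of the "other" reinforcement for that component:

```latex
\frac{B_1}{B_2}
  = \frac{k r_1 / (r_1 + r_c + r_e)}{k r_2 / (r_2 + r_c + r_e)}
  = \frac{r_1 \,(r_2 + r_c + r_e)}{r_2 \,(r_1 + r_c + r_e)}
```

Implication (a) follows directly: when $r_1 = r_2$, the two bracketed sums are equal and the ratio is unity for every value of $r_c$.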

The Response‐Reinforcement Dependency in Fixed‐Interval Schedules of Reinforcement

Journal of the Experimental Analysis of Behavior, 1970

Pigeons were exposed to four different schedules of food reinforcement that arranged a fixed minimum time interval between reinforcements (60 sec or 300 sec). The first was a standard fixed‐interval schedule. The second was a schedule in which food was presented automatically at the end of the fixed time interval as long as a response had occurred earlier. The third and fourth schedules were identical to the first two except that the first response after reinforcement changed the color on the key. When the schedule required a peck after the interval elapsed, the response pattern consisted of a pause after reinforcement followed by responding at a high rate until reinforcement. When a response was not required after the termination of the interval, the pattern consisted of a pause after reinforcement, followed by responses and then by a subsequent pause until reinforcement. Having the first response after reinforcement change the color on the key had little effect on performance. Pos...