
Typical delay determines waiting time on periodic-food schedules: Static and dynamic tests

Journal of the Experimental Analysis of Behavior, 1988

Pigeons and other animals soon learn to wait (pause) after food delivery on periodic-food schedules before resuming the food-rewarded response. Under most conditions the steady-state duration of the average waiting time, t, is a linear function of the typical interfood interval. We describe three experiments designed to explore the limits of this process. In all experiments, t was associated with one key color and the subsequent food delay, T, with another. In the first experiment, we compared the relation between t (waiting time) and T (food delay) under two conditions: when T was held constant, and when T was an inverse function of t. The pigeons could maximize the rate of food delivery under the first condition by setting t to a consistently short value; optimal behavior under the second condition required a linear relation with unit slope between t and T. Despite this difference in optimal policy, the pigeons in both cases showed the same linear relation, with slope less than one, between t and T. This result was confirmed in a second parametric experiment that added a third condition, in which T + t was held constant. Linear waiting appears to be an obligatory rule for pigeons. In a third experiment we arranged for a multiplicative relation between t and T (positive feedback), and produced either very short or very long waiting times as predicted by a quasi-dynamic model in which waiting time is strongly determined by the just-preceding food delay.
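The linear-waiting rule and the positive-feedback condition described above can be sketched as follows. This is a minimal illustration, not the authors' fitted model: the function names, the slope of 0.3, and the intercept of 1.0 are assumptions chosen only to show the qualitative behavior (convergence under weak feedback, runaway waiting times under strong feedback).

```python
def linear_waiting(T, slope=0.3, intercept=1.0):
    """Waiting time t as a linear function of food delay T (s), slope < 1."""
    return slope * T + intercept

def feedback_series(t0, gain, slope=0.3, intercept=1.0, n=10):
    """Multiplicative (positive-feedback) condition: the food delay T on each
    cycle is set by the just-preceding waiting time t (T = gain * t)."""
    t = t0
    series = [t]
    for _ in range(n - 1):
        T = gain * t                      # delay depends on preceding wait
        t = linear_waiting(T, slope, intercept)
        series.append(t)
    return series

# With slope * gain < 1 the series settles at a short stable waiting time;
# with slope * gain > 1 it grows without bound, mirroring the "very short
# or very long" outcomes the abstract reports.
short_run = feedback_series(5.0, gain=2.0)   # 0.3 * 2 = 0.6 < 1: converges
long_run = feedback_series(5.0, gain=5.0)    # 0.3 * 5 = 1.5 > 1: diverges
```

The fixed point under weak feedback is intercept / (1 − slope × gain), so the choice of gain relative to the slope determines which regime the pigeon's waiting times drift into.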

Dynamics of waiting in pigeons

Journal of the Experimental Analysis of Behavior, 1996

Two experiments used response-initiated delay schedules to test the idea that when food reinforcement is available at regular intervals, the time an animal waits before its first operant response (waiting time) is proportional to the immediately preceding interfood interval (linear waiting). In Experiment 1 the interfood intervals varied from cycle to cycle according to one of four sinusoidal sequences with different amounts of added noise. Waiting times tracked the input cycle in a way that showed they were affected by interfood intervals earlier than the immediately preceding one. In Experiment 2 different patterns of long and short interfood intervals were presented, and the results implied that waiting times are disproportionately influenced by the shortest of recent interfood intervals. A model based on this idea is shown to account for a wide range of results on the dynamics of timing behavior.
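The "shortest recent interval" idea can be sketched in a few lines. The exact model form and the fraction parameter below are assumptions for illustration, not the authors' fitted model:

```python
def predicted_wait(recent_ifis, fraction=0.3):
    """Waiting time set by the shortest of the last few interfood intervals (s)."""
    return fraction * min(recent_ifis)

# A single short interval among recent long ones pulls the predicted wait
# down disproportionately, unlike an average-based rule would:
predicted_wait([60.0, 60.0, 60.0])   # all-long history: 0.3 * 60 s
predicted_wait([60.0, 5.0, 60.0])    # one short IFI dominates: 0.3 * 5 s
```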

Pigeons' wait-time responses to transitions in interfood-interval duration: Another look at cyclic schedule performance

Journal of the Experimental Analysis of Behavior, 1993

Recent developments reveal that animals can rapidly learn about intervals of time. We studied the nature of this fast-acting process in two experiments. In Experiment 1 pigeons were exposed to a modified fixed-time schedule, in which the time between food rewards (interfood interval) changed at an unpredictable point in each session, either decreasing from 15 to 5 s (step-down) or increasing from 15 to 45 s (step-up). The birds were able to track under both conditions by producing postreinforcement wait times proportional to the preceding interfood-interval duration. However, the time course of responding differed: Tracking was apparently more gradual in the step-up condition. Experiment 2 studied the effect of having both kinds of transitions within the same session by exposing pigeons to a repeating (cyclic) sequence of the interfood-interval values used in Experiment 1. Pigeons detected changes in the input sequence of interfood intervals, but only for a few sessions; discrimination worsened with further training. The dynamic effects we observed do not support a linear waiting process of time discrimination, but instead point to a timing mechanism based on the frequency and recency of prior interfood intervals and not the preceding interfood interval alone.

Waiting in pigeons: the effects of daily intercalation on temporal discrimination

1992

Pigeons trained on cyclic-interval schedules adjust their postfood pause from interval to interval within each experimental session. But on regular fixed-interval schedules, many sessions at a given parameter value are usually necessary before the typical fixed-interval "scallop" appears. In the first case, temporal control appears to act from one interfood interval to the next; in the second, it appears to act over hundreds of interfood intervals. The present experiments look at the intermediate case: daily variation in schedule parameters. In Experiments 1 and 2 we show that pauses proportional to interfood interval develop on short-valued response-initiated-delay schedules when parameters are changed daily, that additional experience under this regimen leads to little further improvement, and that pauses usually change as soon as the schedule parameter is changed. Experiment 3 demonstrates identical waiting behavior on fixed-interval and response-initiated-delay schedules when the food delays are short (<20 s) and conditions are changed daily. In Experiment 4 we show that daily intercalation prevents temporal control when interfood intervals are longer (25 to 60 s). The results of Experiment 5 suggest that downshifts in interfood interval produce more rapid waiting-time adjustments than upshifts. These and other results suggest that the effects of short interfood intervals are more persistent than those of long intervals.

Key words: linear waiting, timing, fixed-interval schedules, response-initiated delay schedules, key peck, pigeons

One of the most reliable aspects of performance on any reinforcement schedule is the postreinforcement pausing observed when reinforcers are delivered at regular time intervals. Independent of any response-reinforcer contingency, birds and mammals (including humans, under some conditions) learn to postpone food-related responses after each food delivery for a time proportional to the typical interfood interval (temporal control: Chung &

Determinants of pigeons' waiting time: Effects of interreinforcement interval and food delay

Journal of the Experimental Analysis of Behavior, 1990

Four pigeons performed on three types of schedules at short (i.e., 10, 30, or 60 s) interreinforcement intervals: (a) a delay-dependent schedule in which the interreinforcement interval was held constant (i.e., increases in waiting time decreased food delay), (b) an interreinforcement-interval-dependent schedule in which food delay was held constant (i.e., increases in waiting time increased the interreinforcement interval), and (c) a both-dependent schedule in which increases in waiting time produced increases in the interreinforcement interval but decreases in food delay. Waiting times were typically longer under the delay-dependent schedules than under the interreinforcement-interval-dependent schedules. For 1 subject, waiting times under the both-dependent schedule were intermediate between those under the other two schedule types; for the other subjects, waiting times under the both-dependent procedure were similar either to those under the delay-dependent schedule or to those under the interreinforcement-interval-dependent schedule, depending on both the subject and the interreinforcement interval. These results indicate that neither the interreinforcement interval nor food delay is the primary variable controlling waiting time, but rather that the two interact in a complex manner to determine waiting times.

Responding of pigeons under variable-interval schedules of signaled-delayed reinforcement: effects of delay-signal duration

Journal of the Experimental Analysis of Behavior, 1990

Two experiments with pigeons examined the relation of the duration of a signal for delay ("delay signal") to rates of key pecking. The first employed a multiple schedule comprising two components with equal variable-interval 60-s schedules of 27-s delayed food reinforcement. In one component, a short (0.5-s) delay signal, presented immediately following the key peck that began the delay, was increased in duration across phases; in the second component the delay signal initially was equal to the length of the programmed delay (27 s) and was decreased across phases. Response rates prior to delays were an increasing function of delay-signal duration. As the delay signal was decreased in duration, response rates were generally higher than those obtained under identical delay-signal durations as the signal was increased in duration. In Experiment 2 a single variable-interval 60-s schedule of 27-s delayed reinforcement was used. Delay-signal durations were again increased gradually across phases. As in Experiment 1, response rates increased as the delay-signal duration was increased. Following the phase during which the signal lasted the entire delay, shorter delay-signal-duration conditions were introduced abruptly, rather than gradually as in Experiment 1, to determine whether the gradual shortening of the delay signal accounted for the differences observed in response rates under identical delay-signal conditions in Experiment 1. Response rates obtained during the second exposures to the conditions with shorter signals were higher than those observed under identical conditions as the signal duration was increased, as in Experiment 1. In both experiments, rates and patterns of responding during delays varied greatly across subjects and were not systematically related to delay-signal durations.
The effects of the delay signal may be related to the signal's role as a discriminative stimulus for adventitiously reinforced intradelay behavior, or the delay signal may have served as a conditioned reinforcer by virtue of the temporal relation between it and presentation of food.

Additional-delay schedules: A continuum of temporal contingencies by varying food delay

Journal of the Experimental Analysis of Behavior, 1990

Pigeons performed on discrete-trial, temporally defined schedules in which the food delay (D) was adjusted according to the latency of the key peck (X) and two schedule parameters (t and A). The schedule function was D = A(t - X), where D is the experienced delay between a response and a reinforcer, t is the maximum latency below which the present contingencies apply, and A is the additional delay to reinforcement for each second the response latency is shorter than t. When A = 0 s, the schedule is a continuous reinforcement schedule with immediate reinforcement. When A = 1 s, the schedule is a conjunctive fixed-ratio 1 fixed-time t-s schedule. As A approaches infinity, the schedule becomes a differential-reinforcement-of-long-latency schedule. The latencies for subjects with t = 10 s and t = 30 s were observed under the present schedules at seven values of A between 0 s and 11 s. In addition, the latencies for subjects with t = 30 s were observed at A values of 31 s to 41 s. As the A value increased, the latencies approached the t value for subjects with t = 10 s. The latencies for 30-s-t subjects did not approach t, even when the A value was 41 s. The latencies for 10-s-t subjects at the 11-s A value were longer than those under yoked conditions having exactly the same delays and interreinforcement intervals. These results demonstrated a continuum of latency related to the schedule continuum (value of A) at a small t value.
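The schedule function and its limiting cases can be illustrated with a short sketch. The clipping of D at zero for latencies at or beyond t is an assumption here, consistent with the abstract's description of the contingency applying only below the t value:

```python
def food_delay(X, t, A):
    """Delay D = A(t - X) between a response at latency X (s) and the reinforcer.
    Latencies at or beyond t are assumed to be reinforced immediately."""
    return A * (t - X) if X < t else 0.0

food_delay(4.0, 10.0, 0.0)   # A = 0: immediate reinforcement regardless of X
food_delay(4.0, 10.0, 1.0)   # A = 1: X + D = t, the conjunctive FR 1 FT t-s case
food_delay(4.0, 10.0, 5.0)   # large A: short latencies cost dearly, pushing
                             # latencies toward t (DRL-like in the limit)
```

With A = 1 the response latency plus the imposed delay always sums to t, which is why that case reduces to the conjunctive fixed-ratio 1 fixed-time t-s schedule named in the abstract.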

Key pecking of pigeons under variable-interval schedules of briefly signaled delayed reinforcement: effects of variable-interval value

Journal of the Experimental …, 1992

Key pecking of 4 pigeons was maintained under a multiple variable-interval 20-s variable-interval 120-s schedule of food reinforcement. When rates of key pecking were stable, a 5-s unsignaled, nonresetting delay to reinforcement separated the first peck after an interval elapsed from reinforcement in both components. Rates of pecking decreased substantially in both components. When rates were stable, the situation was changed such that the peck that began the 5-s delay also changed the color of the keylight for 0.5 s (i.e., the delay was briefly signaled). Rates increased to near-immediate reinforcement levels. In subsequent conditions, delays of 10 and 20 s, still briefly signaled, were tested. Although rates of key pecking during the component with the variable-interval 120-s schedule did not change appreciably across conditions, rates during the variable-interval 20-s component decreased greatly in 1 pigeon at the 10-s delay and decreased in all pigeons at the 20-s delay. In a control condition, the variable-interval 20-s schedule with 20-s delays was changed to a variable-interval 35-s schedule with 5-s delays, thus equating nominal rates of reinforcement. Rates of pecking increased to baseline levels. Rates of pecking, then, depended on the value of the briefly signaled delay relative to the programmed interfood times, rather than on the absolute delay value. These results are discussed in terms of similar findings in the literature on conditioned reinforcement, delayed matching to sample, and classical conditioning.

Key pecking of pigeons under variable-interval schedules of briefly signaled delayed reinforcement: a further test of Pavlovian mechanisms


Prospective and retrospective timing by pigeons

Learning & Behavior, 2010

Pigeons pecked on three keys, responses to one of which could be reinforced after a few pecks, to a second key after a somewhat larger number of pecks, and to a third key after the maximum pecking requirement. The values of the pecking requirements and the proportion of trials ending with reinforcement were varied. Transits among the keys were an orderly function of peck number and showed approximately proportional changes with changes in the pecking requirements, consistent with Weber's law. Standard deviations of the switch points between successive keys increased more slowly within a condition than across conditions. Changes in reinforcement probability produced changes in the location of the psychometric functions that were consistent with models of timing. Analyses of the number of pecks emitted and the duration of the pecking sequences demonstrated that peck number was the primary determinant of choice, but that passage of time also played some role. We capture the basic results with a standard model of counting, which we qualify to account for the secondary experiments.

introduced a technique for the study of timing in animals called categorical scaling. Trials began with the illumination of three pecking keys (right, center, left), corresponding to three temporal epochs; their illumination "started the clock." Responses to the right key were sometimes reinforced after a short interval (e.g., 8 s), to the center key after a longer interval (e.g., 16 s), and to the left key after the maximum interval (e.g., 32 s). On any trial the reinforcer was arranged for only one of the categorical responses at the prearranged time of reinforcement. The resulting pattern of behavior was orderly: Each bird began by pecking on the right key; if food was not obtained at the prescribed time, it switched to the center key, and if not at the prescribed time for that key, it switched to the left key. Graphs of this behavioral pattern showed a decreasing rate on the "short" key as trial time passed, an increasing then decreasing rate on the "intermediate" key, and, finally, an increasing rate on the "long" key. The pigeons experienced different triplets of intervals (e.g., 4-8-16 s, 8-16-32 s, 16-32-64 s), and the regularities in performance, normed to the middle value, were consistent with scalar timing and thus with Weber's law.
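The scalar regularity described above, in which performance across different triplets superimposes when normed to each triplet's middle value, can be sketched as follows. The switch-point ratio of 0.7 is an illustrative assumption, not a value reported in the study:

```python
def switch_point(requirement, ratio=0.7):
    """Mean switch point scales proportionally with the requirement,
    the scalar property underlying Weber's law."""
    return ratio * requirement

def normed_switch(requirement, middle):
    """Switch point normed to the triplet's middle value: different
    triplets then yield the same normalized pattern."""
    return switch_point(requirement) / middle

# Triplet 8-16-32 and triplet 16-32-64 superimpose after norming:
normed_switch(8, 16), normed_switch(16, 16)    # short and middle keys
normed_switch(16, 32), normed_switch(32, 32)   # same normed values
```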