
A model of interval timing by neural integration

2011

We show that simple assumptions about neural processing lead to a model of interval timing as a temporal integration process, in which a noisy firing-rate representation of time rises linearly on average toward a response threshold over the course of an interval. Our assumptions include: that neural spike trains are approximately independent Poisson processes, that correlations among them can be largely cancelled by balancing excitation and inhibition, that neural populations can act as integrators, and that the objective of timed behavior is maximal accuracy and minimal variance. The model accounts for a variety of physiological and behavioral findings in rodents, monkeys, and humans, including ramping firing rates between the onset of reward-predicting cues and the receipt of delayed rewards, and universally scale-invariant response time distributions in interval timing tasks. It furthermore makes specific, well-supported predictions about the skewness of these distributions, a feature of timing data that is usually ignored. The model also incorporates a rapid (potentially one-shot) duration-learning procedure. Human behavioral data support the learning rule's predictions regarding learning speed in sequences of timed responses. These results suggest that simple, integration-based models should play as prominent a role in interval timing theory as they do in theories of perceptual decision making, and that a common neural mechanism may underlie both types of behavior.
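The accumulation-to-threshold idea in this abstract can be sketched in a few lines. The code below is a minimal illustration, not the paper's fitted model: it assumes a Gaussian diffusion approximation to Poisson input in which the increment variance scales with the drift (the `fano` parameter is an illustrative name and value). That variance scaling is what keeps the coefficient of variation of the first-passage times constant when the drift is changed to time different intervals, and the resulting distributions are right-skewed, as the abstract predicts.

```python
import random

def first_passage_time(threshold, drift, fano=0.1, dt=0.001, rng=None):
    """First-passage time of a noisy accumulator rising toward a threshold.

    Diffusion approximation to Poisson-like input: the increment variance
    scales with the drift (fano sets the scale), which is what makes the
    resulting response times scale-invariant. Parameter values are
    illustrative, not fitted.
    """
    rng = rng or random.Random()
    noise_sd = (fano * drift) ** 0.5
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise_sd * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return t

def coeff_var(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

rng = random.Random(1)
# Halving the drift doubles the timed interval, but the coefficient of
# variation stays roughly constant: scalar timing.
short = [first_passage_time(1.0, drift=2.0, rng=rng) for _ in range(500)]
long_ = [first_passage_time(1.0, drift=1.0, rng=rng) for _ in range(500)]
```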

Timing as an intrinsic property of neural networks: evidence from in vivo and in vitro experiments

Philosophical Transactions of the Royal Society B: Biological Sciences, 2014

The discrimination and production of temporal patterns on the scale of hundreds of milliseconds are critical to sensory and motor processing. Indeed, most complex behaviours, such as speech comprehension and production, would be impossible in the absence of sophisticated timing mechanisms. Despite the importance of timing to human learning and cognition, little is known about the underlying mechanisms, in particular whether timing relies on specialized dedicated circuits and mechanisms or on general and intrinsic properties of neurons and neural circuits. Here, we review experimental data describing timing and interval-selective neurons in vivo and in vitro. We also review theoretical models of timing, focusing primarily on the state-dependent network model, which proposes that timing in the subsecond range relies on the inherent time-dependent properties of neurons and the active neural dynamics within recurrent circuits. Within this framework, time is naturally encoded in populations of neurons whose pattern of activity is dynamically changing in time. Together, we argue that current experimental and theoretical studies provide sufficient evidence to conclude that at least some forms of temporal processing reflect intrinsic computations based on local neural network dynamics.

Precision of Neural Timing: Effects of Convergence and Time-Windowing

2002

We study the improvement in timing accuracy in a neural system having n identical input neurons projecting to one target neuron. The n input neurons receive the same stimulus but fire at stochastic times selected from one of four specified probability densities, f, each with standard deviation 1.0 msec. The target cell fires if and when it receives m inputs within a time window of ε msec. Let σ_{n,m,ε} denote the standard deviation of the time of firing of the target neuron (i.e. the standard deviation of the target neuron's latency relative to the arrival time of the stimulus). Mathematical analysis shows that σ_{n,m,ε} is a very complicated function of n, m, and ε. Typically, σ_{n,m,ε} is a non-monotone function of m and ε, and the improvement of timing accuracy is highly dependent on the shape of the probability density for the time of firing of the input neurons. For appropriate choices of m, ε, and f, the standard deviation σ_{n,m,ε} may be as low as 1/n. Thus, depending on these variables, remarkable improvements in timing accuracy of such a stochastic system may occur.
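The quantity σ_{n,m,ε} described above is easy to estimate by Monte Carlo. The sketch below assumes one concrete case of the setup: input firing times drawn from a Gaussian density with SD 1.0 msec (one of the four densities f considered), and a target that fires at the m-th spike of the first group of m spikes falling within a window of ε msec. Function names and the trial count are illustrative.

```python
import random
import statistics

def target_latency(n, m, eps, rng):
    """Firing time of a coincidence-detecting target neuron, or None.

    n input spike times are drawn from a Gaussian density (SD 1.0 msec);
    the target fires at the m-th spike of the first group of m spikes
    that all fall within a window of eps msec.
    """
    times = sorted(rng.gauss(0.0, 1.0) for _ in range(n))
    for j in range(m - 1, n):
        if times[j] - times[j - m + 1] <= eps:
            return times[j]
    return None

def sigma(n, m, eps, trials=4000, seed=0):
    """Monte Carlo estimate of sigma_{n,m,eps}, over trials where the
    target actually fired."""
    rng = random.Random(seed)
    latencies = [t for t in (target_latency(n, m, eps, rng)
                             for _ in range(trials)) if t is not None]
    return statistics.pstdev(latencies)
```

With n = m = 1 and a wide window, the estimate recovers the 1.0 msec SD of a single input, while convergent configurations such as sigma(50, 5, 0.5) yield a substantially smaller latency SD, illustrating the sharpening the paper analyzes.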

Functional modelling of the human timing mechanism

2001

Behaviour occurs in time, and precise timing in the range of seconds and fractions of seconds is, for most living organisms, necessary for successful interaction with the environment. Our ability to time discrete actions and to predict events on the basis of prior events indicates the existence of an internal timing mechanism. The nature of this mechanism provides essential constraints on models of the functional organisation of the brain. The present work indicates that there are discontinuities in the function of time close to 1 s and 1.4 s, both in the amount of drift in a series of produced intervals (Study I) and in the detectability of drift in a series of sounds (Study II). The similarities across different tasks further suggest that action and perceptual judgements are governed by the same (kind of) mechanism. Study III showed that series of produced intervals could be characterised by different amounts of positive fractal dependency related to the aforementioned discontinuities. In conjunction with other findings in the literature, these results suggest that timing of intervals up to a few seconds is strongly dependent on previous intervals and on the duration to be timed. This argues against a clock-counter mechanism, as proposed by scalar timing theory, according to which successive intervals are random and the size of the timing error conforms to Weber's law. A functional model is proposed, expressed in an autoregressive framework, which consists of a single-interval timer with error corrective feedback. The duration-specificity of the proposed model is derived from the order of error correction, as determined by a semi-flexible temporal integration span.
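The core of such a single-interval timer with error-corrective feedback can be sketched in a few lines. This is a minimal first-order illustration, not the thesis's fitted autoregressive model: the gain `alpha` and noise level are illustrative values, and the correction here acts on the accumulated timing error, so that with alpha > 0 the produced series stays locked to the target pace, while alpha = 0 lets the cumulative error wander as a random walk.

```python
import random

def produce_series(target, alpha, noise_sd, n, rng):
    """Series of n produced intervals from a single-interval timer with
    first-order error-corrective feedback (illustrative sketch).

    Each interval subtracts a fraction alpha of the accumulated deviation
    from the target pace before adding fresh timing noise.
    """
    intervals, asyn = [], 0.0
    for _ in range(n):
        i = target - alpha * asyn + rng.gauss(0.0, noise_sd)
        asyn += i - target          # accumulated deviation from target pace
        intervals.append(i)
    return intervals
```

Comparing the end-of-series drift with and without correction (alpha = 0.5 versus alpha = 0) shows the stabilizing effect of the feedback term.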

Time representation in neural network models trained to perform interval timing tasks

arXiv: Neurons and Cognition, 2019

To predict and maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then take the right action at the right time. However, we still lack a systematic understanding of the neural mechanism of how animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multi-seconds in working memory and appropriately combine time information with spatial information processing and decision making. Here, we addressed this problem by training neural network models on four timing tasks: interval production, interval comparison, timed spatial reproduction, and timed decision making. We studied time-coding principles of the network after training, and found them consistent with existing experimental observations. We reveal that neural networks perceive time intervals through the evolution of population state along a stereotypical trajectory, maintain time intervals by line attractors along which the ...

Encoding time in neural dynamic regimes with distinct computational tradeoffs

2021

Converging evidence suggests the brain encodes time in time-varying patterns of neural activity, including neural sequences, ramping activity, and complex dynamics. Temporal tasks that require producing the same time-dependent output patterns may have distinct computational requirements in regard to the need to exhibit temporal scaling or generalize to novel contexts. It is not known how neural circuits can both encode time and satisfy distinct computational and generalization requirements, nor whether similar patterns of neural activity at the population level can emerge from distinctly different network configurations. To begin to answer these questions, we trained RNNs on two timing tasks based on behavioral studies. The tasks had different input structures but required producing identically timed output patterns. Using a novel framework we quantified whether RNNs encoded two intervals using one of three different timing strategies: scaling, absolute, or stimu...

Neural Field Model for Measuring and Reproducing Time Intervals

Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation, 2019

The continuous real-time motor interaction with our environment requires the capacity to measure and produce time intervals in a highly flexible manner. Recent neurophysiological evidence suggests that the neural computational principles supporting this capacity may be understood from a dynamical systems perspective: Inputs and initial conditions determine how a recurrent neural network evolves from a "resting state" to a state triggering the action. Here we test this hypothesis in a time measurement and time reproduction experiment using a model of a robust neural integrator based on the theoretical framework of dynamic neural fields. During measurement, the temporal accumulation of input leads to the evolution of a self-stabilized bump whose amplitude reflects elapsed time. During production, the stored information is used to reproduce on a trial-by-trial basis the time interval either by adjusting input strength or initial condition of the integrator. We discuss the impact of the results on our goal to endow autonomous robots with a human-like temporal cognition capacity for natural human-robot interactions.
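The measure-then-reproduce scheme described above can be reduced to a noise-free toy version for illustration. The sketch below is a drastic simplification of the dynamic neural field model: the self-stabilized bump is collapsed to a single scalar amplitude that grows with accumulated input during measurement and is re-integrated during production. Function names and parameters are illustrative. It does show the paper's two readout mechanisms: re-integrating with the same input strength reproduces the interval, while changing the input strength rescales it.

```python
def measure(duration_s, input_strength=1.0, dt=0.01):
    """Accumulate a constant input for duration_s; the final amplitude
    (standing in for the self-stabilized bump) encodes elapsed time."""
    u, t = 0.0, 0.0
    while t < duration_s - 1e-9:
        u += input_strength * dt
        t += dt
    return u

def reproduce(amplitude, input_strength=1.0, dt=0.01):
    """Re-integrate until the stored amplitude is reached; the elapsed
    time reproduces the measured interval. A different input_strength
    rescales the reproduced interval."""
    u, t = 0.0, 0.0
    while u < amplitude - 1e-9:
        u += input_strength * dt
        t += dt
    return t
```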

Reinforcement Learning and Time Perception: A Model of Animal Experiments

2002

Animal data on delayed-reward conditioning experiments shows a striking property: the data for different time intervals collapses into a single curve when the data is scaled by the time interval. This is called the scalar property of interval timing. Here a simple model of a neural clock is presented and shown to give rise to the scalar property. The model is an accumulator consisting of noisy, linear spiking neurons. It is analytically tractable and contains only three parameters. When coupled with reinforcement learning, it simulates peak procedure experiments, producing both the scalar property and the pattern of single trial covariances.
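The scalar property of a spike-counting clock can be seen directly in a toy version of such an accumulator. The sketch below is not the paper's three-parameter model: it assumes the simplest case, a counter that fires after a fixed number of Poisson spikes. The time to threshold is then Gamma-distributed with coefficient of variation 1/sqrt(n_spikes), independent of the spike rate, so changing the rate retimes the interval while the scaled distributions coincide.

```python
import random

def time_to_threshold(rate_hz, n_spikes, rng):
    """Time for a Poisson spike counter to accumulate n_spikes events.

    The sum of exponential inter-spike intervals is Gamma(n_spikes,
    rate_hz) distributed, with CV = 1/sqrt(n_spikes) regardless of the
    rate: rescaling the rate changes the timed interval but not the
    shape of the scaled distribution (the scalar property).
    """
    return sum(rng.expovariate(rate_hz) for _ in range(n_spikes))

def coeff_var(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5 / m

rng = random.Random(0)
# A 4x slower rate times a 4x longer interval with the same CV.
t_short = [time_to_threshold(200.0, 100, rng) for _ in range(2000)]
t_long = [time_to_threshold(50.0, 100, rng) for _ in range(2000)]
```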

On the content of learning in interval timing: Representations or associations?

Behavioural Processes, 2013

Models of timing differ on two fundamental issues, the form of the representation and the content of learning. First, regarding the representation of time some models assume a linear encoding, others a logarithmic encoding. Second, regarding the content of learning cognitive models assume that the animal learns explicit representations of the intervals relevant to the task and that their behavior is based on a comparison of those representations, whereas associative models assume that the animal learns associations between its representations of time and responding, which then drive performance. In this paper, we show that some key empirical findings (timescale invariant psychometric curves, bisection point at the geometric mean of the trained durations in the bisection procedure, and location of the indifference point in the time-left procedure) seem to make these two issues interdependent. That is, cognitive models seem to entail a linear representation of time, and at least a certain class of associative models seem to entail a log representation of time. These interdependencies suggest new ways to compare and contrast timing models. This article is part of a Special Issue entitled: SQAB 2012.
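One of the empirical anchors mentioned above, the bisection point at the geometric mean, follows directly from a logarithmic representation of time: a probe T is equidistant from the trained anchors in log time when log T - log(short) = log(long) - log T, which solves to T = sqrt(short * long). A one-line illustration (function name is ours):

```python
import math

def bisection_point(short_s, long_s):
    """Indifference point in the bisection procedure under a logarithmic
    representation of time: the geometric mean of the trained anchors."""
    return math.sqrt(short_s * long_s)
```

For example, anchors of 2 s and 8 s give an indifference point of 4 s, not the arithmetic mean of 5 s.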