Reply to Aksentijevic: It is a matter of what is countable and how neurons learn

Investigation of sequence processing: A cognitive and computational neuroscience perspective

Serial order processing, or sequence processing, underlies many human activities, including speech, language, skill learning, planning, and problem-solving. Investigating the neural bases of sequence processing enables us to understand serial order in cognition and also helps in building intelligent devices. In this article, we review various cognitive issues related to sequence processing, with examples. We then describe experimental results that implicate various brain areas. Finally, we present a theoretical approach based on statistical models and the reinforcement learning paradigm; these theoretical ideas are useful for studying sequence learning in a principled way. The article also suggests a two-way process diagram integrating experimentation (cognitive neuroscience) with theory/computational modelling (computational neuroscience). This integrated framework is useful not only for the present study of serial order, but also for understanding many other cognitive processes.
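
To make the reinforcement learning idea concrete, here is a minimal sketch of how a tabular Q-learning agent could acquire a fixed serial order, with the position in the sequence as the state and a reward for producing the correct next item. The symbols, reward scheme, and parameter values are illustrative assumptions, not the model described in the article.

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for serial-order learning:
# the agent must emit a target sequence one symbol at a time,
# with the current position in the sequence serving as the state.
TARGET = ["A", "B", "C", "D"]
ACTIONS = ["A", "B", "C", "D"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(float)  # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection over the symbol vocabulary."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(2000):
    for pos, correct in enumerate(TARGET):
        action = choose_action(pos)
        reward = 1.0 if action == correct else 0.0
        next_pos = pos + 1
        # Bootstrapped target: reward plus discounted value of the next
        # position (zero beyond the end of the sequence).
        next_value = 0.0 if next_pos >= len(TARGET) else max(
            Q[(next_pos, a)] for a in ACTIONS
        )
        Q[(pos, action)] += ALPHA * (reward + GAMMA * next_value - Q[(pos, action)])

# After training, the greedy policy should reproduce the target order.
print([max(ACTIONS, key=lambda a: Q[(p, a)]) for p in range(len(TARGET))])
```
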

Latent structure in random sequences drives neural learning toward a rational bias

Proceedings of the National Academy of Sciences of the United States of America, 2015

People generally fail to produce random sequences: they overuse alternating patterns and avoid repeating ones, a tendency known as the gambler's fallacy. We explain the neural basis of this bias in terms of a biologically motivated neural model that learns from errors in predicting what will happen next. Through mere exposure to random sequences over time, the model naturally develops a representation biased toward alternation, because of its sensitivity to surprisingly rich statistical structure that emerges in these random sequences. Furthermore, the model directly produces the best-fitting bias-gain parameter for an existing Bayesian model, with which we obtain an accurate fit to the human data on random sequence production. These results show that our seemingly irrational, biased view of randomness can instead be understood as the perfectly reasonable response of an effective learning mechanism to subtle statistical structure embedded in random sequences.
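
One simple way to see the statistical structure the abstract alludes to is to compare how long it takes, on average, for an alternation versus a repetition to first appear in a stream of fair coin flips: the alternation HT is first seen after about 4 flips, the repetition HH after about 6, even though both occur with equal overall frequency. The simulation below is only an illustration of that waiting-time asymmetry, not the neural model from the paper.

```python
import random

def mean_waiting_time(pattern, trials=50_000):
    """Average number of fair coin flips until `pattern` first appears."""
    total = 0
    for _ in range(trials):
        history = ""
        while not history.endswith(pattern):
            history += random.choice("HT")
        total += len(history)
    return total / trials

# Alternations are encountered sooner than repetitions in random streams,
# which gives an error-driven learner more (and earlier) evidence for them.
print("HT:", mean_waiting_time("HT"))  # expected value: 4
print("HH:", mean_waiting_time("HH"))  # expected value: 6
```
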

The Neural Representation of Sequences: From Transition Probabilities to Algebraic Patterns and Linguistic Trees

Neuron, 2015

A sequence of images, sounds, or words can be stored at several levels of detail, from specific items and their timing to abstract structure. We propose a taxonomy of five distinct cerebral mechanisms for sequence coding: transitions and timing knowledge, chunking, ordinal knowledge, algebraic patterns, and nested tree structures. In each case, we review the available experimental paradigms and list the behavioral and neural signatures of the systems involved. Tree structures require a specific recursive neural code, as yet unidentified by electrophysiology and possibly unique to humans, which may explain the singularity of human language and cognition.
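
As a concrete illustration of only the first and simplest level of this taxonomy, the sketch below estimates first-order transition probabilities from a sequence of discrete items. The higher levels (chunking, ordinal codes, algebraic patterns, nested trees) require progressively richer representations than this counting scheme, and the example sequence is hypothetical.

```python
from collections import Counter, defaultdict

def transition_probabilities(sequence):
    """Estimate first-order transition probabilities P(next | current)
    from a sequence of discrete items (the lowest level of the taxonomy)."""
    counts = defaultdict(Counter)
    for current, nxt in zip(sequence, sequence[1:]):
        counts[current][nxt] += 1
    return {
        current: {nxt: c / sum(nexts.values()) for nxt, c in nexts.items()}
        for current, nexts in counts.items()
    }

# Example: a toy item sequence in which A is usually followed by B.
print(transition_probabilities(list("ABABACABAB")))
# -> {'A': {'B': 0.8, 'C': 0.2}, 'B': {'A': 1.0}, 'C': {'A': 1.0}}
```
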