Using stigmergy to incorporate the time into artificial neural networks
Related papers
Using Stigmergy as a Computational Memory in the Design of Recurrent Neural Networks
Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, 2019
In this paper, a novel Recurrent Neural Network (RNN) architecture is designed and experimentally evaluated. The proposed RNN adopts a computational memory based on the concept of stigmergy. The basic principle of a Stigmergic Memory (SM) is that the activity of depositing/removing a quantity in the SM stimulates the subsequent deposit/removal activities. Accordingly, subsequent SM activities tend to reinforce/weaken each other, generating coherent coordination between the SM activities and the temporal input stimulus. We show that, in a supervised classification problem, the SM encodes the temporal input in an emergent representational model by coordinating the deposit, removal, and classification activities. This study lays down a basic framework for the derivation of an SM-RNN. A formal ontology of the SM is discussed, and the SM-RNN architecture is detailed. To appreciate the computational power of an SM-RNN, comparison NNs have been selected and trained to solve the MNIST handwritten digit recognition benchmark in its two variants: spatial (sequences of bitmap rows) and temporal (sequences of pen strokes).
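As a rough intuition for the deposit/removal dynamics described above, the sketch below models a single stigmergic trail whose accumulated mark evaporates over time and modulates how strongly new inputs are deposited, so past activity reinforces or weakens future activity. The class name `StigmergicMemory` and the `evaporation`/`gain` parameters are illustrative assumptions, not the authors' SM-RNN formulation.

```python
import numpy as np

class StigmergicMemory:
    """Minimal sketch of a stigmergic memory trail (illustrative only)."""

    def __init__(self, evaporation=0.1, gain=1.0):
        self.evaporation = evaporation  # fraction of the mark lost per step
        self.gain = gain                # how strongly the mark biases new deposits
        self.mark = 0.0                 # current trail intensity

    def step(self, stimulus):
        # Deposit is modulated by the existing mark (positive feedback),
        # then the trail evaporates toward zero (negative feedback).
        deposit = stimulus * (1.0 + self.gain * np.tanh(self.mark))
        self.mark = (1.0 - self.evaporation) * self.mark + deposit
        return self.mark


# Feeding a temporal sequence leaves a trail that encodes its history.
sm = StigmergicMemory(evaporation=0.2, gain=0.5)
trail = [sm.step(x) for x in [1.0, 1.0, 0.0, 0.0, 1.0]]
print(trail)
```

Running the same stimulus twice leaves a stronger trail than running it once, which is the reinforcement effect the abstract refers to.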
An extended architecture of recurrent neural networks that latches input information
2003
The proposed architecture of a binary artificial neural network is inspired by the structure and function of the major parts of the brain. Consequently, it is divided into an input module that resembles the sensory (stimuli) area and an output module similar to the motor (responses) area. These two modules are single-layer feed-forward neural networks with fixed weights that transform input patterns into a simple code and then convert this code back into output patterns. All possible input and output patterns are stored in the weights of these two modules. Each output pattern is produced by a single neuron of the output module asserted high; similarly, each input pattern drives a single input-module neuron to binary 1. Training this neural network is confined to connecting the one output neuron of the input module at binary 1, which represents a code for the input pattern, to the one input neuron of the output module that produces the desired associated output pattern. Fast and accurate association between input and output pattern pairs can thus be achieved. These connections can be implemented by a crossbar switch, which acts much like the thalamus in the brain, considered to be a relay center. The role of the crossbar switch is generalized to an electric field in the gap between the input and output modules, and it is postulated that this field may be considered a bridge between the brain and mental states. The input-module encoder is preceded by an extended input circuit, which ensures that the inverse of the input matrix exists and at the same time makes the derivation of this inverse, of any order, a simple task. This circuit mimics the processing function of the brain region that processes input signals before sending them to the sensory region. Applications of this neural network include logical relations, mathematical operations, use as a memory device, and pattern association. The number of input neurons can be increased (increased dimensionality) by multiplexing the inputs and using latches and multi-input AND gates. It is concluded that, by emulating the major structures of the brain, the performance of artificial neural networks can be enhanced greatly: their speed and memory capacity increase, and they can perform a wide range of applications.
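To make the encode/crossbar/decode pipeline concrete, here is a minimal sketch under assumed details: each stored input pattern activates exactly one code neuron, a crossbar matrix wires that code neuron to the output code neuron holding the associated pattern, and the decoder emits the stored output. The XOR-like pattern pairs are hypothetical examples, not taken from the paper.

```python
import numpy as np

# Input and output patterns are stored as fixed weights; "training" only
# sets one crossbar connection per associated pair.
input_patterns  = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # sensory codes
output_patterns = np.array([[0], [1], [1], [0]])               # e.g. XOR responses

def encode(x):
    """Fixed-weight encoder: activate the single neuron whose stored
    pattern matches the input exactly (one-hot code)."""
    return np.array([int(np.array_equal(x, p)) for p in input_patterns])

# Crossbar switch: one connection per input code neuron -> output code neuron.
# Here pattern i is simply associated with output row i.
crossbar = np.eye(len(input_patterns), dtype=int)

def decode(code):
    """Fixed-weight decoder: the single active neuron emits its stored output pattern."""
    return output_patterns[np.argmax(code)]

for x in input_patterns:
    print(x, "->", decode(crossbar @ encode(x)))
```

Changing the association only means rewiring the crossbar matrix; the encoder and decoder weights never change, which is the fast, training-free association the abstract describes.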
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition
IEEE transactions on biomedical circuits and systems, 2017
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed based on a compact digital neural core, which consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores together. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits ...
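The core idea of instantiating 64 neurons with a single physical neuron via time multiplexing can be sketched as follows. The leaky integrate-and-fire update and all parameters are placeholders, not the paper's NEF neuron model or FPGA design.

```python
import numpy as np

# Sketch of time-multiplexing: one shared neuron update rule serves 64
# virtual neurons by cycling through their stored states each tick.
N = 64
state = np.zeros(N)          # stored membrane potentials, one per virtual neuron
tau, v_th = 20.0, 1.0        # leak time constant and firing threshold (assumed)

def physical_neuron_update(v, i_in, dt=1.0):
    """Single shared LIF update applied to whichever virtual neuron is scheduled."""
    v = v + dt * (-v / tau + i_in)
    spike = v >= v_th
    if spike:
        v = 0.0              # reset after a spike
    return v, spike

def core_tick(inputs):
    """One pass of the multiplexer: update all 64 virtual neurons in sequence."""
    spikes = np.zeros(N, dtype=bool)
    for idx in range(N):     # sequential reuse of the one update rule
        state[idx], spikes[idx] = physical_neuron_update(state[idx], inputs[idx])
    return spikes

print(core_tick(np.random.rand(N)).sum(), "virtual neurons spiked this tick")
```

Scaling the system up, as the paper does, amounts to replicating such cores and routing their spikes, rather than replicating physical neurons.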
Spiking Neuron Computation With the Time Machine
IEEE Transactions on Biomedical Circuits and Systems, 2012
The Time Machine (TM) is a spike-based computation architecture that represents synaptic weights in time. This choice of weight representation allows the use of virtual synapses, providing an excellent tradeoff in terms of flexibility, arbitrary weight connections, and hardware usage compared to dedicated synapse architectures. The TM supports an arbitrary number of synapses and is limited only by the number of simultaneously active synapses per neuron. SpikeSim, a behavioral hardware simulator for the architecture, is described along with example algorithms for edge detection and object recognition. The TM can implement traditional spike-based processing as well as recently developed time-mode operations in which step functions serve as the input and output of each neuron block. A custom hybrid digital/analog implementation and a fully digital realization of the TM are discussed. An analog chip with 32 neurons, 1024 synapses, and an address event representation (AER) block has been fabricated in 0.5 μm technology. A fully digital field-programmable gate array (FPGA)-based implementation of the architecture has 6,144 neurons and 100,352 simultaneously active synapses. Both implementations utilize a digital controller for routing spikes that can process up to 34 million synapses per second.
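A toy model of "weights represented in time" might look like the following: each virtual synapse is only a programmed delay, and a spike's effective weight decays with how late it arrives inside a computation cycle. The cycle length, decay rule, and network here are illustrative assumptions, not the TM's actual circuit.

```python
import heapq

CYCLE = 10.0  # length of one computation cycle (arbitrary units)

def effective_weight(arrival_time):
    """Earlier arrivals count more; later arrivals count less."""
    return max(0.0, 1.0 - arrival_time / CYCLE)

def run_cycle(input_spikes, synapses, threshold=1.0):
    """input_spikes: {source: spike_time}; synapses: list of (source, target, delay)."""
    events = []
    for src, tgt, delay in synapses:           # only active synapses are queued
        if src in input_spikes:
            heapq.heappush(events, (input_spikes[src] + delay, tgt))
    potential = {}
    while events:                              # deliver spikes in time order
        t, tgt = heapq.heappop(events)
        potential[tgt] = potential.get(tgt, 0.0) + effective_weight(t)
    return {tgt: v >= threshold for tgt, v in potential.items()}

synapses = [("a", "n1", 1.0), ("b", "n1", 6.0)]  # the delay encodes the weight
print(run_cycle({"a": 0.0, "b": 0.0}, synapses))
```

Because a synapse is just an entry in the event queue, the only hardware bound is on how many synapses are simultaneously active, which mirrors the tradeoff described in the abstract.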
Neural Networks and Continuous Time
2016
The fields of neural computation and artificial neural networks have developed much in the last decades. Most work in these fields focuses on implementing and/or learning discrete functions or behavior. However, technical, physical, and also cognitive processes evolve continuously in time. This cannot be described directly with standard artificial neural network architectures such as multi-layer feed-forward perceptrons. Therefore, in this paper, we will argue that neural networks modeling continuous time are needed explicitly for this purpose, because they make it possible to synthesize and analyze continuous and possibly periodic processes in time (e.g. for robot behavior), in addition to computing discrete classification functions (e.g. for logical reasoning). We will relate possible neural network architectures to (hybrid) automata models that allow continuous processes to be expressed.
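A standard way to model continuous time in a neural network is a continuous-time RNN whose state obeys an ODE; the sketch below integrates such dynamics with Euler steps. The weights, time constants, and step size are arbitrary placeholders used only to illustrate the kind of model the paper argues for.

```python
import numpy as np

def ctrnn_step(y, W, tau, inputs, dt=0.01):
    """One Euler step of dy/dt = (-y + W @ tanh(y) + inputs) / tau."""
    dydt = (-y + W @ np.tanh(y) + inputs) / tau
    return y + dt * dydt

rng = np.random.default_rng(0)
n = 3
W = rng.normal(scale=1.5, size=(n, n))   # recurrent weights (placeholder)
tau = np.ones(n)                          # time constants (placeholder)
y = np.zeros(n)

trajectory = []
for _ in range(1000):                     # simulate 10 time units of dynamics
    y = ctrnn_step(y, W, tau, inputs=np.array([0.5, 0.0, 0.0]))
    trajectory.append(y.copy())
print(trajectory[-1])
```

Unlike a feed-forward perceptron, the trajectory itself (including possible periodic behavior) is the output of interest, which is what makes such models suitable for continuously evolving processes.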
Understanding the computation of time using neural network models
Proceedings of the National Academy of Sciences, 2020
To maximize future rewards in this ever-changing world, animals must be able to discover the temporal structure of stimuli and then anticipate or act correctly at the right time. How do animals perceive, maintain, and use time intervals ranging from hundreds of milliseconds to multiple seconds in working memory? How is temporal information processed concurrently with spatial information and decision making? Why are there strong neuronal temporal signals in tasks in which temporal information is not required? A systematic understanding of the underlying neural mechanisms is still lacking. Here, we addressed these problems using supervised training of recurrent neural network models. We revealed that neural networks perceive elapsed time through state evolution along a stereotypical trajectory, maintain time intervals in working memory through the monotonic increase or decrease of the firing rates of interval-tuned neurons, and compare or produce time intervals by scaling the speed of state evolution. ...
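The "scaling state evolution speed" mechanism can be illustrated with a toy model: the state follows the same trajectory toward a threshold, and a gain input only changes how fast that trajectory is traversed, so the produced interval scales inversely with the gain. This is a hand-written caricature, not the paper's trained recurrent network.

```python
def produce_interval(speed_gain, threshold=1.0, dt=0.001):
    """Integrate a ramping state; the interval ends when it hits threshold."""
    state, t = 0.0, 0.0
    while state < threshold:
        state += speed_gain * dt   # same trajectory, scaled traversal speed
        t += dt
    return t

for gain in (0.5, 1.0, 2.0):
    print(f"gain={gain}: produced interval ~ {produce_interval(gain):.2f} s")
```

Doubling the gain halves the produced interval while the underlying trajectory stays the same, which is the essence of the speed-scaling account in the abstract.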
Analysis of Short Term Memories for Neural Networks
Short-term memory is indispensable for processing time-varying information with artificial neural networks. In this paper, a model for linear memories is presented, and ways to include memories in connectionist topologies are discussed. A comparison is drawn among different memory types, indicating the salient characteristic of each memory model.
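Two classic linear short-term memory structures from this literature are the tapped delay line and the gamma memory (a cascade of leaky integrators); whether these are exactly the types compared in the paper is an assumption, but the sketch below shows how compactly both can be written. Function names and the decay parameter `mu` are illustrative choices, not notation from the paper.

```python
import numpy as np

def delay_line(signal, taps):
    """Tapped delay line: memory tap k holds the input delayed by k steps."""
    x = np.asarray(signal, dtype=float)
    return np.stack([np.concatenate([np.zeros(k), x[:len(x) - k]])
                     for k in range(taps)])

def gamma_memory(signal, taps, mu=0.5):
    """Gamma memory: g_0 = input, g_k[t] = (1-mu)*g_k[t-1] + mu*g_{k-1}[t-1]."""
    x = np.asarray(signal, dtype=float)
    g = np.zeros((taps + 1, len(x)))
    g[0] = x
    for t in range(1, len(x)):
        for k in range(1, taps + 1):
            g[k, t] = (1 - mu) * g[k, t - 1] + mu * g[k - 1, t - 1]
    return g[1:]   # the memory taps, excluding the raw input stage

sig = np.sin(np.linspace(0, 2 * np.pi, 20))
print(delay_line(sig, 3).shape, gamma_memory(sig, 3).shape)
```

The delay line stores an exact but short history, while the gamma memory trades temporal resolution for a longer, smoother reach into the past; that resolution-versus-depth tradeoff is the kind of salient characteristic such comparisons highlight.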