Computing with Spiking Neuron Networks: A Review

A Short Survey of the Development and Applications of Spiking Neural Networks of High Biological Plausibility

Buletinul Institutului Politehnic din Iaşi, 2022

Spiking neural networks (SNNs) are inspired by natural computing, modelling with high accuracy the interactions and processes between the synapses of neurons, with a focus on low response time and energy efficiency. This novel paradigm of event-based processing opens new opportunities for discovering applications and developing efficient learning methods that should highlight the advantages of SNNs, such as large memory capacity and fast adaptation, while preserving the ease of use and portability of conventional computing architectures. In this paper, we briefly review the developments of the past decades in the field of SNNs. We start with a brief history of SNNs and summarize the most common models of spiking neurons and methods for implementing synaptic plasticity. We also classify SNNs according to the implemented learning rules and network topology. We present the computational advantages, liabilities, and applications suitable for SNNs in terms of energy efficiency and response time. In addition, we briefly sweep through the existing platforms and simulation frameworks for...

Spiking Neural Networks for Computational Intelligence: An Overview

Big Data and Cognitive Computing

Deep neural networks with rate-based neurons have exhibited tremendous progress in the last decade. However, the same level of progress has not been observed in research on spiking neural networks (SNNs), despite their capability to handle temporal data, their energy efficiency, and their low latency. This could be because the benchmarking techniques for SNNs are based on the methods used for evaluating deep neural networks, which do not provide a clear evaluation of the capabilities of SNNs. In particular, benchmarking SNN approaches with regard to energy efficiency and latency requires realization in suitable hardware, which imposes additional temporal and resource constraints on ongoing projects. This review aims to provide an overview of the current real-world applications of SNNs and identifies steps to accelerate research involving SNNs in the future.

Introduction to spiking neural networks: Information processing, learning and applications

Acta neurobiologiae experimentalis, 2011

The concept that neural information is encoded in the firing rate of neurons has been the dominant paradigm in neurobiology for many years. This paradigm has also been adopted by the theory of artificial neural networks. Recent physiological experiments demonstrate, however, that in many parts of the nervous system, neural code is founded on the timing of individual action potentials. This finding has given rise to the emergence of a new class of neural models, called spiking neural networks. In this paper we summarize basic properties of spiking neurons and spiking networks. Our focus is, specifically, on models of spike-based information coding, synaptic plasticity and learning. We also survey real-life applications of spiking models. The paper is meant to be an introduction to spiking neural networks for scientists from various disciplines interested in spike-based neural processing.
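The contrast this abstract draws between rate coding and spike-timing codes can be illustrated with two toy encoders (an illustrative sketch, not taken from the paper; the function names and parameters are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_code(x, n_steps=100, max_rate=0.5):
    """Rate coding: a value x in [0, 1] sets the per-step firing probability,
    so information lives in the spike count over a window."""
    return (rng.random(n_steps) < x * max_rate).astype(int)

def latency_code(x, t_max=100.0):
    """Temporal coding: information lives in the timing of a single spike;
    stronger inputs fire earlier (time to first spike)."""
    return t_max * (1.0 - x)

spikes = rate_code(0.8)
print("rate-coded spike count:", spikes.sum())
print("latency for x=0.8:", latency_code(0.8))
```

The rate code needs many time steps to convey one value, while the latency code conveys it with a single, precisely timed event, which is the efficiency argument made for spike-timing codes.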

Learning Methods of Recurrent Spiking Neural Networks

Transactions of the Institute of Systems, Control and Information Engineers, 2000

In an artificial spiking neural network (SNN), information processing and transmission are carried out by spike trains, in a manner similar to generic biological neurons. Recently it has been reported that SNNs are computationally more powerful than conventional neural networks. Yet there are no well-defined, efficient learning methods, owing to their intricately discontinuous and nonlinear mechanisms. In this paper, we consider a recurrent SNN constructed from integrate-and-fire spiking neurons. First, we propose a learning method such that the SNN produces desired transient responses (spike-train outputs) by changing the synaptic weights. Then, by including periodic state conditions, we propose a learning method such that the SNN produces desired oscillatory responses (limit-cycle spike trains) by changing both the synaptic weights and the initial conditions. Simulation examples are also provided to verify the efficiency and applicability of the proposed algorithm.
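The integrate-and-fire neuron this abstract builds on can be sketched in a few lines of Python (a minimal sketch with assumed parameter values, not the paper's model; the recurrent coupling and learning rules are omitted):

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Euler integration of a leaky integrate-and-fire neuron:
    dv/dt = (-(v - v_rest) + I) / tau; on crossing threshold the
    neuron emits a spike and the potential resets."""
    v = v_rest
    spike_times = []
    for t, i_t in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_t) / tau
        if v >= v_thresh:
            spike_times.append(t)
            v = v_reset
    return spike_times

# A constant supra-threshold current yields a regular spike train.
times = simulate_lif(np.full(200, 1.5))
print(times[:5])
```

Because the reset returns the neuron to the same state after every spike, a constant input produces equal inter-spike intervals, which is the kind of transient/oscillatory spike-train output the learning methods above shape.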

A Neuroscience-Inspired Approach to Training Spiking Neural Networks

2020

Spiking neural networks (SNNs) have recently gained a lot of attention for use in low-power neuromorphic and edge computing. On their own, SNNs are difficult to train, owing to their lack of a differentiable activation function and their inherent tendency towards chaotic behavior. This work takes a strictly neuroscience-inspired approach to designing and training SNNs. We demonstrate that neuromodulated spike-timing-dependent plasticity (STDP) can be used to create a variety of different learning paradigms, including unsupervised learning, semi-supervised learning, and reinforcement learning. In order to tackle the highly dynamic and potentially chaotic spiking behavior of SNNs both during training and testing, we discuss a variety of neuroscience-inspired homeostatic mechanisms for keeping the network's activity in a healthy range. All of these concepts are brought together in the development of an SNN model that is trained and tested on the MNIST handwritten digits d...
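The STDP rule underlying this work can be sketched as a pair-based weight update (a textbook-style sketch with assumed constants, not the paper's neuromodulated variant): the weight change depends only on the time difference between a pre- and postsynaptic spike.

```python
import numpy as np

def stdp_dw(t_pre, t_post, a_plus=0.01, a_minus=0.012,
            tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP: potentiate when the presynaptic spike precedes
    the postsynaptic one (causal pairing), depress otherwise, with an
    exponentially decaying dependence on the spike-time difference."""
    dt = t_post - t_pre
    if dt > 0:   # pre before post -> long-term potentiation
        return a_plus * np.exp(-dt / tau_plus)
    else:        # post before (or with) pre -> long-term depression
        return -a_minus * np.exp(dt / tau_minus)

print(stdp_dw(10.0, 15.0))   # positive: potentiation
print(stdp_dw(15.0, 10.0))   # negative: depression
```

Neuromodulated variants, as in the work above, gate or scale this update with a third signal (e.g., a reward term), which is what turns the same local rule into unsupervised, semi-supervised, or reinforcement learning.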

Practical applications of spiking neural network in information processing and learning

Historically, much of the research effort to understand the neural mechanisms involved in information processing in the brain has been spent on neuronal circuits and synaptic organization, largely neglecting the electrophysiological properties of the neurons. In this paper we present a practical application that uses spiking neurons and temporal coding to process information, building a spiking neural network (SNN) to perform a clustering task. The input is encoded by means of receptive fields. Delay and weight adaptation uses a multiple-synapse approach, dividing each synapse into sub-synapses, each with a different fixed delay. Delay selection is then performed by a Hebbian reinforcement learning algorithm, also keeping resemblance to biological neural networks.
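The receptive-field encoding this abstract mentions is commonly realized with overlapping Gaussian tuning curves whose responses are converted to spike times. A sketch in that spirit (the function name, neuron count, and width are assumptions, not the paper's exact parameters):

```python
import numpy as np

def receptive_field_encode(x, n_neurons=8, t_max=10.0, sigma=None):
    """Encode a scalar x in [0, 1] with overlapping Gaussian receptive
    fields: each neuron's response (0..1) is mapped to a spike time,
    with strong responses firing early (population temporal coding)."""
    centers = np.linspace(0.0, 1.0, n_neurons)
    if sigma is None:
        sigma = 1.0 / (n_neurons - 1)   # neighboring fields overlap
    responses = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    return t_max * (1.0 - responses)    # response 1 -> fires at t = 0

print(np.round(receptive_field_encode(0.5), 2))
```

Each scalar input thus becomes a small population of precisely timed spikes; the multi-delay sub-synapses described above then let downstream neurons learn which of those timings to respond to.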

Supervised learning with spiking neural networks

IJCNN'01. International Joint Conference on Neural Networks. Proceedings (Cat. No.01CH37222)

In this paper we derive a supervised learning algorithm for a spiking neural network that encodes information in the timing of spike trains. The algorithm is similar to the classical error backpropagation algorithm for sigmoidal neural networks, but the learning parameter is adaptively changed. The algorithm is applied to a complex nonlinear classification problem, and the results show that the spiking neural network is capable of performing nonlinearly separable classification tasks. Several issues concerning spiking neural networks are discussed.

Supervised Learning Based on Temporal Coding in Spiking Neural Networks

IEEE Transactions on Neural Networks and Learning Systems, 2017

Gradient descent training techniques are remarkably successful in training analog-valued artificial neural networks (ANNs). Such training techniques, however, do not transfer easily to spiking networks due to the hard nonlinearity of spike generation and the discrete nature of spike communication. We show that in a feedforward spiking network that uses a temporal coding scheme, where information is encoded in spike times instead of spike rates, the network input-output relation is differentiable almost everywhere. Moreover, this relation is piecewise linear after a transformation of variables. Methods for training ANNs thus carry over directly to the training of such spiking networks, as we show when training on the permutation-invariant MNIST task. In contrast to rate-based spiking networks that are often used to approximate the behavior of ANNs, the networks we present spike much more sparsely and their behavior cannot be directly approximated by conventional ANNs. Our results highlight a new approach for controlling the behavior of spiking networks with realistic temporal dynamics, opening up the potential for using these networks to process spike patterns with complex temporal information.
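The "piecewise linear after a transformation of variables" property can be sketched for a non-leaky integrate-and-fire neuron with exponential synaptic kernels: in the transformed variable z = exp(t), the output spike time becomes a ratio of linear expressions in the input z values. The closed form below is a sketch of that style of relation under assumed unit time constants and the assumption that all causal inputs arrive before the output spike; it is not lifted verbatim from the paper.

```python
import numpy as np

def output_spike_time(in_times, weights):
    """Spike time of a non-leaky integrate-and-fire neuron with
    exponentially decaying synaptic currents, computed in the
    z = exp(t) domain, where the input-output relation is linear:
        z_out = (sum_i w_i * z_i) / (sum_i w_i - 1),
    valid when sum(weights) > 1 (the neuron actually reaches threshold)."""
    z_in = np.exp(np.asarray(in_times, dtype=float))
    w = np.asarray(weights, dtype=float)
    z_out = np.dot(w, z_in) / (w.sum() - 1.0)
    return np.log(z_out)

t = output_spike_time([0.0, 0.1], [0.8, 0.7])
print(round(t, 4))
```

Because z_out is linear in the z_in for a fixed set of causal inputs, standard backpropagation applies directly in the z domain, which is the key point the abstract makes.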

Gradient Descent for Spiking Neural Networks

Much of the study of neural computation is based on network models of static neurons that produce analog output, despite the fact that information processing in the brain is predominantly carried out by dynamic neurons that produce discrete pulses called spikes. Research in spike-based computation has been impeded by the lack of efficient supervised learning algorithms for spiking networks. Here, we present a gradient descent method for optimizing spiking network models by introducing a differentiable formulation of spiking networks and deriving the exact gradient calculation. For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (≈ millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory XOR task over an extended duration (≈ second). The results show that our method indeed optimizes the spiking network dynamics on the time scale of individual spikes as well as on behavioral time scales. In conclusion, our result offers a general-purpose supervised learning algorithm for spiking neural networks, advancing further investigations into spike-based computation.
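This work derives exact gradients for a differentiable reformulation of the spiking dynamics. A related and widely used idea, shown here instead of the paper's exact method, is the surrogate gradient: keep the hard spike nonlinearity in the forward pass but replace its (almost-everywhere-zero) derivative with a smooth stand-in during backpropagation. A minimal sketch:

```python
import numpy as np

def spike(v, v_thresh=1.0):
    """Forward pass: the non-differentiable spike function (Heaviside
    step on the membrane potential)."""
    return (v >= v_thresh).astype(float)

def surrogate_grad(v, v_thresh=1.0, beta=10.0):
    """Backward pass: a smooth stand-in for d(spike)/dv, here the
    derivative of a steep sigmoid centered on the threshold, so the
    gradient is largest for neurons near firing."""
    s = 1.0 / (1.0 + np.exp(-beta * (v - v_thresh)))
    return beta * s * (1.0 - s)

v = np.array([0.5, 1.0, 1.5])
print(spike(v))            # [0. 1. 1.]
print(surrogate_grad(v))   # peaks at the threshold
```

The steepness parameter beta trades gradient fidelity against vanishing gradients; exact-gradient formulations like the one above avoid this approximation at the cost of a smoothed spiking model.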

Spiking Neural Networks and Their Applications: A Review

Brain Sciences

The past decade has witnessed the great success of deep neural networks in various domains. However, deep neural networks are very resource-intensive in terms of energy consumption, data requirements, and high computational costs. With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiencies are especially important because of the need for real-time responses and the limited energy supply. A promising solution to these previously infeasible applications has recently been given by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out the computation. Due to their functional similarity to the biological neural netwo...