The TENNLab Suite of LIDAR-Based Control Applications for Recurrent, Spiking, Neuromorphic Systems
Related papers
A neuromorphic controller for a robotic vehicle equipped with a dynamic vision sensor
Neuromorphic electronic systems exhibit advantageous characteristics, in terms of low energy consumption and low response latency, which can be useful in robotic applications that require compact and low-power embedded computing resources. However, these neuromorphic circuits still face significant limitations that make their usage challenging: these include low precision, variability of components, sensitivity to noise and temperature drifts, as well as the currently limited number of neurons and synapses that are typically emulated on a single chip. In this paper, we show how it is possible to achieve functional robot control strategies using a mixed-signal analog/digital neuromorphic processor interfaced to a mobile robotic platform equipped with an event-based dynamic vision sensor. We provide a proof-of-concept implementation of obstacle avoidance and target acquisition using biologically plausible spiking neural networks directly emulated by the neuromorphic hardware. To our knowledge, this is the first demonstration of a working spike-based neuromorphic robotic controller in this type of hardware, which illustrates the feasibility, as well as the limitations, of this approach.
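A sensor-to-motor avoidance loop of this kind can be reduced to a Braitenberg-style sketch. The Python below is purely illustrative (it is not the paper's actual spiking network, and all names and parameters are hypothetical): event counts from the left and right halves of the dynamic vision sensor's field of view excite the wheel on the same side, so the robot turns away from whichever side generates more events.

```python
def avoidance_controller(left_events, right_events, base_speed=0.5, gain=0.01):
    """Braitenberg-style obstacle avoidance (illustrative sketch only).

    DVS event counts from the left and right halves of the visual field
    excite the wheel on the same side, so the wheel nearer an obstacle
    speeds up and the robot turns away from it.
    """
    left_speed = base_speed + gain * left_events    # left events -> left wheel
    right_speed = base_speed + gain * right_events  # right events -> right wheel
    return left_speed, right_speed

# An obstacle on the left generates more left-field events,
# so the left wheel runs faster and the robot turns right, away from it.
l, r = avoidance_controller(left_events=50, right_events=5)
```

Uncrossed excitatory connections like these produce avoidance; crossing them instead would produce target-approach behavior, which is one way to view the paper's pairing of obstacle avoidance with target acquisition.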
SpikiLi: A Spiking Simulation of LiDAR based Real-time Object Detection for Autonomous Driving
2022 8th International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP)
Spiking Neural Networks are a recent neural-network design approach that promises tremendous improvements in power efficiency, computation efficiency, and processing latency. They do so by using asynchronous spike-based data flow, event-based signal generation and processing, and by modifying the neuron model to closely resemble biological neurons. While some initial works have shown significant evidence of applicability to common deep learning tasks, their application to complex real-world tasks has remained limited. In this work, we first illustrate the applicability of spiking neural networks to a complex deep learning task, namely LiDAR-based 3D object detection for automated driving. Secondly, we give a step-by-step demonstration of simulating spiking behavior using a pre-trained convolutional neural network. We closely model essential aspects of spiking neural networks in simulation and achieve equivalent run-time and accuracy on a GPU. When the model is realized on neuromorphic hardware, we expect significantly improved power efficiency.
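One common way to simulate spiking behavior from a pre-trained conventional network is rate coding: each unit's (non-negative) activation is reinterpreted as a firing probability, and averaging the resulting stochastic spikes over many time steps recovers the original activation. The sketch below illustrates that idea in Python; it is a generic rate-coding demonstration under assumed normalization to [0, 1], not the paper's specific pipeline.

```python
import numpy as np

def simulate_spiking_relu(activations, n_steps=100, rng=None):
    """Approximate ReLU activations with stochastic spike trains.

    Each unit fires on each time step with probability equal to its
    clipped activation; the empirical firing rate over n_steps
    converges to the original activation value.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    rates = np.clip(activations, 0.0, 1.0)   # ReLU + assumed [0, 1] normalization
    spikes = rng.random((n_steps,) + rates.shape) < rates  # Bernoulli spikes
    return spikes.mean(axis=0)               # empirical firing rate per unit

acts = np.array([-0.5, 0.2, 0.8])
est = simulate_spiking_relu(acts, n_steps=5000)
# Negative input maps to a zero rate; positive inputs are recovered
# approximately by the averaged spike counts.
```

The trade-off this exposes is latency versus precision: fewer time steps mean faster inference but noisier rate estimates, which is one reason spike-based realizations of pre-trained networks need many steps to match the source network's accuracy.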
The Backpropagation Algorithm Implemented on Spiking Neuromorphic Hardware
2021
The capabilities of natural neural systems have inspired new generations of machine learning algorithms as well as neuromorphic very large-scale integrated (VLSI) circuits capable of fast, low-power information processing. However, it has been argued that most modern machine learning algorithms are not neurophysiologically plausible. In particular, the workhorse of modern deep learning, the backpropagation algorithm, has proven difficult to translate to neuromorphic hardware. In this study, we present a neuromorphic, spiking backpropagation algorithm based on synfire-gated dynamical information coordination and processing, implemented on Intel's Loihi neuromorphic research processor. We demonstrate a proof-of-principle three-layer circuit that learns to classify digits from the MNIST dataset. To our knowledge, this is the first work to show a Spiking Neural Network (SNN) implementation of the backpropagation algorithm that is fully on-chip, without a computer in the loop. It is ...
GRANT: Ground-Roaming Autonomous Neuromorphic Targeter
2020
In this work we describe the design, implementation, and testing of the first neuromorphic robot capable of obstacle avoidance, grid coverage, and targeting controlled by the second-generation Dynamic Adaptive Neural Network Array (DANNA2) digital spiking neuromorphic processor. The simplicity of the DANNA2 processor, along with the TENNLab hardware/software co-design framework, allows for compact spiking networks that can run efficiently on a small, resource-constrained platform such as a Xilinx Artix-7 field-programmable gate array. Additionally, we present the dynamic reconfigurability of DANNA2 arrays as a method of realizing complex, multi-objective tasks on hardware that is restricted to relatively small networks.
A Long Short-Term Memory for AI Applications in Spike-based Neuromorphic Hardware
Nature Machine Intelligence
In spite of intensive efforts, it has remained an open problem to what extent current Artificial Intelligence (AI) methods that employ Deep Neural Networks (DNNs) can be implemented more energy-efficiently on spike-based neuromorphic hardware. This holds in particular for AI methods that solve sequence processing tasks, a primary application target for spike-based neuromorphic hardware. One difficulty is that DNNs for such tasks typically employ Long Short-Term Memory (LSTM) units, and an efficient emulation of these units in spike-based hardware has been missing. We present a biologically inspired solution that solves this problem. This solution enables us to implement a major class of DNNs for sequence processing tasks, such as time series classification and question answering, with substantial energy savings on neuromorphic hardware. In fact, the Relational Network for reasoning about relations between objects that we use for question answering is the first example of a large DNN that carries out a sequence processing task with substantial energy savings on neuromorphic hardware. Energy consumption is a major impediment to more widespread applications of new AI methods that use DNNs, especially in edge devices. Spike-based neuromorphic hardware is one direction that promises to alleviate this problem. This research direction is partially motivated by the way brains run even more complex and larger neural networks than the DNNs used in current AI, with a total energy consumption of just 20 W: neurons in the brain emit spikes only rarely, and it is mainly spikes that trigger energy consumption in neurons and synapses. But it has remained an open problem how the DNNs needed for modern AI solutions could be implemented in neuromorphic hardware in such a sparse-firing mode.
Another open problem is how the LSTM units of such DNNs, which are needed to provide a working memory for sequence processing tasks, could be implemented in spike-based neuromorphic hardware. We present a biologically inspired solution to the second problem that simultaneously provides a step towards solving the first, since it reduces the firing activity of neurons that hold working-memory content. We combine this method with a brain-inspired technique called membrane voltage regularization for enforcing sparse firing activity during the training of the DNN. We have tested the impact of these two innovations on computational performance and energy consumption for two benchmark tasks in an implementation on a representative spike-based chip: Intel's neuromorphic research chip Loihi [Davies et al., 2018]. We find significant reductions in the energy-delay product (EDP). In contrast to power, EDP accounts for the true energy and time cost per task/workload/computation. Simultaneously, these implementations demonstrate that two hallmarks of cognitive computation, in both brains and machine intelligence, working memory and reasoning about relations between concepts or objects, can in fact be implemented more efficiently in spike-based neuromorphic hardware than on GPUs, the standard computing hardware for implementing DNNs.
Implementing a long short-term memory in spike-based neuromorphic hardware
Working memory is maintained in an LSTM unit in a special memory cell, to which read- and write-access is gated by trained neurons with sigmoidal activation functions [Hochreiter and Schmidhuber, 1997]. Such an LSTM unit is difficult to realize efficiently in spike-based hardware. However, it turns out that by simply adding a standard feature of some biological neurons, slow after-hyperpolarizing (AHP) currents, a spiking neural network (SNN) acquires working memory capabilities similar to those of LSTM units over the time scale of the AHP currents.
These AHP currents lower the membrane potential of a neuron after each of its spikes (see Fig. 1). Furthermore, these AHP currents can easily be implemented on Loihi with the desirable side benefit of reducing firing activity, and therefore
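The AHP mechanism described above can be sketched as a small modification of a leaky integrate-and-fire neuron. The Python below is an illustrative discrete-time model, not Loihi's actual implementation, and all parameter values are assumptions chosen for the demonstration: each spike increments a slowly decaying AHP current that is subtracted from the membrane drive, so recent firing both suppresses future firing and leaves a trace of activity.

```python
import numpy as np

def lif_with_ahp(inputs, v_th=1.0, tau_m=20.0, tau_ahp=200.0, w_ahp=0.3):
    """Leaky integrate-and-fire neuron with a slow after-hyperpolarizing
    (AHP) current. Each output spike increments the AHP current, which
    decays slowly and is subtracted from the membrane drive, producing
    spike-frequency adaptation and sparser firing.
    """
    v, ahp, spikes = 0.0, 0.0, []
    decay_m, decay_ahp = np.exp(-1.0 / tau_m), np.exp(-1.0 / tau_ahp)
    for i_t in inputs:
        ahp *= decay_ahp                 # slow decay of the AHP current
        v = v * decay_m + i_t - ahp      # leaky integration minus AHP
        if v >= v_th:
            spikes.append(1)
            v = 0.0                      # reset membrane after a spike
            ahp += w_ahp                 # spike strengthens the AHP current
        else:
            spikes.append(0)
    return spikes

drive = [0.3] * 300                           # constant input drive
s_ahp = lif_with_ahp(drive)                   # adapting neuron
s_no_ahp = lif_with_ahp(drive, w_ahp=0.0)     # same neuron without AHP
```

Because the AHP current persists over hundreds of time steps, its magnitude encodes how recently and how strongly the neuron fired, which is exactly the kind of slowly decaying state that can stand in for an LSTM memory cell while also reducing total spike counts.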
2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020
Energy-efficient mapless navigation is crucial for mobile robots as they explore unknown environments with limited on-board resources. Although recent deep reinforcement learning (DRL) approaches have been successfully applied to navigation, their high energy consumption limits their use in many robotic applications. Here, we propose a neuromorphic approach that combines the energy efficiency of spiking neural networks with the optimality of DRL to learn control policies for mapless navigation. Our hybrid framework, Spiking Deep Deterministic Policy Gradient (SDDPG), consists of a spiking actor network (SAN) and a deep critic network, where the two networks were trained jointly using gradient descent. The trained SAN was deployed on Intel's Loihi neuromorphic processor. The co-learning enabled synergistic information exchange between the two networks, allowing them to overcome each other's limitations through shared representation learning. When validated on both simulated and real-world complex environments, our method on Loihi not only consumed 75 times less energy per inference compared to DDPG on a Jetson TX2, but also achieved a success rate of navigating to the goal that was 1% to 4.2% higher, depending on the forward-propagation timestep size. These results reinforce our ongoing effort to design brain-inspired algorithms for controlling autonomous robots with neuromorphic hardware.
Neuromorphic technologies for defence and security
2020
Despite the highly promising advances in Machine Learning (ML) and Deep Learning (DL) in recent years, DL requires significant hardware acceleration to be effective, as it is rather computationally expensive. Moreover, miniaturisation of electronic devices requires small form-factor processing units with a reduced SWaP (Size, Weight and Power) profile. Therefore, a completely new processing paradigm is needed to address both issues. In this context, the concept of neuromorphic (NM) engineering provides an attractive alternative, seen as the analog/digital implementation of biologically inspired neural networks. NM systems propagate spikes as a means of processing data, with the information being encoded in the timing and rate of spikes generated by each neuron of a so-called spiking neural network (SNN). Based on this, the key advantages of SNNs are lower computational requirements, more efficient and faster processing, and much lower power consumption. This paper reports on the cur...
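The two encoding schemes mentioned above, rate and timing, can be illustrated with a minimal sketch. The Python below is a generic demonstration under the assumption that values are normalized to [0, 1]; it is not drawn from the paper itself. Rate coding maps a value to the average firing frequency of a stochastic spike train, while latency (time-to-first-spike) coding maps larger values to earlier spikes.

```python
import numpy as np

def rate_encode(x, n_steps=100, rng=None):
    """Rate coding: a value in [0, 1] becomes a Bernoulli spike train
    whose mean firing rate equals the value."""
    rng = np.random.default_rng(1) if rng is None else rng
    return (rng.random(n_steps) < x).astype(int)

def latency_encode(x, n_steps=100):
    """Latency coding: larger values spike earlier. Returns a train with
    a single spike at step round((1 - x) * (n_steps - 1))."""
    train = np.zeros(n_steps, dtype=int)
    train[round((1.0 - x) * (n_steps - 1))] = 1
    return train

train_r = rate_encode(0.7, n_steps=2000)   # mean rate approximates 0.7
train_l = latency_encode(0.9, n_steps=100) # single early spike for a large value
```

Latency coding carries the same information in a single spike that rate coding spreads over many, which is where the power-consumption advantage claimed for SNNs largely comes from: energy is spent mainly when spikes occur.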
2016
(2) Configurations: What should we expect from reconfigurable devices? Traditionally, devices have for the most part been static (with gradual evolutionary modifications to architecture and materials, primarily based on CMOS), and software development, dependent on the system architecture, instruction set, and software stack, has not changed significantly. A reconfigurable device-enabled circuit architecture requires a tight connection between the hardware and software, blurring their boundary. Research Challenge: Computational models need to be able to run at extreme scales (exaflops and beyond) and leverage the performance of fully reconfigurable hardware that may be analog in nature, with highly concurrent computing operations, while accounting for nonlinear behavior, energy, and physical time-dependent plasticity.
(3) Learning Models: How is the system trained/programmed? Computing in general will need to move away from the stored-program model to a more dynamic, event-driven learning model, which requires a broad understanding of the theory behind learning and how best to apply it to a neuromorphic system. Research Challenge: Understand and apply advanced and emerging theoretical concepts from neuroscience, biology, physics, engineering, and artificial intelligence, and their overall relationship to learning, understanding, and discovery, to build models that will accelerate scientific progress.
(4) Development System: What application development environment is needed? A neuromorphic system must be easy to teach and easy to apply to a broad set of tasks, and there should be a suitable research community and investment to do so. Research Challenge: Develop system software, algorithms, and applications to program/teach/train.
(5) Applications: How can we best study and demonstrate application suitability? The types of applications best suited for neuromorphic systems are yet to be well defined, but complex spatio-temporal problems that are not effectively addressed by traditional computing are a potentially large class. Research Challenge: Connect theoretical formalisms, architectures, and development systems with application developers in areas that are poorly served by existing computing technologies.
Frontiers in Neuroscience, 2015
Implementing compact, low-power artificial neural processing systems with real-time on-line learning abilities is still an open challenge. In this paper we present a full-custom mixed-signal VLSI device with neuromorphic learning circuits that emulate the biophysics of real spiking neurons and dynamic synapses, for exploring the properties of computational neuroscience models and for building brain-inspired computing systems. The proposed architecture allows the on-chip configuration of a wide range of network connectivities, including recurrent and deep networks, with short-term and long-term plasticity. The device comprises 128K analog synapse circuits and 256 neuron circuits with biologically plausible dynamics and bi-stable spike-based plasticity mechanisms that endow it with on-line learning abilities. In addition to the analog circuits, the device also comprises asynchronous digital logic circuits for setting different synapse and neuron properties as well as different network configurations. This prototype device, fabricated in a 180 nm 1P6M CMOS process, occupies an area of 51.4 mm², and consumes approximately 4 mW for typical experiments, for example those involving attractor networks. Here we describe the details of the overall architecture and of the individual circuits, and present experimental results that showcase its potential. By supporting a wide range of cortical-like computational modules comprising plasticity mechanisms, this device will enable the realization of intelligent autonomous systems with on-line learning capabilities.
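A bi-stable spike-based synapse of the kind mentioned above can be sketched as an internal analog variable that spike activity nudges up or down, while a drift term pulls it toward the nearer of two stable states. The Python below is a schematic model of that behavior only; the parameter names and values are illustrative assumptions, not the chip's circuit equations.

```python
def bistable_update(w, pre_spike, post_active, theta=0.5, delta=0.05, drift=0.01):
    """One time step of a bi-stable spike-driven synapse (schematic).

    A presynaptic spike nudges the internal weight up if the postsynaptic
    neuron is active, down otherwise; between updates the weight drifts
    toward the nearer of two stable states (0 or 1), so it is effectively
    binary in the long run. theta/delta/drift are illustrative parameters.
    """
    if pre_spike:
        w += delta if post_active else -delta
    w += drift if w > theta else -drift      # drift toward nearest stable state
    return min(max(w, 0.0), 1.0)             # weight bounded in [0, 1]

# Repeated correlated activity consolidates the weight at the high state;
# repeated uncorrelated activity consolidates it at the low state.
w_up, w_down = 0.4, 0.6
for _ in range(30):
    w_up = bistable_update(w_up, pre_spike=True, post_active=True)
    w_down = bistable_update(w_down, pre_spike=True, post_active=False)
```

The appeal of bi-stability for analog hardware is robustness: because the weight always settles into one of two attractors, device mismatch and leakage perturb only the transient, not the stored value.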
Systematic configuration and automatic tuning of neuromorphic systems
2011
In recent years, several research groups have proposed neuromorphic Very Large Scale Integration (VLSI) devices that implement event-based sensors or biophysically realistic networks of spiking neurons. It has been argued that these devices can be used to build event-based systems for solving real-world applications in real time, with efficiencies and robustness that cannot be achieved with conventional computing technologies.