OSPEN: an open source platform for emulating neuromorphic hardware

N2S3, an Open-Source Scalable Spiking Neuromorphic Hardware Simulator

HAL (Le Centre pour la Communication Scientifique Directe), 2017

One of the most promising approaches to overcome the end of Moore's law is neuromorphic computing. Indeed, neural networks already have a great impact on machine learning applications and offer attractive properties for coping with the problems of nanoelectronics manufacturing, such as good tolerance to device variability and circuit defects, and low activity, leading to low energy consumption. We present here N2S3 (for Neural Network Scalable Spiking Simulator), an open-source simulator built to help design spiking neuromorphic circuits based on nanoelectronics. N2S3 is an event-based simulator, and its main properties are flexibility, extensibility, and scalability. One of our goals with the release of N2S3 as open-source software is to promote the reproducibility of research on neuromorphic hardware. We designed N2S3 to be used as a library, to be easily extended with new models, and to provide a user-friendly special-purpose language for describing simulations.
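The event-based approach mentioned in the abstract can be illustrated with a minimal, hypothetical sketch: leaky integrate-and-fire neurons whose membrane potentials are updated only when a spike event is popped from a time-ordered queue, with the exponential decay applied lazily between events. This is a generic illustration of event-driven simulation, not N2S3's actual API (N2S3 itself is a Scala library with its own domain-specific language):

```python
import heapq
import math

def event_driven_lif(spike_events, weights, tau=20.0, threshold=1.0):
    """Minimal event-driven simulation of leaky integrate-and-fire neurons.

    spike_events: list of (time, source) input spikes
    weights:      dict mapping source -> {target: weight}
    Membrane potentials decay exponentially with time constant tau and are
    updated only when an event arrives -- there is no fixed time step.
    Returns the list of (time, neuron) output spikes.
    """
    queue = list(spike_events)
    heapq.heapify(queue)                 # time-ordered event queue
    potential = {}                       # neuron -> potential at last update
    last_time = {}                       # neuron -> time of last update
    out_spikes = []
    while queue:
        t, src = heapq.heappop(queue)
        for tgt, w in weights.get(src, {}).items():
            # lazily decay the potential from the previous event to now
            dt = t - last_time.get(tgt, t)
            v = potential.get(tgt, 0.0) * math.exp(-dt / tau)
            v += w                       # integrate the incoming spike
            last_time[tgt] = t
            if v >= threshold:
                out_spikes.append((t, tgt))
                v = 0.0                  # reset after firing
            potential[tgt] = v
    return out_spikes
```

For example, two input spikes of weight 0.6 arriving 1 ms apart leave the target below threshold after the first event but push it over after the second, producing a single output spike. The design choice that makes this scale is that idle neurons cost nothing: work is proportional to the number of spike events, not to simulated time.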

Neuromorphic Computing, Architectures, Models, and Applications. A Beyond-CMOS Approach to Future Computing, June 29-July 1, 2016, Oak Ridge, TN

2016

(2) Configurations-What should we expect from reconfigurable devices? Traditionally, devices have for the most part been static (with gradual evolutionary modifications to architecture and materials, primarily based on CMOS), and software development, dependent on the system architecture, instruction set, and software stack, has not changed significantly. A reconfigurable device-enabled circuit architecture requires a tight connection between the hardware and software, blurring their boundary. Research Challenge: Computational models need to be able to run at extreme scales (exaflops and beyond) and leverage the performance of fully reconfigurable hardware that may be analog in nature, with computing operations that are highly concurrent, while accounting for nonlinear behavior, energy, and physical time-dependent plasticity.

(3) Learning Models-How is the system trained/programmed? Computing in general will need to move away from the stored-program model to a more dynamic, event-driven learning model, which requires a broad understanding of the theory behind learning and how best to apply it to a neuromorphic system. Research Challenge: Understand and apply advanced and emerging theoretical concepts from neuroscience, biology, physics, engineering, and artificial intelligence, and their overall relationship to learning, understanding, and discovery, to build models that will accelerate scientific progress.

(4) Development System-What application development environment is needed? A neuromorphic system must be easy to teach and easy to apply to a broad set of tasks, and there should be a suitable research community and investment to do so. Research Challenge: Develop system software, algorithms, and applications to program/teach/train.

(5) Applications-How can we best study and demonstrate application suitability?
The type of applications that seem best suited for neuromorphic systems are yet to be well defined, but complex spatio-temporal problems that are not effectively addressed using traditional computing are a potentially large class of applications. Research Challenge: Connect theoretical formalisms, architectures, and development systems with application developers in areas that are poorly served by existing computing technologies.

Versatile Emulation of Spiking Neural Networks on an Accelerated Neuromorphic Substrate

2020 IEEE International Symposium on Circuits and Systems (ISCAS), 2020

We present first experimental results on the novel BrainScaleS-2 neuromorphic architecture based on an analog neuro-synaptic core and augmented by embedded microprocessors for complex plasticity and experiment control. The high acceleration factor of 1000 compared to biological dynamics enables the execution of computationally expensive tasks, by allowing the fast emulation of long-duration experiments or rapid iteration over many consecutive trials. The flexibility of our architecture is demonstrated in a suite of five distinct experiments, which emphasize different aspects of the BrainScaleS-2 system.

A Scalable Approach to Modeling on Accelerated Neuromorphic Hardware

2022

Neuromorphic systems open up opportunities to enlarge the explorative space for computational research. However, it is often challenging to unite efficiency and usability. This work presents the software aspects of this endeavor for the BrainScaleS-2 system, a hybrid accelerated neuromorphic hardware architecture based on physical modeling. We introduce key aspects of the BrainScaleS-2 Operating System: experiment workflow, API layering, software design, and platform operation. We present use cases to discuss and derive requirements for the software and showcase the implementation. The focus lies on novel system and software features such as multi-compartmental neurons, fast re-configuration for hardware-in-the-loop training, applications for the embedded processors, the non-spiking operation mode, interactive platform access, and sustainable hardware/software co-development. Finally, we discuss further developments in terms of hardware scale-up, system usability, and efficiency.

Implementation of neuromorphic systems: from discrete components to analog VLSI chips (testing and communication issues)

Annali dell'Istituto superiore di sanità, 2001

We review a series of implementations of electronic devices aiming at imitating, to some extent, the structure and function of simple neural systems, with particular emphasis on communication issues. We first provide a short overview of general features of such "neuromorphic" devices and the implications of setting up "tests" for them. We then review the developments directly related to our work at the Istituto Superiore di Sanità (ISS): a pilot electronic neural network implementing a simple classifier, autonomously developing internal representations of incoming stimuli; an output network, collecting information from the previous classifier and extracting the relevant part to be forwarded to the observer; an analog, VLSI (very large scale integration) neural chip implementing a recurrent network of spiking neurons and plastic synapses, and the test setup for it; a board designed to interface the standard PCI (peripheral component interconnect) bus of a PC with a spec...

Neuromorphic Electronic Circuits for Building Autonomous Cognitive Systems

Proceedings of the IEEE, 2014

Several analog and digital brain-inspired electronic systems have been recently proposed as dedicated solutions for fast simulations of spiking neural networks. While these architectures are useful for exploring the computational properties of large-scale models of the nervous system, the challenge of building low-power compact physical artifacts that can behave intelligently in the real world and exhibit cognitive abilities still remains open. In this paper we propose a set of neuromorphic engineering solutions to address this challenge. In particular, we review neuromorphic circuits for emulating neural and synaptic dynamics in real-time and discuss the role of biophysically realistic temporal dynamics in hardware neural processing architectures; we review the challenges of realizing spike-based plasticity mechanisms in real physical systems and present examples of analog electronic circuits that implement them; we describe the computational properties of recurrent neural networks and show how neuromorphic Winner-Take-All circuits can implement working-memory and decision-making mechanisms. We validate the neuromorphic approach proposed with experimental results obtained from our own circuits and systems, and argue how the circuits and networks presented in this work represent a useful set of components for efficiently and elegantly implementing neuromorphic cognition.
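The winner-take-all mechanism this abstract refers to can be sketched in a schematic, discrete-time rate-based form: each unit excites itself and all units share a global inhibition term, so the unit with the strongest input suppresses the others. This is a simplified abstraction for illustration, not the analog circuits of the paper; the gain parameters below are hypothetical:

```python
import numpy as np

def winner_take_all(inputs, w_self=1.2, w_inhib=0.8, steps=50):
    """Discrete-time soft winner-take-all network.

    Each unit receives its external input, excites itself (w_self), and is
    inhibited by the summed activity of all units (w_inhib), with a rectifying
    nonlinearity. Iterating the dynamics drives every unit except the one
    with the largest input to zero.
    """
    x = np.asarray(inputs, dtype=float)
    r = np.zeros_like(x)                      # firing rates
    for _ in range(steps):
        inhib = w_inhib * r.sum()             # shared global inhibition
        r = np.maximum(0.0, x + w_self * r - inhib)
    return r
```

With inputs [1.0, 0.8, 0.3], the first unit wins and the other two are silenced within a few iterations. The same competitive dynamic is what lets such circuits hold a decision after the input is removed, which is how the paper connects winner-take-all to working memory.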

PyNCS: a microkernel for high-level definition and configuration of neuromorphic electronic systems

Frontiers in neuroinformatics, 2014

Neuromorphic hardware offers an electronic substrate for the realization of asynchronous event-based sensory-motor systems and large-scale spiking neural network architectures. In order to characterize these systems, configure them, and carry out modeling experiments, it is often necessary to interface them to workstations. The software used for this purpose typically consists of a large monolithic block of code which is highly specific to the hardware setup used. While this approach can lead to highly integrated hardware/software systems, it hampers the development of modular and reconfigurable infrastructures, thus preventing the rapid evolution of such systems. To alleviate this problem, we propose PyNCS, an open-source front-end for the definition of neural network models that is interfaced to the hardware through a set of Python Application Programming Interfaces (APIs). The design of PyNCS promotes modularity, portability and expandability and separates implementation from hardwa...

Neuromorphic hardware in the loop: Training a deep spiking network on the BrainScaleS wafer-scale system

2017 International Joint Conference on Neural Networks (IJCNN), 2017

Emulating spiking neural networks on analog neuromorphic hardware offers several advantages over simulating them on conventional computers, particularly in terms of speed and energy consumption. However, this usually comes at the cost of reduced control over the dynamics of the emulated networks. In this paper, we demonstrate how iterative training of a hardware-emulated network can compensate for anomalies induced by the analog substrate. We first convert a deep neural network trained in software to a spiking network on the BrainScaleS wafer-scale neuromorphic system, thereby enabling an acceleration factor of 10 000 compared to the biological time domain. This mapping is followed by in-the-loop training, where in each training step the network activity is first recorded in hardware and then used to compute the parameter updates in software via backpropagation. An essential finding is that the parameter updates do not have to be precise, but only need to approximately follow the correct gradient, which simplifies their computation. Using this approach, after only several tens of iterations, the spiking network reaches an accuracy close to that of the ideal software prototype. The presented techniques show that deep spiking networks emulated on analog neuromorphic devices can attain good computational performance despite the inherent variations of the analog substrate.
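The in-the-loop procedure described above can be sketched with a toy stand-in for the analog substrate: a "hardware" forward pass whose programmed weights are perturbed by fixed, unknown device mismatch, while the software side computes gradient updates from the recorded activity. The linear model and all names here are hypothetical simplifications (the actual work trains a deep spiking network on BrainScaleS); the point illustrated is the paper's finding that updates only need to approximately follow the correct gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, unknown per-device gains standing in for analog mismatch.
true_gain = rng.uniform(0.7, 1.3, size=2)

def hardware_forward(w, x):
    """Emulate running the network on 'hardware': the programmed weights w
    are distorted by device mismatch before producing the activity."""
    return (w * true_gain) @ x

X = rng.normal(size=(2, 64))                # input batch
target = np.array([1.0, -0.5]) @ X          # desired output activity
w = np.zeros(2)                             # programmed (software-side) weights
lr = 0.05
for step in range(200):
    y = hardware_forward(w, X)              # 1) record activity on hardware
    err = y - target                        # 2) software-side error signal
    grad = X @ err / X.shape[1]             # 3) approximate gradient: ignores
                                            #    the unknown gains entirely
    w -= lr * grad                          # 4) write updated weights back
loss = float(np.mean((hardware_forward(w, X) - target) ** 2))
```

Because the mismatch only rescales each gradient component by a positive factor, the approximate updates still point downhill, and the loop converges to weights that compensate for the distortion. This is the same reason the loop tolerates an imprecise gradient in the real system: the hardware itself is inside the optimization loop, so its quirks are absorbed into the learned parameters.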

Real-time cortical simulation on neuromorphic hardware

Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences

Real-time simulation of a large-scale biologically representative spiking neural network is presented, through the use of a heterogeneous parallelization scheme and SpiNNaker neuromorphic hardware. A published cortical microcircuit model is used as a benchmark test case, representing ≈1 mm² of early sensory cortex, containing 77k neurons and 0.3 billion synapses. This is the first hard real-time simulation of this model, with 10 s of biological simulation time executed in 10 s wall-clock time. This surpasses the best published efforts on HPC neural simulators (3× slowdown) and GPUs running optimized spiking neural network (SNN) libraries (2× slowdown). Furthermore, the presented approach indicates that real-time processing can be maintained with increasing SNN size, breaking the communication barrier incurred by traditional computing machinery. Model results are compared to an established HPC simulator baseline to verify simulation correctness, comparing well across a range of stati...

A Survey of Neuromorphic Computing and Neural Networks in Hardware

arXiv (Cornell University), 2017

Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast with the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.