Synthesis of Bayesian Machines On FPGAs Using Stochastic Arithmetic
Design of Stochastic Machines Dedicated to Approximate Bayesian Inferences
IEEE Transactions on Emerging Topics in Computing, 2016
We present an architecture and a compilation toolchain for stochastic machines dedicated to Bayesian inference. These machines are not Von Neumann machines: they encode information as stochastic bitstreams instead of floating-point representations. They rely only on stochastic arithmetic and on Gibbs sampling to perform approximate inference. They use banks of binary random generators that capture the prior knowledge on which the inference is built. The output of the machine is devised to continuously sample the joint probability distribution of interest. The method is explained on a simple example, and we show that our machine computes a good approximation of the solution to a problem that is intractable for exact inference.
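The core primitive behind such machines is that a probability can be encoded as the density of 1s in a random bitstream, and that a single AND gate then multiplies two independently encoded probabilities. A minimal Python sketch of this encoding (stream length and input values chosen for illustration, not taken from the paper):

```python
import random

def bernoulli_stream(p, n, rng):
    """Encode probability p as n random bits with P(bit = 1) = p."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stream_to_prob(bits):
    """Decode a stochastic bitstream back to a probability estimate."""
    return sum(bits) / len(bits)

rng = random.Random(0)
N = 100_000
a = bernoulli_stream(0.6, N, rng)
b = bernoulli_stream(0.5, N, rng)

# A bitwise AND of two independent streams multiplies their probabilities:
# P(a_i AND b_i = 1) = P(a_i = 1) * P(b_i = 1).
product = [x & y for x, y in zip(a, b)]
est = stream_to_prob(product)   # close to 0.6 * 0.5 = 0.3
```

The approximation error shrinks as the stream length grows (roughly as 1/sqrt(N)), which is the time/accuracy trade-off these architectures exploit.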
Hardware-level Bayesian inference
2017
Brain-inspired, inherently parallel computation has been proven to excel at tasks where the intrinsically serial Von Neumann architecture struggles. This has led to vast efforts toward developing bio-inspired electronics, most notably in the guise of artificial neural networks (ANNs). However, ANNs are simply one possible substrate upon which computation can be carried out, with their configuration determining which computational function is performed. In this work we show how Bayesian inference, a fundamental computational function, can be carried out using arrays of memristive devices, demonstrating computation directly using probability distributions as inputs and outputs. Our approach bypasses the need to map the Bayesian computation onto an ANN (or any other) substrate, since computation is carried out by simply providing the input distributions and letting Ohm’s law converge the voltages within the system to the correct answer. We show the fundamental circuit block...
A Reconfigurable Architecture with Sequential Logic-Based Stochastic Computing
ACM Journal on Emerging Technologies in Computing Systems, 2017
Computations based on stochastic bit streams have several advantages compared to deterministic binary radix computations, including low power consumption, low hardware cost, high fault tolerance, and skew tolerance. To take advantage of this computing technique, previous work proposed a combinational logic-based reconfigurable architecture to perform complex arithmetic operations on stochastic streams of bits. The long execution time and the cost of converting between binary and stochastic representations, however, make the stochastic architectures less energy efficient than the deterministic binary implementations. This article introduces a methodology for synthesizing a given target function stochastically using finite-state machines (FSMs), and enhances and extends the reconfigurable architecture using sequential logic. Compared to the previous approach, the proposed reconfigurable architecture can save hardware area and energy consumption by up to 30% and 40%, respectively, whil...
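The FSM-based synthesis the abstract refers to can be illustrated with the classic saturating-counter stochastic element, which approximates a tanh-shaped function in the bipolar encoding x = 2p − 1. The following Python sketch simulates such a counter (state count, input value, and stream length are illustrative choices, not the article's parameters):

```python
import random

def fsm_tanh(p_in, K=8, n=200_000, seed=0):
    """K-state saturating counter driven by a Bernoulli(p_in) stream.

    In bipolar coding the output approximates a tanh-like saturation
    of the input; more states give a steeper curve.
    """
    rng = random.Random(seed)
    state = K // 2
    ones = 0
    for _ in range(n):
        bit = rng.random() < p_in
        # Each input bit moves the counter up or down, saturating at the ends.
        state = min(K - 1, state + 1) if bit else max(0, state - 1)
        # The output bit is 1 while the FSM sits in the upper half of states.
        ones += state >= K // 2
    return ones / n

p_out = fsm_tanh(0.7)        # input x = 2*0.7 - 1 = 0.4 in bipolar coding
x_out = 2 * p_out - 1        # saturated, tanh-like response near 0.93
```

Because the function is realized by state transitions rather than by a combinational network, a handful of flip-flops replaces what would otherwise be a large polynomial approximation circuit.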
An Architecture for Fault-Tolerant Computation with Stochastic Logic
2011
Mounting concerns over variability, defects and noise motivate a new approach for digital circuitry: stochastic logic, that is to say, logic that operates on probabilistic signals and so can cope with errors and uncertainty. Techniques for probabilistic analysis of circuits and systems are well established. We advocate a strategy for synthesis. In prior work, we described a methodology for synthesizing such logic, operating on probabilistic bit streams. In this paper, we apply the concept of stochastic logic to a reconfigurable architecture that implements processing operations on a datapath. We analyze cost as well as the sources of error: approximation, quantization, and random fluctuations. We study the effectiveness of the architecture on a collection of benchmarks for image processing. The stochastic architecture requires less area than conventional hardware implementations. Moreover, it is much more tolerant of soft errors (bit flips) than these deterministic implementations. This fault tolerance scales gracefully to very large numbers of errors.
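Alongside AND-gate multiplication, the other workhorse of such datapaths is the multiplexer, which performs scaled (convex) addition so that results stay within the [0, 1] range the encoding can represent. A small Python sketch of this building block (values chosen for illustration):

```python
import random

rng = random.Random(1)
N = 100_000

def stream(p):
    """Unipolar stochastic stream: P(bit = 1) encodes the value p."""
    return [rng.random() < p for _ in range(N)]

a, b = stream(0.2), stream(0.8)

# A 2:1 multiplexer with a select stream of probability s computes
# s*pa + (1-s)*pb -- a weighted average, not a plain sum, which keeps
# the output inside the representable interval [0, 1].
sel = stream(0.5)
out = [x if s else y for s, x, y in zip(sel, a, b)]
est = sum(out) / N   # close to 0.5*0.2 + 0.5*0.8 = 0.5
```

A single flipped bit in any of these streams perturbs the decoded value by only 1/N, which is the mechanism behind the graceful fault tolerance the abstract reports.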
Stochastic Bayesian Computation for Autonomous Robot Sensorimotor System
2015
This paper presents a stochastic computing implementation of a Bayesian sensorimotor system that performs obstacle avoidance for an autonomous robot. In previous work we showed that we can automatically design a probabilistic machine that computes inferences on a Bayesian model using stochastic arithmetic. Starting from a high-level Bayesian model description, our compiler generates an electronic circuit, corresponding to the probabilistic inference, operating on stochastic bit streams. Our goal in this paper is to show that our compilation toolchain and simulation device work on a classic robotic application: sensor fusion for obstacle avoidance. The novelty is in the way the computations are implemented, opening the way for future low-power autonomous robots using such circuits to perform Bayesian inference.
Bayesian Estimation and Inference Using Stochastic Electronics
Frontiers in Neuroscience, 2016
In this paper, we present the implementation of two types of Bayesian inference problems to demonstrate the potential of building probabilistic algorithms in hardware using a single set of building blocks, with the ability to perform these computations in real time. The first implementation, referred to as the BEAST (Bayesian Estimation and Stochastic Tracker), demonstrates a simple problem where an observer uses an underlying Hidden Markov Model (HMM) to track a target in one dimension. In this implementation, sensors make noisy observations of the target position at discrete time steps. The tracker learns the transition model for target movement and the observation model for the noisy sensors, and uses these to estimate the target position by solving the Bayesian recursive equation online. We show the tracking performance of the system and demonstrate how it can learn the observation model, the transition model, and the probability of an external distractor (noise) interfering with the observations. In the second implementation, referred to as the Bayesian INference in DAG (BIND), we show how inference can be performed in a Directed Acyclic Graph (DAG) using stochastic circuits. We show how these building blocks can be easily implemented using simple digital logic gates. An advantage of the stochastic electronic implementation is that it is robust to certain types of noise, which may become an issue in integrated circuit (IC) technology with feature sizes on the order of tens of nanometers, due to the low noise margin, the effect of high-energy cosmic rays, and the low supply voltage. In our framework, the flipping of random individual bits would not affect the system performance because information is encoded in a bit stream.
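The Bayesian recursive equation solved by the tracker alternates a prediction step (propagate the belief through the transition model) and a correction step (multiply by the observation likelihood and renormalise). A pure-Python sketch of such a 1-D grid tracker, with a hypothetical random-walk transition model and sensor accuracy (the values below are illustrative, not the paper's learned models):

```python
# Hypothetical 1-D tracker: target moves on 5 cells; the sensor reports
# the true cell with likelihood 0.8 and any other cell with 0.1.
N_CELLS = 5

# Transition model P(j | i): stay or move to an adjacent cell, uniformly.
T = []
for i in range(N_CELLS):
    row = [1.0 if abs(i - j) <= 1 else 0.0 for j in range(N_CELLS)]
    s = sum(row)
    T.append([v / s for v in row])

def obs_likelihood(z):
    """P(observation z | true cell j)."""
    return [0.8 if j == z else 0.1 for j in range(N_CELLS)]

def bayes_step(belief, z):
    # Prediction: propagate the belief through the transition model.
    predicted = [sum(belief[i] * T[i][j] for i in range(N_CELLS))
                 for j in range(N_CELLS)]
    # Correction: weight by the observation likelihood, then renormalise.
    posterior = [p * l for p, l in zip(predicted, obs_likelihood(z))]
    s = sum(posterior)
    return [p / s for p in posterior]

belief = [1.0 / N_CELLS] * N_CELLS
for z in [2, 2, 3]:                      # a short observation sequence
    belief = bayes_step(belief, z)
map_estimate = belief.index(max(belief)) # most probable cell
```

In the stochastic-electronics version, each of these multiplications and normalisations is carried out on bit streams by the gate-level blocks described above, rather than in floating point.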
Realization of Mathematical Function by Stochastic Logic Using FPGA
Stochastic computing (SC) combines analog and digital concepts: stochastic logic computes on analog values that are encoded and processed as digital bitstreams. This computing approach offers advantages such as low power consumption, low delay, reduced hardware cost, and high fault tolerance. This paper describes stochastic logic operation with an application in the digital signal processing field: the finite impulse response (FIR) filter. The paper makes two contributions. First, it shows the operation of an FIR filter using stochastic logic computation. Second, it compares the output of the conventional and stochastic FIR filters; this comparison verifies the efficiency of the computing method along with the various parameters that are most vital in VLSI design. The whole stochastic logic design is coded in VHDL (VHSIC Hardware Description Language). The outputs of both filters were simulated, compiled, and synthesized using ModelSim-Altera 10.3c and Quartus II 9.1sp2.
CMOS + stochastic nanomagnets: heterogeneous computers for probabilistic inference and learning
arXiv (Cornell University), 2023
Extending Moore's law by augmenting complementary-metal-oxide semiconductor (CMOS) transistors with emerging nanotechnologies (X) has become increasingly important. One important class of problems involves sampling-based Monte Carlo algorithms used in probabilistic machine learning, optimization, and quantum simulation. Here, we combine stochastic magnetic tunnel junction (sMTJ)-based probabilistic bits (p-bits) with Field Programmable Gate Arrays (FPGAs) to create an energy-efficient CMOS + X (X = sMTJ) prototype. This setup shows how asynchronously driven CMOS circuits controlled by sMTJs can perform probabilistic inference and learning by leveraging the algorithmic update-order invariance of Gibbs sampling. We show how the stochasticity of sMTJs can augment low-quality random number generators (RNGs). Detailed transistor-level comparisons reveal that sMTJ-based p-bits can replace up to 10,000 CMOS transistors while dissipating two orders of magnitude less energy. Integrated versions of our approach can advance probabilistic computing involving deep Boltzmann machines and other energy-based learning algorithms with extremely high throughput and energy efficiency.
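A p-bit behaves as a binary stochastic neuron whose flip probability follows a sigmoid of its local field, so a network of p-bits implements Gibbs sampling of a Boltzmann (Ising) distribution in hardware. A minimal software sketch of this dynamics on a 3-spin ferromagnet (couplings, temperature, and sweep count are illustrative, not the paper's experimental values):

```python
import math
import random

# Hypothetical 3-spin Ising model: ferromagnetic couplings, no biases.
J = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0}
h = [0.0, 0.0, 0.0]

def local_field(s, i):
    """Effective field on spin i given the current configuration s."""
    f = h[i]
    for (a, b), w in J.items():
        if a == i:
            f += w * s[b]
        elif b == i:
            f += w * s[a]
    return f

def gibbs(steps, beta=1.0, seed=0):
    """Sequential Gibbs sweeps; each spin update is one p-bit 'flip'."""
    rng = random.Random(seed)
    s = [rng.choice([-1, 1]) for _ in range(3)]
    counts = {}
    for _ in range(steps):
        for i in range(3):
            # Sigmoidal flip probability: the p-bit response function.
            p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * local_field(s, i)))
            s[i] = 1 if rng.random() < p_up else -1
        counts[tuple(s)] = counts.get(tuple(s), 0) + 1
    return counts

counts = gibbs(20_000)
top = max(counts, key=counts.get)   # the aligned states dominate
```

The update-order invariance mentioned in the abstract is what lets the asynchronous sMTJ flips replace the fixed sequential sweep used in this software sketch.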
A Memristor-Based Bayesian Machine
ArXiv, 2021
In recent years, a considerable research effort has shown the energy benefits of implementing neural networks with memristors or other emerging memory technologies. However, for extreme-edge applications with high uncertainty, access to reduced amounts of data, and where explainable decisions are required, neural networks may not provide an acceptable form of intelligence. Bayesian reasoning can address these concerns, but it is computationally expensive and, unlike neural networks, does not translate naturally to memristor-based architectures. In this work, we introduce, demonstrate experimentally on a fully fabricated hybrid CMOS-memristor system, and analyze a Bayesian machine designed for highly energy-efficient Bayesian reasoning. The architecture of the machine is obtained by writing Bayes’ law in a way that makes its implementation natural using the principles of distributed memory and stochastic computing, allowing the circuit to function using solely local memory and minimal data mo...
Bayesian Sensor Fusion with Fast and Low Power Stochastic Circuits
As the physical limits of Moore's law are being reached, research efforts have been launched to achieve further performance improvements by exploring computation paradigms that depart from standard approaches. The BAMBI project (Bottom-up Approaches to Machines dedicated to Bayesian Inference) aims at developing hardware dedicated to probabilistic computation, which extends the logic computation realised by Boolean gates in current computer chips. Such probabilistic computing devices would make it possible to solve a wide range of Artificial Intelligence applications faster and at a lower energy cost, especially when decisions need to be taken from incomplete data in an uncertain environment. This paper describes an architecture where very simple operators compute on a time coding of probability values as stochastic signals. Simulation tests and a reconfigurable logic hardware implementation demonstrated the feasibility and performance of the proposed inference machine. Hardware results show this architecture can quickly solve Bayesian sensor fusion problems and is very efficient in terms of energy consumption.
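The sensor-fusion computation such machines target is, at its core, a product of a prior with per-sensor likelihoods followed by normalisation, and the stochastic version replaces each multiply with an AND of bitstreams. The sketch below contrasts the exact fusion with its bitstream estimate for a binary "obstacle" hypothesis (prior, likelihoods, and stream length are assumed values for illustration):

```python
import random

prior = 0.5
lik_obstacle = [0.9, 0.8]   # P(reading | obstacle) for two sensors (assumed)
lik_free = [0.2, 0.3]       # P(reading | no obstacle)

# Exact fusion: posterior odds are prior odds times the likelihood products.
num, den = prior, 1 - prior
for lo, lf in zip(lik_obstacle, lik_free):
    num *= lo
    den *= lf
posterior = num / (num + den)

# Stochastic version: every probability becomes a Bernoulli bitstream and
# each multiplication is a bitwise AND; normalisation compares 1-counts.
rng = random.Random(0)
N = 200_000
def stream(p):
    return [rng.random() < p for _ in range(N)]

num_bits, den_bits = stream(prior), stream(1 - prior)
for lo, lf in zip(lik_obstacle, lik_free):
    num_bits = [a and b for a, b in zip(num_bits, stream(lo))]
    den_bits = [a and b for a, b in zip(den_bits, stream(lf))]
n1, n0 = sum(num_bits), sum(den_bits)
stoch_posterior = n1 / (n1 + n0)
```

The time coding means accuracy is bought with stream length rather than with wide multipliers, which is where the energy savings reported above come from.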