Stochastic Computing: An Introduction

Survey of Stochastic Computing

Stochastic computing (SC) was proposed in the 1960s as a low-cost alternative to conventional binary computing. It is unique in that it represents and processes information in the form of digitized probabilities. SC employs very low-complexity arithmetic units, which addressed a primary design concern of the past: hardware cost. Despite this advantage and its inherent error tolerance, SC was long seen as impractical because of very long computation times and relatively low accuracy. However, current technology trends tend to increase uncertainty in circuit behavior and imply a need to better understand, and perhaps exploit, probability in computation. This paper surveys SC from a modern perspective in which the small size, error resilience, and probabilistic features of SC may compete successfully with conventional methodologies in certain applications. First, we survey the literature and review the key concepts of stochastic number representation and circuit structure. We then describe the design of SC-based circuits and evaluate their advantages and disadvantages. Finally, we give examples of the potential applications of SC and discuss some practical problems that are yet to be solved.
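To make the notion of computing on digitized probabilities concrete, here is a minimal Python sketch (ours, not from the survey) of encoding a value as a random bitstream and reading it back:

```python
import random

def encode(x, n, rng=random):
    """Represent x in [0, 1] as an n-bit stream whose bits are 1 with probability x."""
    return [1 if rng.random() < x else 0 for _ in range(n)]

def decode(stream):
    """The represented value is simply the fraction of 1s in the stream."""
    return sum(stream) / len(stream)

s = encode(0.75, 1024)
print(decode(s))  # close to 0.75, up to random fluctuation
```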

Stochastic computation

Proceedings of the 47th Design Automation Conference on - DAC '10, 2010

Stochastic computation, as presented in this paper, exploits the statistical nature of application-level performance metrics and matches it to the statistical attributes of the underlying device and circuit fabrics. Nanoscale circuit fabrics are viewed as noisy communication channels/networks. Communications-inspired design techniques based on estimation and detection theory are proposed. Stochastic computation advocates an explicit characterization and exploitation of error statistics at the architectural and system levels. This paper traces the roots of stochastic computing from the von Neumann era to its current form. Design and CAD challenges are described.

An Overview of Time-Based Computing with Stochastic Constructs

IEEE Micro, 2017

Computing on time-based data is a recent evolution of research in stochastic computing (SC). As with SC, complex functions can be computed with low area cost, but the latency and energy efficiency are favorable compared to computations on conventional binary radix. This article reviews the design and implementation of arithmetic operations on time-encoded signals and discusses the advantages, challenges, and potential applications.

Stochastic computing (SC), a paradigm first introduced by W.J. Poppelbaum [1] and Brian Gaines [2] in the 1960s, has received considerable attention in recent years, particularly after Weikang Qian and colleagues reintroduced the concept to the electronic design automation community [3,4]. It has since been explored as a potential paradigm for emerging technologies and "post-CMOS" computing. SC systems have very low area cost. This generally translates to low power consumption, making the paradigm interesting for ultra-low-power processing systems. In SC systems, logical computation is performed on random bitstreams called stochastic numbers (SNs). Two representations are used:

• In the unipolar representation, each real-valued number x (0 ≤ x ≤ 1) is represented by a sequence of random bits, each of which has probability x of being 1 and probability 1 - x of being 0.
• In the bipolar representation (-1 ≤ x ≤ 1), each bit in the stream has probability (x + 1)/2 of being 1 and 1 - (x + 1)/2 of being 0.

For example, 10011, 10101, and 11100 are all SNs representing 0.6 in the unipolar representation and 0.2 in the bipolar representation. SC offers some intriguing advantages over conventional binary radix: complex functions can be implemented with simple hardware, which enables the design of low-area and low-power circuits.
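The two representations, and the classic observation that a single AND gate multiplies independent unipolar streams, can be illustrated with a short Python sketch (ours; the stream length and values are arbitrary):

```python
import random

def unipolar(x, n, rng=random):
    # unipolar: each bit is 1 with probability x, for 0 <= x <= 1
    return [1 if rng.random() < x else 0 for _ in range(n)]

def bipolar(x, n, rng=random):
    # bipolar: each bit is 1 with probability (x + 1) / 2, for -1 <= x <= 1
    return unipolar((x + 1) / 2, n, rng)

def decode_unipolar(s):
    return sum(s) / len(s)

def decode_bipolar(s):
    return 2 * decode_unipolar(s) - 1

n = 4096
a, b = unipolar(0.5, n), unipolar(0.6, n)
# ANDing two independent unipolar streams multiplies their values:
# P(a_i AND b_i) = P(a_i) * P(b_i)
product = [x & y for x, y in zip(a, b)]
print(decode_unipolar(product))  # approximately 0.5 * 0.6 = 0.30
```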

A Monolithic Stochastic Computing Architecture for Energy and Area Efficient Arithmetic

2022

As the energy and hardware investments necessary for conventional high-precision digital computing continue to explode in the emerging era of artificial intelligence, deep learning, and big data [1-4], a change in paradigm that can trade precision for energy and resource efficiency is being sought for many computing applications. Stochastic computing (SC) is an attractive alternative since, unlike digital computers, which require many logic gates and a high transistor volume to perform basic arithmetic operations such as addition, subtraction, multiplication, and sorting, SC can implement the same operations using simple logic gates [5, 6]. While it is possible to accelerate SC using traditional silicon complementary metal oxide semiconductor (CMOS) technology [7, 8], the need for extensive hardware investment to generate stochastic bits (s-bits), the fundamental computing primitive for SC, makes it less attractive. Memristor [9-11] and spin-based devices [12-15] offer natural randomness but d...
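The "extensive hardware investment" refers to the stochastic number generator (SNG), conventionally built from a pseudorandom source and a comparator. A software model of a common LFSR-based SNG (our sketch; the seed and tap choices are illustrative) looks like this:

```python
def lfsr(seed, taps, nbits, length):
    """Fibonacci LFSR: emit 'length' pseudorandom states, each nbits wide."""
    state, out = seed, []
    for _ in range(length):
        out.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1
        state = ((state << 1) | fb) & ((1 << nbits) - 1)
    return out

def sng(value, nbits=8, seed=0b10110101, taps=(7, 5, 4, 3)):
    """Comparator-based SNG: emit 1 whenever the LFSR state is below
    the binary-encoded input value."""
    threshold = int(value * (1 << nbits))
    states = lfsr(seed, taps, nbits, (1 << nbits) - 1)  # one full period
    return [1 if s < threshold else 0 for s in states]

bits = sng(0.25)
print(sum(bits) / len(bits))  # ~0.25
```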

Enhancing Stochastic Computations via Process Variation

Stochastic computing has emerged as a computational paradigm that offers high-performance, compact arithmetic operators that are robust to errors by producing approximate results. This work addresses two of the major limitations affecting its accuracy: the correlation between stochastic bitstreams and unobserved signal transitions. A novel implementation of stochastic arithmetic building blocks is proposed to improve the quality of the results. It relies on Self-Timed Ring Oscillators to produce clock signals with different frequencies, taking advantage of the influence of process variation on the timing of the logic elements of the FPGA. This work also presents an automated test platform for stochastic systems, which was used to evaluate the impact of the proposed enhancements. Tests were performed comparing the proposed and typical implementations on reconfigurable devices with 28 nm and 60 nm fabrication processes. Finally, the presented results demonstrate that the proposed architectures, subject to the impact of process variation, improve the quality of the results.
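The correlation problem the paper targets is easy to demonstrate: an AND gate only multiplies stream values when the streams are statistically independent, which is why the authors derive distinct clocks from process-variation-dependent ring oscillators. A small Python illustration (ours, not the paper's FPGA setup):

```python
import random

def stream(x, n, rng):
    return [1 if rng.random() < x else 0 for _ in range(n)]

n = 4096
a = stream(0.5, n, random.Random(1))
b = stream(0.5, n, random.Random(2))  # independent source: uncorrelated

uncorrelated = sum(x & y for x, y in zip(a, b)) / n  # ~0.25 = 0.5 * 0.5
correlated = sum(x & y for x, y in zip(a, a)) / n    # exactly 0.5: ANDing a
                                                     # stream with itself
print(uncorrelated, correlated)
```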

Performing Stochastic Computation Deterministically

IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2019

Stochastic logic performs computation on data represented by random bit-streams. The representation allows complex arithmetic to be performed with very simple logic, but it suffers from high latency and poor precision. Furthermore, the results are always somewhat inaccurate due to random fluctuations. In this paper, we show that randomness is not a requirement for this computational paradigm. If properly structured, the same arithmetical constructs can operate on deterministic bit-streams, with the data represented uniformly by the fraction of 1s versus 0s. This paper presents three approaches for the computation: relatively prime stream lengths, rotation, and clock division. Unlike stochastic methods, all three of our deterministic methods produce completely accurate results. The cost of generating the deterministic streams is a small fraction of the cost of generating streams from random/pseudorandom sources. Most importantly, the latency is reduced by a factor of 1/2^n, where n is the equivalent number of bits of precision. When computing in unary, the bit-stream length increases with each level of logic. This is an inevitable consequence of the representation, but it can result in unmanageable bit-stream lengths. We discuss two methods for maintaining constant bit-stream lengths via approximations, based on low-discrepancy sequences. These methods provide the best accuracy and area×delay product. They are fast-converging and so offer progressive precision.
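The first of the three approaches, relatively prime stream lengths, can be sketched in a few lines of Python (ours): repeating two deterministic unary streams whose lengths share no common factor makes every bit of one stream meet every bit of the other exactly once, so an AND gate computes the product exactly.

```python
from math import gcd

def unary(value, length):
    """Deterministic stream: the value is the fraction of 1s, packed first."""
    k = round(value * length)
    return [1] * k + [0] * (length - k)

la, lb = 5, 7              # relatively prime lengths
assert gcd(la, lb) == 1
a = unary(3 / 5, la)
b = unary(4 / 7, lb)

total = la * lb            # every bit pairing occurs exactly once
product = [a[i % la] & b[i % lb] for i in range(total)]
print(sum(product) / total)  # exactly (3/5) * (4/7) = 12/35
```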

On Memory System Design for Stochastic Computing

IEEE Computer Architecture Letters, 2018

Growing uncertainty in design parameters (and therefore, in design functionality) renders stochastic computing, which represents and processes data as quantized probabilities, particularly promising. However, due to the difference in data representation, integrating conventional memory (designed and optimized for non-stochastic computing) into stochastic computing systems inevitably incurs a significant data conversion overhead. Barely any stochastic computing proposal to date covers the memory impact. In this paper, as the first study of its kind to the best of our knowledge, we rethink memory system design for stochastic computing. The result is a seamless stochastic system, StochMem, which features analog memory to trade the energy and area overhead of data conversion for computation accuracy. In this manner, StochMem can reduce the energy (area) overhead by up to 52.8% (93.7%) at the cost of at most a 0.7% loss in computation accuracy.
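The conversion overhead at issue is easy to see in software terms: every operand entering the stochastic domain needs a stream generator, and every result leaving it needs a counter. A schematic Python sketch (ours, not StochMem's design):

```python
import random

def binary_to_stochastic(x, n, rng=random):
    # one comparison per cycle, n cycles per operand
    return [1 if rng.random() < x else 0 for _ in range(n)]

def stochastic_to_binary(stream):
    # a pop-count counter running for the whole stream length
    return sum(stream) / len(stream)

# Round-tripping operands through these converters on every memory access
# is the overhead that motivates keeping the data in analog form instead.
print(stochastic_to_binary(binary_to_stochastic(0.3, 2048)))
```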

Stochastic hardware architectures: A survey

2012 International Conference on Energy Aware Computing, 2012

Many emerging computer applications may be classified as recognition, mining, and synthesis (RMS) applications, or as stream-based media applications. One interesting and useful property of such applications is that they are tolerant of errors. In fact, these applications allow discrepancies in intermediate computations but are nevertheless able to provide "acceptable" results. Research in this area has leveraged this error tolerance to relax the zero-error requirement at the hardware level and to shift error correction or concealment to the software application level. The main advantage of such stochastic hardware architectures is the major energy savings obtained because the circuits can be operated at reduced power-supply levels. The hardware errors may be due to different components in the computer system. The purpose of this paper is to survey techniques used in the design of stochastic architectures.

Stochastic Arithmetic, theory and experiments

Stochastic arithmetic has been developed as a model for exact computing with imprecise data. Stochastic arithmetic provides confidence intervals for the numerical results and can be implemented in any existing numerical software by redefining types of the variables and overloading the operators on them. Here some properties of stochastic arithmetic are further investigated and applied to the computation of inner products and the solution to linear systems. Several numerical experiments are performed showing the efficiency of the proposed approach.
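The operator-overloading idea can be sketched in Python (our toy, inspired by CESTAC-style stochastic arithmetic rather than copied from the paper): carry several samples of each value, randomly perturb the last bit of every operation's result, and let the spread across samples estimate how many digits of the result are trustworthy.

```python
import random
import statistics

class StochasticFloat:
    """Each value carries N samples; every operation randomly perturbs
    its result by one unit in the last place, so the spread across the
    samples estimates the rounding error of the computation."""
    N = 3  # three samples, as in classical CESTAC

    def __init__(self, samples):
        self.samples = list(samples)

    @classmethod
    def of(cls, x):
        return cls([float(x)] * cls.N)

    @staticmethod
    def _perturb(x):
        return x + random.choice((-1.0, 1.0)) * abs(x) * 2.0**-53

    def _apply(self, other, op):
        return StochasticFloat(self._perturb(op(a, b))
                               for a, b in zip(self.samples, other.samples))

    def __add__(self, other): return self._apply(other, lambda a, b: a + b)
    def __sub__(self, other): return self._apply(other, lambda a, b: a - b)
    def __mul__(self, other): return self._apply(other, lambda a, b: a * b)

    def mean(self):   return statistics.mean(self.samples)
    def spread(self): return statistics.stdev(self.samples)

x, y = StochasticFloat.of(1.0e16), StochasticFloat.of(1.0)
z = (x + y) - x   # catastrophic cancellation: the true answer is 1.0
print(z.mean(), z.spread())  # a spread on the order of the mean flags it
```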

On the Theory of Stochastic Processors

2010 Seventh International Conference on the Quantitative Evaluation of Systems, 2010

Traditional architecture design approaches hide hardware uncertainties from the software stack through overdesign, which is often expensive in terms of power consumption. The recently proposed quantitative alternative of stochastic computing requires circuits and processors to be correct only probabilistically, and to use less power. In this paper, we present a first step towards a theory of stochastic computing. Specifically, a formal model of a device that computes a deterministic function with stochastic delays is presented; the semantics of a stochastic circuit is obtained by composing such devices; and a quantitative notion of stochastic correctness, called the correctness factor (CF), is introduced. For random data sources, a closed-form expression is derived for the CF of devices, which shows that two probabilities contribute positively: the probability of being timely with the current inputs and the probability of being lucky with past inputs. Finally, we show the characteristic graphs, obtained from the analytical expressions, of the variation of the correctness factor with clock period for several simple circuits and sources.
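The two contributions to the correctness factor, being timely and being lucky, can be reproduced with a toy Monte Carlo model (ours, far simpler than the paper's formal semantics):

```python
import random

def correctness_factor(T, delay, inputs, f, trials=100_000):
    """Estimate CF: the probability the latched output matches f(current
    input), either because the device met the clock period T (timely) or
    because the stale output for the previous input coincided (lucky)."""
    hits = 0
    for _ in range(trials):
        prev, cur = random.choice(inputs), random.choice(inputs)
        observed = f(cur) if delay() <= T else f(prev)
        hits += observed == f(cur)
    return hits / trials

# Toy device: an AND gate whose delay is normally distributed.
inputs = [(a, b) for a in (0, 1) for b in (0, 1)]
f = lambda ab: ab[0] & ab[1]
delay = lambda: random.gauss(1.0, 0.2)

for T in (0.6, 0.8, 1.0, 1.2, 1.4):
    print(T, correctness_factor(T, delay, inputs, f))
# CF rises with the clock period but never falls below the "lucky"
# floor P(f(prev) == f(cur)), which is 10/16 for this gate.
```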