Unconventional computing for Bayesian inference

International Journal of Approximate Reasoning, 2017

Abstract

This special issue focuses on recent advances and future directions in unconventional computational solutions for performing Bayesian inference. It is a follow-up of a workshop held within the context of a major robotics conference, the IROS 2015 (IEEE/RSJ International Conference on Intelligent Robots and Systems 2015) workshop on Unconventional Computing for Bayesian Inference (UCBI2015), which addressed Bayesian inference for autonomous robots, insights from computational biology, and related topics. The special issue includes novel contributions beyond the scope of robotics.

Contemporary robots and other cognitive artifacts are not yet ready to operate autonomously in complex environments. The major reason for this failure is the lack of cognitive systems able to deal efficiently with uncertainty when behaving in real-world situations. One of the challenges of robotics is endowing devices with adequate computational power to dwell in uncertainty and decide with incomplete data, under limited resources and power, as biological beings have done for a long time. In this context, Bayesian approaches have been used to deal with incompleteness and uncertainty, exhibiting promising results. However, all these works, even if they propose probabilistic models, still rely on a classical computing paradigm that imposes a bottleneck on performance and scalability. Improved and novel electronic devices have broadened the spectrum of devices available for computation, such as GPUs, FPGAs, and hybrid systems, allowing unconventional approaches to better exploit parallelisation and tackle power consumption. The flexibility of current reprogrammable logic devices provides a test bed for novel stochastic processors and unconventional computing.

In fact, Bayesian inference has by now found its way into all branches of science, including the natural sciences (e.g. [1–8]), the humanities and social sciences (e.g. [9–13]), and also the applied sciences, such as medicine (e.g. [14,15]).
It has become so pervasive that even the scientific method has occasionally been interpreted as an application of Bayesian inference [16]. In most cases, Bayesian inference has been used as an alternative to the more "traditional" frequentist inference methods for the statistical analysis of data. On the other hand, for several reasons (in most cases of a technical, computational, or even epistemological nature), the application of Bayesian inference to model cognition and reasoning – either to study the animal brain, as in neuroscience, or to synthesise artificial brains, as in artificial intelligence (AI) or robotics – was for many years not as common. However, in the past couple of decades, as confirmed, for example, by Chater, Tenenbaum, and Yuille [17], the Bayesian approach has become ubiquitous. In AI, numerous recent examples of the popularity of the Bayesian approach can be found in the literature, such as [18–20]. Robotics has followed suit, as can be seen in [21–24], as well as in the interest of the robotics community in the UCBI workshop. The application of the Bayesian approach to modelling consists in establishing a joint distribution describing a generative model, thereby encoding the probability of any set of data D being generated from any given hypothesis H. Bayesian inference is then the process of determining information based on the conditional probabilities of any hypothesis given the available data, by applying Bayes' rule. Information, in this context, can mean the exact values or estimates of (1) the probability of a specific hypothesis or set of hypotheses [H = h] given a specific instantiation of the data [D = d], expressed as P(h | d); (2) the full posterior distribution, P(H | d); or it may also mean (3) a probabilistic decision based on the posterior distribution in favour of specific hypotheses, f_h(P(H | d)). We can therefore define Bayesian computation in generic terms as the act of executing Bayesian inference.
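The three kinds of information listed above can be made concrete with a minimal sketch of discrete Bayesian inference. The hypothesis and data values, priors, and likelihoods below are purely illustrative assumptions, not taken from any of the cited works:

```python
# Minimal discrete Bayesian inference sketch (illustrative numbers only).
# The generative model is the joint P(H, D) = P(H) * P(D | H), here over
# two hypotheses and two possible data values.

priors = {"h1": 0.7, "h2": 0.3}          # P(H)
likelihoods = {                           # P(D | H)
    "h1": {"d1": 0.9, "d2": 0.1},
    "h2": {"d1": 0.2, "d2": 0.8},
}

def posterior(d):
    """Apply Bayes' rule: P(H | d) is proportional to P(d | H) P(H)."""
    unnorm = {h: priors[h] * likelihoods[h][d] for h in priors}
    z = sum(unnorm.values())              # evidence P(D = d)
    return {h: p / z for h, p in unnorm.items()}

post = posterior("d2")                    # (2) the full posterior P(H | d)
p_h1 = post["h1"]                         # (1) probability of one hypothesis
map_hypothesis = max(post, key=post.get)  # (3) a decision f_h(P(H | d)): MAP
print(post, map_hypothesis)
```

Here the decision rule f_h is taken to be maximum a posteriori (MAP) selection, one common choice among many.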
If H and D are continuous random variables, there are in general no closed-form expressions for computing inference (with the notable exception of Kalman filters). If, on the other hand, the generative model is based on discrete random variables, then although inference is computable, it is frequently intractable due to the so-called curse of dimensionality caused by the cardinality of the space of hypotheses H. Moreover, determining the actual computational expression for inference in the discrete case has been
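The curse of dimensionality mentioned above can be quantified with a short sketch: a joint distribution over n discrete variables, each with cardinality k, requires a table of k**n entries, so exact enumeration quickly becomes intractable. The values of k and n below are arbitrary assumptions for illustration:

```python
# Curse of dimensionality in discrete generative models: the joint
# distribution over n variables of cardinality k has k**n entries,
# so exhaustive enumeration grows exponentially with n.

k = 10  # cardinality of each discrete variable (hypothetical)
table_sizes = {n: k ** n for n in (2, 5, 10, 20)}
for n, size in table_sizes.items():
    print(f"{n:2d} variables -> {size:,} joint-table entries")
```

Even at a modest cardinality of 10, twenty variables already imply a joint table of 10^20 entries, which is why approximate and unconventional inference schemes are of interest.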
