Fully Probabilistic Design for Optimal Transport
Related papers
Optimization Models with Probabilistic Constraints
Probabilistic and Randomized Methods for Design under Uncertainty
This chapter presents an overview of the theory and numerical techniques for optimization models involving one or more constraints on probability functions. We focus on recent developments involving nonlinear probabilistic models. The theoretical foundation includes the theory and examples of generalized concavity for functions and measures, and some specific properties of probability distributions, including discrete distributions. We analyze the structure and properties of the constraining probabilistic functions and of the probabilistically constrained sets. An important part of the analysis is the development of algebraic constraints equivalent to the probabilistic ones. Optimality and duality theory for such models is presented. In the overview of numerical methods for solving probabilistic optimization problems, the emphasis is put on recent methods for nonlinear probabilistically constrained problems based on the optimality and duality theory presented here. The methods provide optimal solutions for convex problems; otherwise, they solve certain relaxations of the problem and yield suboptimal solutions together with upper and lower bounds on the optimal value. Special attention is paid to probabilistic constraints with discrete distributions. Some numerical approaches via statistical approximations are discussed as well. Numerical techniques for bounding probabilities in higher-dimensional spaces with satisfactory precision are mentioned briefly in the context of discrete distributions, and the application of combinatorial techniques in this context is sketched.
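To make the notion of a probabilistic (chance) constraint concrete, the sketch below estimates P(g(x, ξ) ≤ 0) by Monte Carlo sampling and compares it with a required probability level p. The linear form of g, the Gaussian distribution of ξ, and all names are illustrative assumptions, not taken from the chapter; an actual solver would rely on the algebraic equivalents or convex reformulations the chapter discusses rather than a sampling check.

```python
import numpy as np

# Illustrative chance constraint P(g(x, xi) <= 0) >= p with a linear
# g(x, xi) = <xi, x> - 1 and Gaussian xi; the form of g, the distribution,
# and all names below are assumptions made for this sketch only.
def g(x, xi):
    """Constraint function: a scenario xi is satisfied by x when g(x, xi) <= 0."""
    return xi @ x - 1.0

def chance_constraint_holds(x, p=0.95, n_samples=100_000, seed=0):
    """Monte Carlo estimate of P(g(x, xi) <= 0), compared against the level p."""
    rng = np.random.default_rng(seed)
    xi = rng.normal(loc=0.5, scale=0.2, size=(n_samples, x.size))  # sampled scenarios of xi
    prob = np.mean(g(x, xi) <= 0.0)                                # fraction of satisfied scenarios
    return prob >= p, prob

holds, prob = chance_constraint_holds(np.array([0.6, 0.6]))
print(f"estimated P(g(x, xi) <= 0) = {prob:.3f}, chance constraint holds: {holds}")
```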
Reasoning with Uncertainty in Continuous Domains
Continuous constraint programming has been widely used to model safe reasoning in applications where uncertainty arises. Constraint propagation propagates intervals of uncertainty among the variables of the problem, eliminating values that do not belong to any solution. However, to play safe, these intervals may be very wide and lead to poor propagation. We proposed a probabilistic continuous constraint framework that associates a probability space with the variables of the problem, making it possible to distinguish between different scenarios based on their likelihoods. In this paper we discuss the capabilities of the framework for decision support in nonlinear continuous problems with uncertain information. Its applicability is illustrated in inverse and reliability problems, two types of problems representative of the kind of reasoning required by decision makers.
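As a minimal sketch of the interval propagation mentioned above, the toy example below narrows the domains of two variables under the single constraint x + y = 1, eliminating values that cannot belong to any solution. It is an assumption-based illustration of hull-consistency-style narrowing, not the authors' framework.

```python
# Minimal sketch of interval propagation for one constraint x + y = c over
# initial boxes; intervals are (lo, hi) pairs. Illustrative only.
def intersect(a, b):
    """Intersection of two intervals given as (lo, hi) pairs."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo > hi:
        raise ValueError("empty intersection: no solution in the box")
    return (lo, hi)

def propagate_sum(x, y, c=1.0):
    """Narrow x and y under the constraint x + y = c."""
    # Since x = c - y, x must lie in [c - y_hi, c - y_lo]; symmetrically for y.
    x = intersect(x, (c - y[1], c - y[0]))
    y = intersect(y, (c - x[1], c - x[0]))
    return x, y

x, y = (0.0, 2.0), (0.0, 2.0)
x, y = propagate_sum(x, y)   # one propagation pass
print(x, y)                  # both narrowed to (0.0, 1.0); values above 1 are eliminated
```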
Axiomatisation of fully probabilistic design
Information Sciences, 2012
This text provides the background of fully probabilistic design (FPD) of decision-making strategies and shows that it is a proper extension of standard Bayesian decision making. FPD essentially minimises the Kullback-Leibler divergence of the closed-loop model from its ideal counterpart. The inspection of this background is important, as the current motivation for FPD is mostly a heuristic one, while the technical development of FPD confirms its far-reaching possibilities. FPD unifies and simplifies subtasks and elements of decision making under uncertainty. For instance, (i) both the system model and the decision preferences are expressed in a common probabilistic language; (ii) optimisation is simplified by the existence of an explicit minimiser in stochastic dynamic programming; (iii) the DM methodology for single and multiple aims is unified; (iv) a way is opened to completing and sharing the non-probabilistic and probabilistic knowledge and preferences met in knowledge and preference elicitation, as well as in unsupervised cooperation of decision makers.
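For reference, the criterion summarised in this abstract can be written out explicitly; the notation below is ours, a plain restatement rather than the paper's exact formulation. The closed-loop model f(d) of the behaviour d, induced by an admissible decision strategy, is compared with an ideal closed-loop model f^I(d), and FPD selects the strategy minimising their Kullback-Leibler divergence:

```latex
% FPD criterion (illustrative notation): closed-loop model f(d) vs. ideal closed-loop model f^I(d)
\mathcal{D}\!\left(f \,\middle\|\, f^{I}\right)
  \;=\; \int f(d)\,\ln\frac{f(d)}{f^{I}(d)}\,\mathrm{d}d ,
\qquad
\text{strategy}^{\mathrm{opt}} \;\in\; \arg\min_{\text{admissible strategies}}
  \mathcal{D}\!\left(f \,\middle\|\, f^{I}\right).
```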
Probabilistic Reasoning with Continuous Constraints
2007
Continuous constraint reasoning assumes that numerical variables are uncertain within given bounds and propagates such knowledge through a network of constraints, reducing the uncertainty. In some problems there is also information about the plausibility distribution of values within such bounds. However, the classical constraint framework cannot accommodate that information. This paper describes how the continuous constraint programming paradigm may be extended in order to accommodate some probabilistic considerations, bridging the gap between the pure interval-based approach, which does not consider likelihoods, and the pure stochastic approach, which does not guarantee the safety of the obtained results.
The Geometry of Uncertainty - The Geometry of Imprecise Probabilities
Artificial Intelligence: Foundations, Theory, and Algorithms, Springer Nature, 2021
The principal aim of this book is to introduce to the widest possible audience an original view of belief calculus and uncertainty theory. In this geometric approach to uncertainty, uncertainty measures can be seen as points of a suitably complex geometric space, and manipulated in that space, for example, combined or conditioned. In the chapters in Part I, Theories of Uncertainty, the author offers an extensive recapitulation of the state of the art in the mathematics of uncertainty. This part of the book contains the most comprehensive summary to date of the whole of belief theory, with Chap. 4 outlining for the first time, and in a logical order, all the steps of the reasoning chain associated with modelling uncertainty using belief functions, in an attempt to provide a self-contained manual for the working scientist. In addition, the book proposes in Chap. 5 what is possibly the most detailed compendium available of all theories of uncertainty. Part II, The Geometry of Uncertainty, is the core of this book, as it introduces the author's own geometric approach to uncertainty theory, starting with the geometry of belief functions: Chap. 7 studies the geometry of the space of belief functions, or belief space, both in terms of a simplex and in terms of its recursive bundle structure; Chap. 8 extends the analysis to Dempster's rule of combination, introducing the notion of a conditional subspace and outlining a simple geometric construction for Dempster's sum; Chap. 9 delves into the combinatorial properties of plausibility and commonality functions, as equivalent representations of the evidence carried by a belief function; Chap. 10 then starts extending the applicability of the geometric approach to other uncertainty measures, focusing in particular on possibility measures (consonant belief functions) and the related notion of a consistent belief function. The chapters in Part III, Geometric Interplays, are concerned with the interplay of uncertainty measures of different kinds and the geometry of their relationship, with a particular focus on the approximation problem. Part IV, Geometric Reasoning, examines the application of the geometric approach to the various elements of the reasoning chain illustrated in Chap. 4, in particular conditioning and decision making. Part V concludes the book by outlining a future, complete statistical theory of random sets and further extensions of the geometric approach, and by identifying high-impact applications to climate change, machine learning and artificial intelligence. The book is suitable for researchers in artificial intelligence, statistics, and applied science engaged with theories of uncertainty, and is supported by the most comprehensive bibliography on belief and uncertainty theory.
Special Issue: On the interface between optimization and probability
Mathematical Programming, 2020
Probability theory and the theory of optimization jointly form the theoretical basis of several other fields of research. Important examples are stochastic optimization and stochastic control, the theory of risk measures, stochastic optimization algorithms, and optimal decisions in statistics. This special issue is devoted to state-of-the-art contributions that combine optimization and probability theory in an innovative way. In particular, since the notion of risk is inherently probabilistic, controlling risk is important for all kinds of stochastic optimization approaches, and the theory of risk measures is deeply rooted in optimization, this issue puts an emphasis on the measurement and management of risk in a variety of contexts. A second aim of this volume is to honor Prof. Georg Ch. Pflug, who retired in September 2019 after a long and successful academic career, mainly at the University of Vienna, Austria, and formerly at the University of Gießen, Germany. As an eminent scientist, he contributed over many years with remarkable papers to the main topics of our special issue. His achievements include five scientific monographs, eight edited books, and more than a hundred scientific articles that have made a lasting impact on the theory of stochastic optimization, various fields of statistics, and risk measures, the latter probably being the area in which he made his most significant contributions.
A general model to handle uncertainty, based on coherent conditional probabilities
Any time we draw a partial or definite conclusion, or make a decision, within a frame of partial and revisable knowledge, we put into action a complex reasoning process. In this process we take into account our (qualitative and quantitative) information and the inference rules by means of which we extend that information to objects for which direct information is not available. This kind of reasoning, which is clearly non-monotonic, is at the basis of the processing of scientific theories, but also of many decision processes in which the decision maker has only partial knowledge of the objects involved, expressed moreover in different (sometimes informal) languages. During the last two decades, to handle the reasoning described above, Romano Scozzafava and I, with the important contribution of some of our pupils (in particular Barbara Vantaggi), developed a model of generalized inference based on the theory of coherent conditional probabilities (essentially introduced by de Finetti) and the relevant extension problems. This innovative paradigm relies on many theoretical tools that are fit to manage the uncertainty caused by different circumstances but reducible, in fact, to a lack of complete information. Moreover, it is able to handle both domains without a particular Boolean structure, with partial qualitative or comparative assessments, and the general knowledge acquisition process, which is in fact a dynamic enlargement both of the domain and of the qualitative or quantitative assessment of the degree of belief. It provides a frame suitable for a unitary handling of many theories present in the literature, both directly and indirectly. Directly: some uncertainty or information measures (possibilities, belief functions, generalized measures of information, fuzzy set theory) can be re-read as particular coherent conditional probabilities, and some inferential rules for non-monotonic logic (such as default logic) can be regarded as an extension process of particular assessments of coherent conditional probabilities. Indirectly: for general (non-additive) measures of uncertainty or information, the method of a direct approach to conditioning and the general point of view regarding inference can be used to solve many problems related to conditioning itself, to the related concept of independence, and to the relevant graphical structures. The theory is based on the most general concepts of event and conditional event, with a direct introduction of conditional probability as a function defined on conditional events satisfying a set of axioms. An event is any fact singled out by an (unambiguous) statement E, that is, a (Boolean) proposition that can be either true or false (corresponding to the two values 1 or 0 of the indicator I_E of E). A conditional event is any ordered pair of events (E|H), with H ≠ ∅; both E and H can be true or false and differ only in the role they play (H plays the role of hypothesis). It is in fact essential to regard also the conditioning
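The axioms referred to above can be stated compactly. The formulation below is the standard de Finetti-style axiomatisation of conditional probability on conditional events (E|H) with H ≠ ∅, given in our own notation as an illustration rather than as the paper's exact statement.

```latex
% De Finetti-style axioms for a conditional probability P(.|.) on conditional
% events (E|H), H \neq \emptyset; notation here is illustrative.
\begin{align*}
\text{(i)}\;\;   & P(H \mid H) = 1 \quad \text{for every admissible conditioning event } H;\\
\text{(ii)}\;\;  & P(\,\cdot \mid H) \text{ is a (finitely additive) probability for each fixed } H;\\
\text{(iii)}\;\; & P(E \wedge A \mid H) = P(E \mid H)\, P(A \mid E \wedge H)
                   \quad \text{whenever } E \wedge H \neq \emptyset .
\end{align*}
```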
Probabilistic Continuous Constraint Satisfaction Problems
2008
Constraint programming has been used to model safe reasoning in many applications where uncertainty arises. The goal of constraint propagation is to propagate intervals of uncertainty among the variables of the problem, thus eliminating only values that assuredly do not belong to any solution. However, to play safe, these intervals may be very wide and lead to poor propagation. In this paper we present a framework for probabilistic constraint solving that assumes that uncertain values are not all equally likely. Hence, in addition to the initial intervals, a priori probability distributions (within these intervals) are defined and propagated through the constraints. This provides a posteriori conditional probabilities for the values of the variables, thus enabling the user to select the most likely scenarios.
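A toy sketch of the a priori/a posteriori mechanism described in this abstract is given below; the triangular priors, the intervals, the constraint, and all names are illustrative assumptions rather than the authors' examples. The prior mass over the initial boxes is conditioned on the constraint, and the normalised mass gives posterior probabilities for regions of interest, which is what lets the user rank scenarios by likelihood.

```python
import numpy as np

# Toy illustration: prior distributions over initial intervals, conditioned on a
# constraint x**2 + y**2 <= 1; all choices below are illustrative assumptions.
rng = np.random.default_rng(0)
n = 200_000

# A priori: independent triangular priors on the initial intervals [0, 2] x [0, 2],
# with the mode at the lower end (smaller values deemed more likely).
x = rng.triangular(left=0.0, mode=0.0, right=2.0, size=n)
y = rng.triangular(left=0.0, mode=0.0, right=2.0, size=n)

feasible = x**2 + y**2 <= 1.0     # scenarios consistent with the constraint
p_feasible = feasible.mean()      # prior mass of the feasible region

# A posteriori conditional probability of a region of interest, e.g. x <= 0.5,
# given that the constraint holds.
p_region_given_feasible = np.mean((x <= 0.5) & feasible) / p_feasible

print(f"P(constraint holds)            ~ {p_feasible:.3f}")
print(f"P(x <= 0.5 | constraint holds) ~ {p_region_given_feasible:.3f}")
```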