The achievable region approach to the optimal control of stochastic systems
A relaxation technique to ensure feasibility in stochastic control with input and state constraints
arXiv: Optimization and Control, 2016
We consider a stochastic linear system and address the design of a finite-horizon control policy that is optimal according to some average cost criterion and also accounts for probabilistic constraints on both the input and state variables. This finite-horizon control problem formulation is quite common in the literature and lends itself to implementation in a receding-horizon fashion according to the model predictive control strategy. Such a possibility, however, is hampered by the fact that, if the disturbance has unbounded support, a feasibility issue may arise. In this paper, we address this issue by introducing a constraint relaxation that is effective only when the original problem turns out to be infeasible and, in that case, recovers feasibility as quickly as possible. This is obtained via a cascade of two probabilistically constrained optimization problems, which are solved here through a computationally tractable scenario-based scheme, providing an approximate solution…
Stochastic constrained control: Trading performance for state constraint feasibility
2013 European Control Conference (ECC), 2013
In this paper, we address finite-horizon control for a stochastic linear system subject to constraints on the control and state variables. A control design methodology is proposed where the appropriate trade-off between the minimization of the control cost (performance) and the satisfaction of the state constraints (safety) can be decided by introducing appropriate chance-constrained problems depending on some parameter to be tuned. From an algorithmic viewpoint, a computationally tractable randomized approach is proposed to find approximate solutions that are guaranteed to be feasible for the original chance-constrained problem. A numerical example concludes the paper.
Optimal control of ultimately bounded stochastic processes
Nagoya Mathematical Journal, 1974
We shall consider the optimal control of a system governed by a stochastic differential equation, where u(t, x) is an admissible control and W(t) is a standard Wiener process. By an optimal control we mean a control which minimizes the cost and, in addition, makes the corresponding Markov process stable.
Stochastic control with input and state constraints: A relaxation technique to ensure feasibility
2015 54th IEEE Conference on Decision and Control (CDC), 2015
We consider the problem of designing a finite-horizon control policy for a stochastic linear system subject to probabilistic constraints on both input and state variables. When the disturbance has unbounded support, a feasibility issue may arise due to the presence of the state constraint. In this paper, we address this issue by introducing a suitable relaxation of the original problem that ensures feasibility. The relaxation is such that the original state constraint is enforced whenever possible; otherwise, the control that pushes the state closest to the constraint is chosen. This involves formulating a cascade of two chance-constrained optimization problems, which are tackled through a scenario-based randomized scheme expressly tailored to the problem at hand. The theoretical properties of the obtained solution are investigated and it is shown that randomization allows one to achieve computational tractability. The proposed approach finds immediate application to stochastic model predictive control.
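Several of the entries above rely on the scenario approach: a chance constraint P(g(x, δ) ≤ 0) ≥ 1 − ε is replaced by the deterministic requirement that g(x, δᵢ) ≤ 0 hold for every one of N sampled disturbance realizations δᵢ. The sketch below is a generic, hypothetical one-variable illustration of this idea, not the algorithm of any of the papers listed; the problem data (the constraint (1 + δ)u ≤ 1 and the noise distribution) are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy scenario program with a single decision variable u:
#   maximize u   subject to   P( (1 + delta) * u <= 1 ) >= 1 - eps,
# with delta ~ N(0, 0.1^2)  (hypothetical problem data).
# Scenario approach: draw N samples of delta and enforce the constraint
# for every sampled realization; for this scalar problem the optimal
# decision under the sampled constraints has a closed form.
N = 500
delta = rng.normal(0.0, 0.1, size=N)
u_scenario = 1.0 / np.max(1.0 + delta)  # largest u feasible for all scenarios

# Empirical check on fresh samples: the probability of violating the
# original constraint at u_scenario is small (roughly of order 1/N).
delta_test = rng.normal(0.0, 0.1, size=100_000)
violation = np.mean((1.0 + delta_test) * u_scenario > 1.0)
print(u_scenario, violation)
```

Enforcing more scenarios makes the solution more conservative but feasible for the original chance constraint with higher confidence; the scenario-approach literature quantifies this trade-off as a function of N and the number of decision variables.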
A New Approach to Solving Stochastic Optimal Control Problems
Mathematics
A conventional approach to solving stochastic optimal control problems with time-dependent uncertainties involves the use of the stochastic maximum principle (SMP) technique. For large-scale problems, however, such an algorithm frequently leads to convergence complexities when solving the two-point boundary value problem resulting from the optimality conditions. An alternative approach consists of using continuous random variables to capture uncertainty through sampling-based methods embedded within an optimization strategy for the decision variables; such a technique may also fail due to the computational intensity involved in the excessive model evaluations needed to compute the objective function and its derivatives for each sample. This paper presents a new approach to solving stochastic optimal control problems with time-dependent uncertainties based on BONUS (Better Optimization algorithm for Nonlinear Uncertain Systems). BONUS has been used successfully for non-linear programming…
RePEc: Research Papers in Economics, 2011
In this contribution we propose an approach to solving a multistage stochastic programming problem that yields a time and nodal decomposition of the original problem. This double decomposition is achieved by applying a discrete-time optimal control formulation to the original stochastic programming problem in arborescent form. Combining the arborescent formulation of the problem with the optimal control viewpoint naturally yields, as a first result, the time decomposability of the optimality conditions, which can be organized, according to the terminology and structure of a discrete-time optimal control problem, into the systems of equations for the state and adjoint variable dynamics and the optimality conditions for the generalized Hamiltonian. Moreover, due to the arborescent formulation of the stochastic programming problem, these conditions further decompose with respect to the nodes of the event tree. The optimal solution is obtained by solving small decomposed subproblems and combining them through a mean-value fixed-point iterative scheme. To enhance convergence, we suggest an optimization step where the weights are chosen optimally at each iteration.
Stochastic Optimal Control Subject to Ambiguity
Proceedings of the 18th IFAC World Congress, 2011
The aim of this paper is to address optimality of control strategies for stochastic control systems subject to uncertainty and ambiguity. Uncertainty corresponds to the case when the true dynamics and the nominal dynamics are different but are defined on the same state space. Ambiguity corresponds to the case when the true dynamics are defined on a higher-dimensional state space than the nominal dynamics. The paper is motivated by a brief summary of existing methods dealing with optimality of stochastic systems subject to uncertainty, and a discussion of their shortcomings when stochastic systems are ambiguous. The issues to be discussed are the following: 1) modeling methods for ambiguous stochastic systems; 2) formulation of optimal stochastic control problems subject to ambiguity; 3) optimality criteria for ambiguous stochastic control systems.
New Approach to Stochastic Optimal Control
Journal of Optimization Theory and Applications, 2007
This paper provides new insights into the solution of optimal stochastic control problems by means of a system of partial differential equations which characterizes the optimal control directly. This new system is obtained by applying the stochastic maximum principle at every initial condition, assuming that the optimal controls are smooth enough. The problems considered are those where the diffusion coefficient is independent of the control variables, which are supposed to be interior to the control region. Keywords: optimal stochastic control; Itô's formula; Hamilton-Jacobi-Bellman equation; semilinear parabolic equation.
On Optimal Control of Stochastic Linear Hybrid Systems
Lecture Notes in Computer Science, 2016
Cyber-physical systems are often hybrid, consisting of both discrete and continuous subsystems. The continuous dynamics in cyber-physical systems can be noisy, and the environment in which these stochastic hybrid systems operate can also be uncertain. We focus on multimodal hybrid systems in which the switching from one mode to another is determined by a schedule; the optimal finite-horizon control problem is to discover the switching schedule, as well as the control inputs to be applied in each mode, such that some cost metric is minimized over the given horizon. We consider discrete-time control in this paper. We present a two-step approach to solving this problem with respect to convex cost objectives and probabilistic safety properties. Our approach uses a combination of sample average approximation and convex programming. We demonstrate the effectiveness of our approach on case studies from temperature control in buildings and motion planning.
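Sample average approximation (SAA), used in the entry above, replaces an expected cost E[f(x, ξ)] by its empirical mean over drawn samples of ξ, turning the stochastic problem into a deterministic one. The sketch below is a minimal, hypothetical scalar illustration of this substitution (the quadratic cost, noise distribution, and grid-search minimization are all made up for the example and are not the paper's method):

```python
import numpy as np

rng = np.random.default_rng(1)

# SAA for a toy expected-cost problem:
#   minimize_x  E[ (x - xi)^2 ],   xi ~ N(2, 1),
# whose true minimizer is x* = E[xi] = 2 (hypothetical problem data).
# Replace the expectation by the mean over M drawn samples and minimize
# the resulting deterministic objective (here, by a coarse grid search).
M = 10_000
xi = rng.normal(2.0, 1.0, size=M)

grid = np.linspace(0.0, 4.0, 401)
# Empirical cost at each grid point, averaged over the fixed sample set.
saa_cost = ((grid[:, None] - xi[None, :]) ** 2).mean(axis=1)
x_saa = grid[np.argmin(saa_cost)]
print(x_saa)  # close to the true minimizer 2.0
```

In practice the deterministic SAA problem is handed to a proper solver (convex programming in the paper above) rather than a grid search; the key point is that the same fixed sample set is reused across all candidate decisions, which makes the approximate objective deterministic and optimizable.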