An efficient numerical solution for time switching optimal control problems
Related papers
Optimal control problems with switching points
1991
The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given, and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated.
Proceedings of the 44th IEEE Conference on Decision and Control
We consider bang-bang control problems with state inequality constraints. It is shown that the control problem induces an optimization problem where the optimization vector consists of all switching times of the bang-bang control and junction times with boundary arcs. The induced optimization problem is a generalization of the one studied in [1], [19], [20], [22] for bang-bang controls without state constraints. We develop second order sufficient conditions (SSC) for the state-constrained control problem which require that (1) the SSC for the induced optimization problem are satisfied and (2) additional conditions for the switching function hold at switching and junction times. An optimization algorithm is presented which simultaneously carries out the second-order test. The algorithm is illustrated on a numerical example in cancer chemotherapy.
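The induced-optimization viewpoint and the accompanying second-order test can be illustrated on a toy problem (all numbers hypothetical, not taken from the paper): a double integrator x'' = u driven by the fixed bang-bang sequence u = -1 then u = +1 over a fixed horizon, where the single variable of the induced problem is the switching time s. An SSC-style check then verifies J''(s*) > 0 numerically:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical sketch: double integrator x'' = u, control u = -1 then u = +1,
# fixed horizon T = 2, one free switching time s. The induced optimization
# problem minimizes the terminal miss J(s) = ||x(T)||^2; the second-order
# test checks J''(s*) > 0 via a central finite difference.

T = 2.0

def J(s):
    x, v = 1.0, 0.0  # start at position 1, at rest
    for dur, u in ((s, -1.0), (T - s, +1.0)):
        # closed-form flow of the double integrator under constant u
        x, v = x + v * dur + 0.5 * u * dur**2, v + u * dur
    return x * x + v * v

res = minimize_scalar(J, bounds=(0.0, T), method="bounded")
h = 1e-4
J2 = (J(res.x - h) - 2.0 * J(res.x) + J(res.x + h)) / h**2
print(res.x, J2 > 0)  # s* is near 1, and the second-order test passes
```

For this data the optimal switch is at s = 1 (the terminal miss vanishes exactly), and the positive finite-difference curvature plays the role of the SSC for the one-dimensional induced problem.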
Optimal Control of Switched Systems Based on Parameterization of the Switching Instants
IEEE Transactions on Automatic Control, 2004
This paper presents a new approach for solving optimal control problems for switched systems. We focus on problems in which a prespecified sequence of active subsystems is given. For such problems, we need to seek both the optimal switching instants and the optimal continuous inputs. In order to search for the optimal switching instants, the derivatives of the optimal cost with respect to the switching instants need to be known. The most important contribution of the paper is a method which first transcribes an optimal control problem into an equivalent problem parameterized by the switching instants and then obtains the values of the derivatives based on the solution of a two-point boundary value differential algebraic equation formed by the state, costate, and stationarity equations, the boundary and continuity conditions, along with their differentiations. This method is applied to general switched linear quadratic problems, and an efficient method based on the solution of an initial value ordinary differential equation is developed. An extension of the method is also applied to problems with internally forced switching. Examples are shown to illustrate the results in the paper.
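A rough sketch of the switching-instant parameterization (the matrices, horizon, and cost below are invented for illustration; the paper computes exact cost derivatives via a boundary value DAE, whereas this sketch simply optimizes the parameterized cost directly):

```python
# Hypothetical example: switched linear system with a fixed mode sequence
# (subsystem 1, then subsystem 2); the only decision variable is the
# switching instant t1 in [0, T].
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

A1 = np.array([[0.0, 1.0], [-1.0, 0.0]])   # subsystem 1: a rotation
A2 = np.diag([-2.0, -0.1])                 # subsystem 2: anisotropic decay
x0 = np.array([0.0, 1.0])                  # initial state (in the slow axis)
T = 2.0                                    # fixed final time

def terminal_cost(t1):
    # State at T: flow under A1 for t1 seconds, then under A2 for T - t1.
    xT = expm(A2 * (T - t1)) @ expm(A1 * t1) @ x0
    return xT @ xT  # quadratic terminal cost ||x(T)||^2

res = minimize_scalar(terminal_cost, bounds=(0.0, T), method="bounded")
print(res.x, res.fun)
```

The optimal switch rotates the state into the fast-decaying axis before mode 2 takes over; the interior optimum beats switching at either endpoint of the horizon.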
Optimal Control Applications and Methods, 2005
It has been common practice to find controls satisfying only necessary conditions for optimality, and then to use these controls assuming that they are (locally) optimal. However, sufficient conditions need to be used to ascertain that the control rule is optimal. Second order sufficient conditions (SSC) which have recently been derived by Agrachev, Stefani, and Zezza, and by Maurer and Osmolovskii, are a special form of sufficient conditions which are particularly suited for numerical verification. In this paper we present optimization methods and describe a numerical scheme for finding optimal bang-bang controls and verifying SSC. A straightforward transformation of the bang-bang arc durations allows one to use standard optimal control software to find the optimal arc durations as well as to check SSC. The proposed computational verification technique is illustrated on three example applications. Copyright © 2005 John Wiley & Sons, Ltd.
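The arc-duration transformation can be sketched on the classic time-optimal double integrator (a hypothetical mini-example, not one of the paper's applications): the bang-bang structure u = -1 then u = +1 is fixed, and the two arc durations become the variables of a standard NLP with the terminal condition as an equality constraint:

```python
import numpy as np
from scipy.optimize import minimize

state0 = np.array([1.0, 0.0])  # start at position 1, at rest; target: origin

def propagate(d):
    # Closed-form flow of the double integrator x'' = u through two
    # bang-bang arcs: u = -1 for d[0] seconds, then u = +1 for d[1] seconds.
    x, v = state0
    for dur, u in zip(d, (-1.0, +1.0)):
        x, v = x + v * dur + 0.5 * u * dur**2, v + u * dur
    return np.array([x, v])  # terminal state, required to be zero

res = minimize(
    lambda d: d.sum(),                             # total time = sum of durations
    x0=np.array([0.5, 0.5]),                       # initial guess for the durations
    constraints={"type": "eq", "fun": propagate},  # hit the origin exactly
    bounds=[(0.0, None), (0.0, None)],
    method="SLSQP",
)
print(res.x)  # the analytic optimum is d = [1, 1], total time 2
```

Because the durations, not the switching times, are the decision variables, any NLP solver with equality constraints can be used, which is exactly what makes the transformation compatible with standard optimal control software.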
Springer Optimization and Its Applications, 2016
We survey the results on no-gap second order optimality conditions (both necessary and sufficient) in the Calculus of Variations and Optimal Control, that were obtained in the monographs [31] and [40], and discuss their further development. First, we formulate such conditions for broken extremals in the simplest problem of the Calculus of Variations and then, we consider them for discontinuous controls in optimal control problems with endpoint and mixed state-control constraints, considered on a variable time interval. Further, we discuss such conditions for bang-bang controls in optimal control problems, where the control appears linearly in the Pontryagin-Hamilton function with control constraints given in the form of a convex polyhedron. Bang-bang controls induce an optimization problem with respect to the switching times of the control, the so-called Induced Optimization Problem. We show that the second-order sufficient condition for the Induced Optimization Problem, together with the so-called strict bang-bang property, ensures second-order sufficient conditions for the bang-bang control problem. Finally, we discuss optimal control problems with mixed control-state constraints and control appearing linearly. Taking the mixed constraint as a new control variable we convert such problems to bang-bang control problems. The numerical verification of second-order conditions is illustrated on three examples.
QUADRATIC OPTIMAL CONTROL PROBLEMS WITH SHOOTING METHOD
OCP with shooting method, 2022
Control theory is an area of applied mathematics that deals with the principles, laws, and design of dynamic systems. Optimal control problems are a generalized form of variational problems. A very important tool in variational calculus is the notion of Gateaux differentiability; it is the basis for the development of necessary optimality conditions. The Euler-Lagrange differential equation (ELDE) is a necessary optimality condition for solving variational problems, and its solution is an extremal function of the variational problem. The characterizing theorem of convex optimization gives a necessary and sufficient condition for many convex problems. The necessary optimality conditions for (x∗(t), u∗(t)) to be an extremal solution of an optimal control problem are the validity of the Pontryagin minimum principle, of the ELDE with transversality conditions, and of the ODE conditions. To determine whether the extremals are optimal solutions of the OCP, sufficient optimality conditions are required (e.g., checking the convexity of the objective functional and the convexity of the feasible set). A QOCP is a nonlinear optimization problem in which the cost functional is quadratic and the differential equation is linear. In a quadratic control problem, since the objective functional is convex, the extremals are the optimal solutions of the problem. The linear QOCP is an important type of quadratic control problem that simplifies the design of feedback control systems. OCPs can be solved by different methods depending on the type of the problem. This paper mainly considers solving QOCPs by the method of Lagrange multipliers and the shooting method.
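A minimal shooting-method sketch for a scalar LQ problem (the problem data are chosen for illustration, not taken from the paper): minimize ½∫₀¹(x² + u²)dt subject to x' = u, x(0) = 1, x(1) free. The stationarity condition u = -p reduces the necessary conditions to the TPBVP x' = -p, p' = -x with transversality condition p(1) = 0, and shooting searches for the missing initial costate p(0):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# TPBVP from the first-order conditions of min (1/2)∫(x² + u²)dt, x' = u:
# eliminating u via the stationarity condition u = -p gives
#   x' = -p,  p' = -x,  x(0) = 1,  p(1) = 0 (transversality).

def shoot(p0):
    # integrate the state-costate system forward from a guessed p(0)
    sol = solve_ivp(lambda t, y: [-y[1], -y[0]], (0.0, 1.0), [1.0, p0],
                    rtol=1e-10, atol=1e-10)
    return sol.y[1, -1]  # residual: p(1), zero at the true costate

p0_star = brentq(shoot, 0.0, 2.0)  # bracket the missing initial costate
print(p0_star)  # analytic value is tanh(1) ≈ 0.7616
```

For this problem the TPBVP is linear, so the residual is linear in p(0) and the root-find converges immediately; for nonlinear dynamics the same structure applies but the bracket (or initial guess) matters.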
Optimal open loop control of dynamic systems using a "min-max" Hamiltonian method
Forschung im Ingenieurwesen, 2004
A new method for the computation of optimal open loop control of dynamic systems is presented. The new optimization method is derived from Pontryagin's Maximum Principle (PMP) and is formulated as a "min-max" Hamiltonian problem. The method can be applied to N-DOF dynamic systems without control limits (so-called u-limits) and succeeds, despite this, in achieving an optimal "bang-bang" type solution. The method is suitable for engineers, easy to apply, and is characterized by excellent performance and low CPU times. Numerical examples illustrate the proposed methodology as well as its computational advantages.
An iterative method for time optimal control of dynamic systems
Archives of Control Sciences, 2000
An iterative method for the time optimal control of a general class of dynamic systems with bounded control inputs is proposed. The method uses the indirect solution of the open-loop optimal control problem. The necessary conditions for optimality are derived from Pontryagin's minimum principle, and the obtained equations lead to a nonlinear two-point boundary value problem (TPBVP). Since there are many difficulties in finding the switching points and in solving the resulting TPBVP, a simple iterative method based on solving the minimum-energy problem is proposed. The method does not require locating the switching points, so the resulting TPBVP can be solved by standard algorithms such as shooting and collocation. Also, since the solution of TPBVPs is sensitive to the initial guess, a short procedure for constructing a proper initial guess is introduced. The accuracy and efficiency of the proposed method are demonstrated on time optimal solutions of several systems: a harmonic oscillator, a robotic arm, a double spring-mass problem with Coulomb friction, and the F-8 aircraft.
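The flavour of replacing a switching-point search with repeated minimum-energy solves can be conveyed by a toy computation (an illustration of the general idea only, not the paper's algorithm): for a rest-to-rest double integrator transfer, the unconstrained minimum-energy control over a fixed horizon is known in closed form, and one can bisect on the horizon length for the shortest horizon at which that smooth control respects the bound, with no switching points ever computed:

```python
# Toy illustration (not the paper's algorithm): double integrator x'' = u,
# rest-to-rest transfer from x = 1 to x = 0 over a horizon T. The
# unconstrained minimum-energy solution is the cubic
#   x(t) = 1 - 3(t/T)^2 + 2(t/T)^3,   u(t) = x''(t) = -6/T^2 + 12 t/T^3,
# whose magnitude peaks at |u| = 6/T^2 (at t = 0 and t = T).
# Bisect on T for the shortest horizon whose minimum-energy control
# satisfies |u| <= 1, instead of hunting for bang-bang switching points.

def peak_control(T):
    return 6.0 / T**2  # max |u(t)| of the minimum-energy solution

lo, hi = 0.1, 10.0  # bracket: bound violated at lo, satisfied at hi
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if peak_control(mid) <= 1.0:
        hi = mid  # control limit satisfied: try a shorter horizon
    else:
        lo = mid
print(hi)  # converges to sqrt(6) ≈ 2.449
```

Each inner "solve" here is trivial because the minimum-energy problem has a closed form; in the paper's setting the inner problem is a smooth TPBVP that standard shooting or collocation handles, which is precisely the point of the substitution.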