The vanishing discount approach for the average continuous control of piecewise deterministic Markov processes
2008 IEEE Conference on Decision and Control (CDC 2008)
Abstract
The main goal of this paper is to derive sufficient conditions for the existence of an optimal control strategy for the long run average continuous control problem of piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. To do so, we apply the so-called vanishing discount approach (see [16], page 83) to obtain a solution to an average cost optimality inequality associated with the long run average cost problem. Our main assumptions are stated in terms of integro-differential inequalities related to the so-called expected growth condition, and of geometric convergence of the post-jump location kernel associated with the PDMP.
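As background for the abstract, the sketch below records the discrete-time vanishing discount scheme of [16] in generic notation; the symbols $J_\alpha$, $h_\alpha$, $\rho$, $h$, $c$, $Q$ and $A(x)$ are placeholders chosen here for illustration, not the paper's own objects. The paper adapts this idea to the continuous-time PDMP setting, where the discount rate tends to zero and the one-step kernel is replaced by quantities built from the post-jump location kernel.

```latex
% Vanishing-discount schematic in the discrete-time form of [16]
% (a sketch under generic assumptions, not the paper's exact conditions).
% J_alpha: alpha-discounted optimal cost, alpha in (0,1); x_0: a fixed
% reference state; c: one-stage cost; Q: transition kernel; A(x): action set.
\[
  h_\alpha(x) := J_\alpha(x) - J_\alpha(x_0),
  \qquad
  \rho_\alpha := (1-\alpha)\, J_\alpha(x_0).
\]
% Under growth and equicontinuity-type conditions, a limit pair (rho, h) of
% (rho_alpha, h_alpha) along a sequence alpha_n -> 1 satisfies the average
% cost optimality inequality (ACOI):
\[
  \rho + h(x) \;\ge\; \min_{a \in A(x)}
  \Big[\, c(x,a) + \int_X h(y)\, Q(dy \mid x,a) \Big].
\]
% Any measurable selector attaining the minimum defines a stationary
% strategy whose long run average cost equals rho.
```

Under the usual conditions, the inequality alone suffices: a stationary policy attaining the minimum pointwise already achieves the optimal long-run average cost, which is why the vanishing discount route only needs to produce an ACOI rather than the full optimality equation.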
References (22)
- A. Almudevar. A dynamic programming algorithm for the optimal control of piecewise deterministic Markov processes. SIAM J. Control Optim., 40(2):525-539, 2001.
- D.P. Bertsekas and S.E. Shreve. Stochastic optimal control, volume 139 of Mathematics in Science and Engineering. Academic Press Inc., New York, 1978. The discrete time case.
- O.L.V. Costa. Average impulse control of piecewise deterministic processes. IMA J. Math. Control Inform., 6(4):375-397, 1989.
- O.L.V. Costa and F. Dufour. Average control of piecewise deterministic Markov processes. arXiv:0809.0477v1, 34 pages, 2008. Available at http://arxiv.org/abs/0809.0477.
- O.L.V. Costa and F. Dufour. Relaxed long run average continuous control of piecewise deterministic Markov processes. In Proceedings of the European Control Conference, pages 5052-5059, Kos, Greece, July 2007.
- M.H.A. Davis. Piecewise-deterministic Markov processes: A general class of non-diffusion stochastic models. J. Royal Statistical Soc. (B), 46:353-388, 1984.
- M.H.A. Davis. Control of piecewise-deterministic processes via discrete-time dynamic programming. In Stochastic differential systems (Bad Honnef, 1985), volume 78 of Lecture Notes in Control and Inform. Sci., pages 140-150. Springer, Berlin, 1986.
- M.H.A. Davis. Markov Models and Optimization. Chapman and Hall, London, 1993.
- M.A.H. Dempster and J.J. Ye. Necessary and sufficient optimality conditions for control of piecewise deterministic processes. Stochastics and Stochastics Reports, 40:125-145, 1992.
- M.A.H. Dempster and J.J. Ye. Generalized Bellman-Hamilton-Jacobi optimality conditions for a control problem with boundary conditions. Appl. Math. Optimization, 33:211-225, 1996.
- E.B. Dynkin and A.A. Yushkevich. Controlled Markov processes, volume 235 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, 1979.
- L. Forwick, M. Schäl, and M. Schmitz. Piecewise deterministic Markov control processes with feedback controls and unbounded costs. Acta Appl. Math., 82(3):239-267, 2004.
- D. Gatarek. Impulsive control of piecewise-deterministic processes with long run average cost. Stochastics Stochastics Rep., 45(3-4):127-143, 1993.
- X. Guo and U. Rieder. Average optimality for continuous-time Markov decision processes in Polish spaces. The Annals of Applied Probability, 16:730-756, 2006.
- X. Guo and Q. Zhu. Average optimality for Markov decision processes in Borel spaces: A new condition and approach. Journal of Applied Probability, 43:318-334, 2006.
- O. Hernández-Lerma and J.B. Lasserre. Discrete-time Markov control processes, volume 30 of Applications of Mathematics. Springer-Verlag, New York, 1996. Basic optimality criteria.
- O. Hernández-Lerma and J.B. Lasserre. Further topics on discrete-time Markov control processes, volume 42 of Applications of Mathematics. Springer-Verlag, New York, 1999.
- M. Schäl. On piecewise deterministic Markov control processes: control of jumps and of risk processes in insurance. Insurance Math. Econom., 22(1):75-91, 1998.
- D.V. Widder. The Laplace Transform. Princeton Mathematical Series, v. 6. Princeton University Press, Princeton, N. J., 1941.
- A.A. Yushkevich. Bellman inequalities in Markov decision deterministic drift processes. Stochastics, 23:235-274, 1987.
- A.A. Yushkevich. Verification theorems for Markov decision processes with controlled deterministic drift and gradual and impulsive controls. Theory Probab. Appl., 34(3):474-496, 1989.
- Q. Zhu. Average optimality for continuous-time Markov decision processes with a policy iteration approach. Journal of Mathematical Analysis and Applications, 339:691-704, 2008.