Steady-State Planning in Expected Reward Multichain MDPs
Related papers
Verifiable Planning in Expected Reward Multichain MDPs
ArXiv, 2020
Steady-State Policy Synthesis in Multichain Markov Decision Processes
2020
Optimal control of MDPs with temporal logic constraints
52nd IEEE Conference on Decision and Control, 2013
Properties of Planning with Non-Markovian Rewards
Compact Mathematical Programs For DEC-MDPs With Structured Agent Interactions
dis.cs.umass.edu
On occupation measures for total-reward MDPs
47th IEEE Conference on Decision and Control, 2008
Solving Relational MDPs with Exogenous Events and Additive Rewards
Lecture Notes in Computer Science, 2013
Planning with deadlines in stochastic domains
1993
A Geometric Approach to Find Nondominated Policies to Imprecise Reward MDPs
Lecture Notes in Computer Science, 2011
Decision-theoretic planning: Structural assumptions and computational leverage
Craig Boutilier, Thomas Dean, Steve Hanks
Journal of Artificial Intelligence Research, 1999
Anytime planning for decentralized POMDPs using expectation maximization
2012
Coordinated plan management using multiagent MDPs
Proceedings of the 2006 …, 2006
A framework for planning in continuous-time stochastic domains
2003
Planning with Macro-Actions in Decentralized POMDPs
International Conference on Autonomous Agents & Multiagent Systems
Optimally Solving Dec-POMDPs as Continuous-State MDPs
IJCAI, 2013
Solving Multi-agent MDPs Optimally with Conditional Return Graphs
Decision-Theoretic Planning with non-Markovian Rewards
Journal of Artificial Intelligence Research, 2006
Symbolic Dynamic Programming for Continuous State and Action MDPs
Proceedings of the AAAI Conference on Artificial Intelligence
Process-oriented planning and average-reward optimality
1995
Modeling and Planning with Macro-Actions in Decentralized POMDPs
Journal of Artificial Intelligence Research, 2019
Exploiting separability in multiagent planning with continuous-state MDPs
Symbolic dynamic programming for first-order MDPs
International Joint Conference on Artificial Intelligence, 2001
Optimally solving Dec-POMDPs as Continuous-State MDPs: Theory and Algorithms
HAL (Le Centre pour la Communication Scientifique Directe), 2014
Steady-State Policy Synthesis for Verifiable Control
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, 2019
Policy generation for continuous-time stochastic domains with concurrency
ICAPS'04, 2004
Markov Decision Processes with Long-Term Average Constraints
ArXiv, 2021
Probabilistic planning via linear value-approximation of first-order MDPs
2005
Planning under time constraints in stochastic domains
Artificial Intelligence, 1995