Kernel-Based Approximate Dynamic Programming Using Bellman Residual Elimination
Massachusetts Institute of Technology, Cambridge
Many sequential decision-making problems related to multi-agent robotic systems can be naturally posed as Markov Decision Processes (MDPs). An important advantage of the MDP framework is the ability to utilize stochastic system models, thereby allowing the system to make sound decisions even when there is randomness in the system evolution over time. Unfortunately, the curse of dimensionality prevents most MDPs of practical size from being solved exactly. One main focus of the thesis is on the development of a new family of algorithms for computing approximate solutions to large-scale MDPs. Our algorithms are similar in spirit to Bellman residual methods, which attempt to minimize the error incurred in solving Bellman's equation at a set of sample states. However, by exploiting kernel-based regression techniques, such as support vector regression and Gaussian process regression with nondegenerate kernel functions, as the underlying cost-to-go function approximation architecture, our algorithms are able to construct cost-to-go solutions for which the Bellman residuals are explicitly forced to zero at the sample states. For this reason, we have named our approach Bellman residual elimination (BRE). In addition to developing the basic ideas behind BRE, we present multi-stage and model-free extensions to the approach. The multi-stage extension allows for automatic selection of an appropriate kernel for the MDP at hand, while the model-free extension can use simulated or real state trajectory data to learn an approximate policy when a system model is unavailable.
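The core idea of forcing Bellman residuals to zero at sample states can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the thesis's algorithm: it evaluates a fixed policy on a hypothetical five-state chain MDP, represents the cost-to-go as a Gaussian-kernel expansion over the sample states, and solves the linear system that makes the Bellman residual exactly zero at each sample state.

```python
import numpy as np

# Hypothetical toy MDP: a 5-state chain under a fixed policy.
# P[j, s'] = transition probability from state j; g[j] = stage cost.
n = 5
states = np.arange(n, dtype=float)
P = np.zeros((n, n))
for i in range(n - 1):
    P[i, i + 1] = 0.8   # advance along the chain
    P[i, i] = 0.2       # stay put
P[-1, -1] = 1.0         # absorbing terminal state
g = np.ones(n)
g[-1] = 0.0             # no cost once absorbed
alpha = 0.9             # discount factor

def kernel(x, y, width=1.0):
    """Nondegenerate Gaussian (RBF) kernel."""
    return np.exp(-(x - y) ** 2 / (2 * width ** 2))

# Represent the cost-to-go as J(s) = sum_i lam[i] * k(s, s_i) over sample states.
S = states                                 # sample states (here: all states)
K = kernel(S[:, None], S[None, :])         # K[j, i] = k(s_j, s_i)
PK = P @ K                                 # PK[j, i] = E[k(s', s_i) | s_j]

# Bellman residual elimination condition at each sample state s_j:
#   sum_i lam[i] * (k(s_j, s_i) - alpha * E[k(s', s_i)]) = g(s_j)
lam = np.linalg.solve(K - alpha * PK, g)

J = K @ lam                                # cost-to-go at the sample states
residual = J - (g + alpha * P @ J)         # Bellman residuals
print(np.abs(residual).max())              # numerically zero at sample states
```

Because the linear system enforces the Bellman equation exactly at every sample state, the residuals vanish there (up to floating-point error), which is the defining property of BRE; the kernel expansion then interpolates the cost-to-go between sample states.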