An Evolutionary Random Policy Search Algorithm for Solving Markov Decision Processes
University of Maryland, College Park, United States
This paper presents a new randomized search method, called Evolutionary Random Policy Search (ERPS), for solving infinite-horizon discounted-cost Markov Decision Process (MDP) problems. The algorithm is particularly targeted at problems with large or uncountable action spaces. ERPS approaches a given MDP by iteratively dividing it into a sequence of smaller, random sub-MDP problems constructed from information obtained through random sampling of the entire action space and through local search. Each sub-MDP is then solved approximately using a variant of the standard policy improvement technique, yielding an elite policy. We show that the sequence of elite policies converges to an optimal policy with probability one. An adaptive version of the algorithm, which improves the efficiency of the search process while maintaining the convergence properties of ERPS, is also proposed. Numerical studies are carried out to illustrate the algorithm and to compare it with existing procedures.
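The iterative structure described above can be sketched in code. The following is a minimal illustration on a toy finite-state MDP with a large discrete action space; the problem data, sample sizes, and the nearest-neighbor-style "local search" rule are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

n_states = 4
n_actions = 100   # stand-in for a "large" action space (assumption)
gamma = 0.9       # discount factor

# Synthetic but fixed transition kernel P[s, a] (a distribution over next
# states) and one-stage cost C[s, a]; purely illustrative data.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
C = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def policy_value(pi):
    """Exactly evaluate a deterministic policy pi (one action per state)."""
    Ppi = P[np.arange(n_states), pi]   # n_states x n_states
    cpi = C[np.arange(n_states), pi]
    return np.linalg.solve(np.eye(n_states) - gamma * Ppi, cpi)

def solve_sub_mdp(actions_per_state):
    """Policy iteration restricted to the sampled sub-action sets."""
    pi = np.array([acts[0] for acts in actions_per_state])
    while True:
        v = policy_value(pi)
        new_pi = pi.copy()
        for s, acts in enumerate(actions_per_state):
            q = C[s, acts] + gamma * P[s, acts] @ v
            new_pi[s] = acts[int(np.argmin(q))]
        if np.array_equal(new_pi, pi):
            return pi, v
        pi = new_pi

def erps(n_iters=20, n_random=5, n_local=3):
    """Iterate: build a random sub-MDP around the elite policy, solve it,
    and keep the resulting policy as the new elite."""
    elite = rng.integers(n_actions, size=n_states)
    for _ in range(n_iters):
        subs = []
        for s in range(n_states):
            # Sub-MDP action set: the elite action, a crude local
            # perturbation of it (assumed neighborhood on action indices),
            # and fresh uniform samples from the whole action space.
            local = (elite[s] + rng.integers(-n_local, n_local + 1)) % n_actions
            cand = np.unique(np.r_[elite[s], local,
                                   rng.integers(n_actions, size=n_random)])
            subs.append(cand)
        elite, v = solve_sub_mdp(subs)
    return elite, v

pi_elite, v_elite = erps()
```

Because each sub-MDP always contains the current elite policy's actions, the elite policy's cost is monotonically non-increasing across iterations, which is the mechanism behind the convergence result stated in the abstract.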