Learning Representation and Control in Markov Decision Processes
Final report, 1 Aug 2010 to 31 Jul 2013
MASSACHUSETTS UNIV AMHERST DEPT OF COMPUTER SCIENCE
This research investigated algorithms for approximately solving Markov decision processes (MDPs), a widely used model of sequential decision making. Much past work on solving MDPs with adaptive dynamic programming and reinforcement learning has assumed that representations, such as basis functions, are provided by a human expert. The research investigated a variety of approaches to automatic basis construction, including reward-sensitive and reward-invariant methods, diagonalization and dilation methods, and orthogonal and overcomplete representations. A unifying perspective on these basis construction methods emerges from showing that they result from different power series expansions of value functions, including the Neumann series expansion, the Laurent series expansion, and the Schultz expansion. The research also developed new computational algorithms for learning sparse solutions to MDPs using convex optimization methods.
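As a minimal illustration of one of the expansions mentioned above, the sketch below shows the Neumann series view of a policy's value function: V = (I - gamma P)^{-1} r = sum over k of gamma^k P^k r. The 3-state transition matrix, rewards, and discount factor here are invented for the example and are not taken from the report.

```python
import numpy as np

gamma = 0.9
# Illustrative row-stochastic transition matrix under a fixed policy
P = np.array([[0.5, 0.5, 0.0],
              [0.1, 0.6, 0.3],
              [0.0, 0.2, 0.8]])
# Illustrative expected one-step rewards
r = np.array([1.0, 0.0, 2.0])

# Exact value function from the Bellman equation: V = r + gamma * P V
V_exact = np.linalg.solve(np.eye(3) - gamma * P, r)

# Neumann series: the partial sums of gamma^k P^k r converge to V
# because the spectral radius of gamma * P is below 1.
V_series = np.zeros(3)
term = r.copy()
for _ in range(500):
    V_series += term
    term = gamma * (P @ term)

print(np.allclose(V_exact, V_series))  # prints True
```

Basis construction methods built on this expansion can be read as truncating or compressing such a series; the exact solve above serves only as a reference point for the toy example.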
APPROVED FOR PUBLIC RELEASE