Accession Number:

ADA617226

Title:

Exploiting Multi-Step Sample Trajectories for Approximate Value Iteration

Descriptive Note:

Conference paper

Corporate Author:

AIR FORCE RESEARCH LAB ROME NY INFORMATION DIRECTORATE

Report Date:

2013-09-01

Pagination or Media Count:

17

Abstract:

Approximate value iteration methods for reinforcement learning (RL) generalize experience from limited samples across large state-action spaces. The function approximators used in such methods typically introduce errors in value estimation, which can harm the quality of the learned value functions. We present a new batch-mode, off-policy, approximate value iteration algorithm called Trajectory Fitted Q-Iteration (TFQI). This approach uses the sequential relationship between samples within a trajectory (a set of samples gathered sequentially from the problem domain) to lessen the adverse influence of approximation errors while deriving long-term value. We provide a detailed description of the TFQI approach and an empirical study that analyzes the impact of our method on two well-known RL benchmarks. Our experiments demonstrate that this approach has significant benefits, including better learned-policy performance, improved convergence, and, in some cases, decreased sensitivity to the choice of function approximator.
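
The abstract does not spell out TFQI's update rule, so the following Python sketch illustrates one plausible reading: standard batch-mode fitted Q-iteration (here using scikit-learn's ExtraTreesRegressor as the function approximator) extended so that each sample's regression target can also draw on the target already computed for its successor sample within the same trajectory. The function name tfqi_sketch, the max-based within-trajectory backup, and all hyperparameters are illustrative assumptions, not the paper's definitive algorithm.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

def tfqi_sketch(trajectories, n_actions, gamma=0.99, n_iters=50):
    """Batch-mode fitted Q-iteration over trajectory-ordered samples.

    trajectories: list of episodes, each a list of
    (state, action, reward, next_state, done) tuples in time order.
    """
    # Flatten samples once; regression inputs are (state, action) pairs.
    flat = [step for traj in trajectories for step in traj]
    X = np.array([np.append(s, a) for (s, a, r, s2, d) in flat])
    model = None

    def fitted_max_q(state):
        # max_a Q_hat(state, a) under the current approximator.
        feats = np.array([np.append(state, a) for a in range(n_actions)])
        return model.predict(feats).max()

    for _ in range(n_iters):
        all_targets = []
        for traj in trajectories:
            targets = np.empty(len(traj))
            successor_target = None  # target of the next sample in this trajectory
            # Walk backwards so each sample can see its successor's target.
            for i in reversed(range(len(traj))):
                s, a, r, s2, done = traj[i]
                fitted = 0.0 if model is None else fitted_max_q(s2)
                if done:
                    boot = 0.0
                elif successor_target is None:
                    boot = fitted  # final sample of a truncated trajectory
                else:
                    # Assumed multi-step backup: prefer the larger of the
                    # approximator's estimate and the value propagated
                    # along the observed trajectory.
                    boot = max(fitted, successor_target)
                targets[i] = r + gamma * boot
                successor_target = targets[i]
            all_targets.append(targets)
        y = np.concatenate(all_targets)
        # Refit the approximator on the updated regression targets,
        # as in standard fitted Q-iteration.
        model = ExtraTreesRegressor(n_estimators=50).fit(X, y)
    return model
```

Taking the max against the trajectory-propagated value treats the observed return as a lower bound on the successor state's value, which is one way the approximator's underestimation errors could be kept from compounding across iterations.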

Subject Categories:

  • Operations Research
  • Cybernetics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE