Accession Number:

ADA585093

Title:

Distributed Reinforcement Learning for Policy Synchronization in Infinite-Horizon Dec-POMDPs

Descriptive Note:

Technical rept.

Corporate Author:

UNIVERSITY OF SOUTHERN MISSISSIPPI HATTIESBURG SCHOOL OF COMPUTING

Personal Author(s):

Report Date:

2012-01-01

Pagination or Media Count:

11

Abstract:

In many multi-agent tasks, agents face uncertainty about the environment, the outcomes of their actions, and the behaviors of other agents. Decentralized partially observable Markov decision processes (Dec-POMDPs) offer a powerful framework for modeling sequential, cooperative, multi-agent tasks under uncertainty. Existing solution techniques for infinite-horizon Dec-POMDPs have assumed prior knowledge of the model and have required centralized solvers. We propose a method for learning Dec-POMDP solutions in a distributed fashion. We identify the policy-synchronization problem that distributed learners face and propose incorporating rewards into their learned model representations to ameliorate it. Most importantly, we show that exploiting the information contained in reward signals during learning remains beneficial even when rewards are not visible to agents during policy execution.
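The idea of folding rewards into a learner's internal representation can be sketched as follows. This is an illustrative toy, not the report's actual algorithm: a tabular learner whose state is a short history of (observation, discretized-reward) pairs, so that independent learners which happen to see identical observation histories can still be disambiguated by the rewards they received during learning. All class and method names, the history length, and the reward discretization are assumptions made for this sketch.

```python
from collections import defaultdict
import random

class DistributedLearner:
    """Toy sketch of reward-augmented state for an independent learner.

    The internal state is a truncated history of (observation, reward
    bucket) pairs. Rewards are folded in only during learning; the
    resulting policy is a lookup over these augmented states, so it can
    be executed even if rewards are hidden at execution time.
    """

    def __init__(self, actions, history_len=2, alpha=0.1, epsilon=0.1, seed=0):
        self.actions = actions
        self.history_len = history_len
        self.alpha = alpha            # learning rate
        self.epsilon = epsilon        # exploration rate
        self.q = defaultdict(float)   # Q[(state, action)] -> value estimate
        self.history = ()             # recent (obs, reward_bucket) pairs
        self.rng = random.Random(seed)

    def observe(self, obs, reward):
        # Fold the reward signal into the learned state representation
        # via a coarse discretization (an assumption of this sketch).
        bucket = round(reward)
        self.history = (self.history + ((obs, bucket),))[-self.history_len:]

    def act(self):
        # Epsilon-greedy action selection over the reward-augmented state.
        if self.rng.random() < self.epsilon:
            return self.rng.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(self.history, a)])

    def update(self, action, reward):
        # Simple running-average update toward the observed reward.
        key = (self.history, action)
        self.q[key] += self.alpha * (reward - self.q[key])
```

In a cooperative setting, each agent would run its own `DistributedLearner` instance with no shared memory; the reward bucket acts as a common signal that keeps their learned representations loosely aligned.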

Subject Categories:

  • Administration and Management
  • Government and Political Science
  • Psychology
  • Statistics and Probability

Distribution Statement:

APPROVED FOR PUBLIC RELEASE