Reinforcement Learning from Human Reward: Discounting in Episodic Tasks
Journal Article - Open Access
University of Texas at Austin, Austin, United States
Several studies have demonstrated that teaching agents via human-generated reward can be a powerful technique. However, the algorithmic space for learning from human reward has not yet been explored systematically. Using model-based reinforcement learning from human reward in goal-based, episodic tasks, we investigate how anticipated future rewards should be discounted to create behavior that performs well on the task the human trainer intends to teach. We identify a positive circuits problem with low discounting (i.e., high discount factors) that arises from an observed bias among humans toward giving positive reward. Empirical analyses indicate that high discounting (i.e., low discount factors) of human reward is necessary in goal-based, episodic tasks and lend credence to the existence of the positive circuits problem.
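The intuition behind the positive circuits problem can be sketched numerically. The sketch below uses hypothetical values (a loop yielding a small positive human reward each step, and a one-time reward for reaching the goal); these numbers are illustrative assumptions, not figures from the paper. The discounted return of endlessly circling a positive-reward loop approaches r / (1 - gamma), so with low discounting (gamma near 1) looping can dominate goal-reaching behavior:

```python
def loop_return(r, gamma, steps=10_000):
    """Approximate discounted return of repeatedly traversing a loop
    that yields positive reward r per step: sum of r * gamma**t,
    which converges to r / (1 - gamma) for gamma < 1."""
    return sum(r * gamma**t for t in range(steps))

r = 0.1            # hypothetical small positive reward per loop step
goal_return = 1.0  # hypothetical one-time reward for reaching the goal

for gamma in (0.1, 0.7, 0.99):
    ret = loop_return(r, gamma)
    # With high discounting (low gamma) the loop is worth less than the
    # goal; with low discounting (high gamma) the loop wins.
    print(f"gamma={gamma}: loop return ~ {ret:.3f}, beats goal: {ret > goal_return}")
```

Under these assumed values, only gamma = 0.99 makes the positive-reward circuit more valuable than reaching the goal, mirroring why high discounting is needed when human trainers give predominantly positive reward.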