Accession Number : ADA360982
Title : Layered Learning in Multi-Agent Systems
Descriptive Note : Doctoral thesis
Corporate Author : CARNEGIE-MELLON UNIV PITTSBURGH PA SCHOOL OF COMPUTER SCIENCE
Personal Author(s) : Stone, Peter H.
Report Date : 15 DEC 1998
Pagination or Media Count : 247
Abstract : Multi-agent systems in complex, real-time domains require agents to act effectively both autonomously and as part of a team. This dissertation addresses multi-agent systems consisting of teams of autonomous agents acting in real-time, noisy, collaborative, and adversarial environments. Because of the inherent complexity of this type of multi-agent system, this thesis investigates the use of machine learning within multi-agent systems. The dissertation makes four main contributions to the fields of Machine Learning and Multi-Agent Systems. First, the thesis defines a team member agent architecture within which a flexible team structure is presented, allowing agents to decompose the task space into flexible roles and to smoothly switch roles while acting. Team organization is achieved by the introduction of a locker-room agreement, a collection of conventions followed by all team members that defines agent roles, team formations, and pre-compiled multi-agent plans. In addition, the team member agent architecture includes a communication paradigm for domains with single-channel, low-bandwidth, unreliable communication. The communication paradigm facilitates team coordination while being robust to lost messages and active interference from opponents. Second, the thesis introduces layered learning, a general-purpose machine learning paradigm for complex domains in which learning a mapping directly from agents' sensors to their actuators is intractable. Given a hierarchical task decomposition, layered learning allows for learning at each level of the hierarchy, with learning at each level directly affecting learning at the next higher level. Third, the thesis introduces a new multi-agent reinforcement learning algorithm, namely team-partitioned, opaque-transition reinforcement learning (TPOT-RL). TPOT-RL is designed for domains in which agents cannot necessarily observe the state changes caused by other team members' actions. Fourth, the thesis contributes a fully implemented multi-agent system that incorporates learning in a real-time, noisy domain with teammates and adversaries, developed and empirically evaluated in simulated robotic soccer (RoboCup).
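The layered learning idea described in the abstract can be illustrated with a minimal sketch: a low-level skill is learned first, then frozen, and its output becomes an input feature for the next level of the hierarchy. The tasks, features, and simple mean-threshold learner below are hypothetical illustrations, not the learners used in the thesis.

```python
# Layered learning sketch: train each level separately; the learned
# output of one level becomes an input feature of the next level.

def train_threshold(examples):
    """Learn a 1-D threshold classifier: midpoint between class means."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    t = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    return lambda x: 1 if x >= t else 0

# Level 1: learn a low-level skill from a raw sensor value
# (hypothetical example: "is the ball close enough to intercept?").
layer1_data = [(0.2, 0), (0.3, 0), (0.7, 1), (0.9, 1)]
skill = train_threshold(layer1_data)

# Level 2: the trained level-1 skill is frozen and used to generate
# the feature for a higher-level decision (e.g., "should I pass?").
raw_situations = [(0.25, 0), (0.8, 1), (0.1, 0), (0.95, 1)]
layer2_data = [(skill(x), y) for x, y in raw_situations]
decide = train_threshold(layer2_data)

print(decide(skill(0.85)))  # high-level decision built on the learned skill
```

The key design point the sketch preserves is that level 2 never sees the raw sensor value directly; it learns over the abstraction produced by level 1, which is what makes the otherwise intractable sensor-to-actuator mapping tractable.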
Descriptors : *ROBOTICS , *NEURAL NETS , *LEARNING MACHINES , *ARTIFICIAL INTELLIGENCE , IMAGE PROCESSING , DISTRIBUTED DATA PROCESSING , COMPUTER COMMUNICATIONS , REAL TIME , THESES , GAME THEORY , CLIENT SERVER SYSTEMS.
Subject Categories : CYBERNETICS
Distribution Statement : APPROVED FOR PUBLIC RELEASE