Learning in Large-Scale Games and Cooperative Control
CALIFORNIA UNIV BERKELEY DEPT OF MECHANICAL ENGINEERING
Many engineering systems can be characterized as a large-scale collection of interacting subsystems, each having access to local information, making local decisions, having local interactions with neighbors, and seeking to optimize local objectives that may well be in conflict with those of other subsystems. The analysis and design of such control systems falls under the broader framework of complex and distributed systems. Other names include multi-agent control, cooperative control, networked control, as well as team theory or swarming. Regardless of the nomenclature, the central challenge remains the same: to derive desirable collective behaviors through the design of individual agent control algorithms. The potential benefits of distributed decision architectures include the opportunity for real-time adaptation or self-organization and robustness to dynamic uncertainties such as individual component failures, non-stationary environments, and adversarial elements. These benefits come with significant challenges, such as the complexity associated with a potentially large number of interacting agents and the analytical difficulties of dealing with overlapping and partial information. This dissertation focuses on dealing with the distributed nature of decision making and information processing through a non-cooperative game-theoretic formulation. The interactions of a distributed multi-agent control system are modeled as a noncooperative game among agents, with the desired collective behavior being expressed as a Nash equilibrium. In large-scale multi-agent systems, agents are inherently limited in both their observational and computational capabilities. Therefore, this dissertation focuses on learning algorithms that can accommodate these limitations while still guaranteeing convergence to a Nash equilibrium.
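To make the learning-in-games idea concrete, the following is a minimal sketch (not taken from the dissertation) of log-linear learning, one standard payoff-based learning rule of the kind studied in this literature. The game, payoffs, and parameter values below are illustrative assumptions: two agents play a coordination game, a potential game whose Nash equilibria are the two coordinated action profiles, and each revising agent uses only the payoff of its own candidate actions against the other's current action.

```python
import math
import random

# Hypothetical 2-player coordination game (a potential game):
# each agent picks action 0 or 1; payoff is 1 if the actions match,
# 0 otherwise. The pure Nash equilibria are (0, 0) and (1, 1).
def payoff(my_action, other_action):
    return 1.0 if my_action == other_action else 0.0

def log_linear_learning(steps=5000, temperature=0.1, seed=0):
    """Log-linear learning: at each step one randomly chosen agent
    revises its action with a Boltzmann (softmax) response to the
    other agent's current action. As the temperature approaches 0,
    play concentrates on Nash equilibria of the potential game."""
    rng = random.Random(seed)
    actions = [rng.randint(0, 1), rng.randint(0, 1)]
    for _ in range(steps):
        i = rng.randint(0, 1)          # pick one agent to revise
        other = actions[1 - i]
        # Boltzmann weights over the revising agent's two actions,
        # using only locally observable payoff information
        weights = [math.exp(payoff(a, other) / temperature) for a in (0, 1)]
        actions[i] = 0 if rng.random() < weights[0] / sum(weights) else 1
    return actions

final = log_linear_learning()
```

With a low temperature, the agents almost surely finish coordinated on one of the two Nash equilibria; this illustrates, in miniature, how simple local update rules with limited information can drive collective behavior toward an equilibrium.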
- Operations Research