Generalizations of the Alternating Direction Method of Multipliers for Large-Scale and Distributed Optimization
RICE UNIV HOUSTON TX DEPT OF COMPUTATIONAL AND APPLIED MATHEMATICS
The alternating direction method of multipliers (ADMM) has been revived in recent years due to its effectiveness at solving many large-scale and distributed optimization problems, particularly those arising in compressive sensing, signal and image processing, machine learning, and statistics. This thesis makes important generalizations to ADMM and extends its convergence theory. We propose a generalized ADMM framework that allows more options for solving the subproblems, either exactly or approximately. This generalization is of great practical importance because it brings more flexibility in solving the subproblems efficiently, thereby making the entire algorithm run faster and scale better on large problems. We establish its global convergence and further show its linear convergence under a variety of scenarios that cover a wide range of applications. The derived rate of convergence also provides theoretical guidance for optimizing the parameters of the algorithm. In addition, we introduce a simple technique to improve an existing convergence rate from O(1/k) to o(1/k).
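The generalized framework itself is developed in the thesis; as a point of reference, a minimal sketch of the classical two-block ADMM with exact subproblem solves, applied to the lasso problem min ½‖Ax − b‖² + λ‖z‖₁ subject to x = z, might look as follows. The parameter names (`lam`, `rho`, `iters`) and the helper `soft_threshold` are illustrative choices, not notation from the thesis.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (the exact z-subproblem for the lasso).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Classical ADMM for min 0.5*||Ax - b||^2 + lam*||z||_1 s.t. x = z."""
    m, n = A.shape
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)  # scaled dual variable
    # Cache the Cholesky factor of (A^T A + rho I): the x-subproblem is
    # solved exactly here, which is the case the generalized framework relaxes.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    Atb = A.T @ b
    for _ in range(iters):
        # x-update: minimize 0.5*||Ax - b||^2 + (rho/2)*||x - z + u||^2
        rhs = Atb + rho * (z - u)
        x = np.linalg.solve(L.T, np.linalg.solve(L, rhs))
        # z-update: proximal step on the l1 term
        z = soft_threshold(x + u, lam / rho)
        # dual update on the consensus constraint x = z
        u = u + x - z
    return z
```

Replacing the exact Cholesky-based x-update with a single gradient or proximal-gradient step is the kind of inexact subproblem solve that the generalized framework in the thesis is designed to accommodate.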
- Operations Research