Global Optimization via SPSA
Abstract:
A desirable property of an iterative optimization algorithm is that it reach the global optimum rather than get stranded at a local optimum. In this paper, the authors examine the theoretical and numerical global convergence properties of a gradient-free stochastic approximation algorithm, simultaneous perturbation stochastic approximation (SPSA), which has performed well in complex optimization problems. They establish two theorems on the global convergence of SPSA. The first provides conditions under which SPSA converges in probability to a global optimum using the well-known method of injected noise; the injected noise prevents the algorithm from converging prematurely to a local optimum. In the second theorem they show that, under different conditions, basic SPSA without injected noise can also achieve convergence in probability to a global optimum. This occurs because noise is effectively and automatically introduced into the algorithm by the special form of the SPSA gradient approximation. Global convergence without injected noise can have important benefits in the setup (tuning) and performance (rate of convergence) of the algorithm. The discussion is supported by numerical studies in which SPSA compares favorably with simulated annealing and genetic algorithms.
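For readers unfamiliar with the recursion the abstract refers to, the following is a minimal Python sketch of basic SPSA together with the injected-noise variant of the first theorem. The gain-sequence constants (a, c, A, alpha, gamma) and the noise-decay schedule are illustrative defaults, not the paper's tuned values, and the names spsa_minimize, inject_noise, and q are introduced here for illustration only.

```python
import numpy as np

def spsa_minimize(loss, theta0, num_iters=1000,
                  a=0.1, c=0.1, A=100, alpha=0.602, gamma=0.101,
                  inject_noise=False, q=0.5, seed=0):
    """Minimize `loss` with basic SPSA; optionally add injected noise."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float).copy()
    for k in range(num_iters):
        ak = a / (k + 1 + A) ** alpha   # decaying step-size gain a_k
        ck = c / (k + 1) ** gamma       # decaying perturbation gain c_k
        # Simultaneous perturbation: one random Bernoulli +/-1 direction
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        # Two loss evaluations yield the full gradient estimate; since
        # each delta_i is +/-1, dividing by delta equals multiplying by it
        g_hat = (loss(theta + ck * delta)
                 - loss(theta - ck * delta)) / (2.0 * ck * delta)
        theta = theta - ak * g_hat
        if inject_noise:
            # Injected-noise variant: extra Gaussian term with a slowly
            # decaying scale (schedule simplified here for illustration)
            theta = theta + (q / np.sqrt(k + 1)) * rng.standard_normal(theta.shape)
    return theta
```

As an illustrative call, spsa_minimize(lambda th: np.sum(th**2 - 10*np.cos(2*np.pi*th) + 10), np.full(2, 3.0), inject_noise=True) would run the injected-noise variant on a two-dimensional Rastrigin-type multimodal surface, where basic local methods tend to stall at one of the many local minima.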