On Convergence of the Nelder-Mead Simplex Algorithm for Unconstrained Stochastic Optimization
AIR FORCE INST OF TECH WRIGHT-PATTERSON AFB OH
The Nelder-Mead simplex method is a direct search algorithm that has found widespread use in the optimization of nonlinear functions. Originally designed for deterministic optimization, the method is robust with respect to small perturbations in the function values, and it has therefore been used for optimizing stochastic functions as well. However, if the random perturbations in the function values are large enough, the method may terminate before reaching the optimizer of the expected function. We prove convergence of the simplex to a point with probability 1 for constant functions with additive noise in one and two dimensions, and provide empirical evidence for the same result in higher dimensions. This result implies that as the amount of noise increases, differences between the expected function values at the vertices of the simplex become obscured and the probability of terminating early increases. We also demonstrate, empirically and analytically, the probability of early termination on an unbounded univariate linear function with additive noise. We propose a new modification of the Nelder-Mead simplex method for use in stochastic optimization. The new method reduces to the original method when the noise level is negligible or nonexistent, and is therefore as efficient as the Nelder-Mead method when the noise level is low. In Monte Carlo experiments, our proposed method continued to reduce the expected function value for as long as the simulations were run, whereas the original method and the previously best known modified simplex methods terminated early.
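The early-termination phenomenon described above can be reproduced with an off-the-shelf Nelder-Mead implementation. The sketch below (an illustration of the failure mode, not the report's proposed modification) minimizes a noisy quadratic whose expected function has its optimizer at the origin; the noise level `sigma` and the quadratic test function are assumptions chosen for demonstration:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def noisy_quadratic(x, sigma):
    # Expected function is ||x||^2, minimized at the origin;
    # additive Gaussian noise corrupts each evaluation.
    return float(np.sum(x**2) + sigma * rng.normal())

x0 = np.full(2, 5.0)

# Noiseless run: the simplex contracts onto the optimizer.
clean = minimize(lambda x: noisy_quadratic(x, 0.0), x0, method="Nelder-Mead")

# Noisy run: the termination test compares corrupted vertex values,
# so the simplex can collapse and stop far from the origin.
noisy = minimize(lambda x: noisy_quadratic(x, 5.0), x0, method="Nelder-Mead")

print("noiseless solution:", clean.x)
print("noisy solution:    ", noisy.x)
```

When the noise standard deviation is comparable to the differences between expected function values at the simplex vertices, the ordering of vertices becomes essentially random and the default convergence tolerances are satisfied prematurely, which is the behavior the report analyzes.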
- Operations Research