Optimisation Algorithms for Highly Parallel Computer Architectures
Final draft report.
HATFIELD POLYTECHNIC (UNITED KINGDOM) NUMERICAL OPTIMISATION CENTRE
This paper considers the design of optimization algorithms that run efficiently on highly parallel computer architectures. Most efficient optimization algorithms require the calculation of first, and possibly second, derivatives of the objective function and, where present, of the constraints. This task normally dominates all the other tasks involved in solving a large-scale problem. The other main task is usually the solution of a set of linear equations. In this paper the authors describe their experience of carrying out these two tasks on parallel computer architectures. A sparse forward implementation of doublet and triplet automatic differentiation is described that enables both the gradient and the Hessian of objective functions to be computed accurately and cheaply. It is shown that when the function is partially separable this can be performed very effectively on a parallel machine. The effect of solving sets of linear equations on a parallel system is also described, and the two tasks are then combined into effective algorithms for both unconstrained and constrained optimization.
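To illustrate the idea of triplet forward-mode automatic differentiation mentioned in the abstract, the sketch below propagates a (value, gradient, Hessian) triple through arithmetic operations. It is a minimal dense illustration, not the report's sparse implementation; the class name `Triplet` and its interface are assumptions introduced here for clarity.

```python
import numpy as np

class Triplet:
    """Forward-mode AD object carrying (value, gradient, Hessian).

    A dense sketch of triplet differentiation: each arithmetic
    operation propagates all three quantities by the chain rule.
    (The report describes a *sparse* implementation; this example
    is dense for simplicity.)
    """

    def __init__(self, val, grad, hess):
        self.val = float(val)
        self.grad = np.asarray(grad, dtype=float)
        self.hess = np.asarray(hess, dtype=float)

    @staticmethod
    def variable(val, i, n):
        """Seed the i-th of n independent variables: unit gradient, zero Hessian."""
        g = np.zeros(n)
        g[i] = 1.0
        return Triplet(val, g, np.zeros((n, n)))

    def _lift(self, c):
        """Promote a plain constant to a Triplet with zero derivatives."""
        if isinstance(c, Triplet):
            return c
        n = self.grad.size
        return Triplet(float(c), np.zeros(n), np.zeros((n, n)))

    def __add__(self, other):
        other = self._lift(other)
        return Triplet(self.val + other.val,
                       self.grad + other.grad,
                       self.hess + other.hess)

    def __mul__(self, other):
        other = self._lift(other)
        # Product rule for the Hessian:
        # H(uv) = u H(v) + v H(u) + ∇u ∇vᵀ + ∇v ∇uᵀ
        return Triplet(self.val * other.val,
                       self.val * other.grad + other.val * self.grad,
                       self.val * other.hess + other.val * self.hess
                       + np.outer(self.grad, other.grad)
                       + np.outer(other.grad, self.grad))

# Example: f(x, y) = x*y + x*x evaluated at (3, 2).
# Analytically: grad = (y + 2x, x) = (8, 3), Hessian = [[2, 1], [1, 0]].
x = Triplet.variable(3.0, 0, 2)
y = Triplet.variable(2.0, 1, 2)
f = x * y + x * x
print(f.val)    # 15.0
print(f.grad)   # [8. 3.]
print(f.hess)   # [[2. 1.] [1. 0.]]
```

For a partially separable function (a sum of element functions each depending on few variables), each element's triplet can be evaluated independently, which is what makes the approach attractive on a parallel machine.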
- Computer Hardware