Acceleration of Convergence of a Vector Sequence by Reduced Rank Extrapolation.

Technical Report | Accession Number: ADA108765

Abstract:

A new family of methods, called reduced rank extrapolation, is developed for accelerating the convergence of the sequence of vectors generated during the iterative solution of a system of m linear algebraic equations in m unknowns. Large systems of this kind arise, for example, in the finite difference or finite element solution of partial differential equations. Reduced rank extrapolation is derived from full rank extrapolation, which is a straightforward generalization to vector spaces of the well-known Aitken delta-squared (Shanks e1) scalar extrapolation. It is applicable when the iteration has reached a point where only a few eigenvalues, say r of them, dominate the error, and hence only r difference vectors can be linearly independent to a specified tolerance. The rank r is determined during the solution of an auxiliary problem of best approximation in vector space, i.e., best in the sense of minimizing some specified vector norm. The least squares theory, corresponding to the Euclidean norm, is developed in detail herein. Application to Laplace's equation in a square and in a cube reduced computation time by a factor of 2.4 to 4.7, and reduced iteration count by a factor of 3.6 to 5.4.
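The least-squares (Euclidean-norm) variant summarized above can be sketched in a few lines. This is a minimal NumPy illustration, not the report's own implementation: given iterates x_0, ..., x_(k+1), form first differences u_i = x_(i+1) - x_i and second differences w_i = u_(i+1) - u_i, solve the least squares problem min ||u_0 + W xi||_2 for xi, and extrapolate s = x_0 + sum_i xi_i u_i. With a single scalar component this reduces to the Aitken delta-squared formula. The iteration matrix, problem size, and test sequence below are assumptions chosen for illustration only.

```python
import numpy as np

def rre(X):
    """Reduced rank extrapolation (least squares form).

    X : (m, k+2) array whose columns are successive iterates x_0 .. x_(k+1).
    Returns the extrapolated limit estimate s.
    """
    U = np.diff(X, axis=1)            # first differences u_0 .. u_k
    W = np.diff(U, axis=1)            # second differences w_0 .. w_(k-1)
    # Solve min || u_0 + W xi ||_2 in the least squares sense.
    xi, *_ = np.linalg.lstsq(W, -U[:, 0], rcond=None)
    return X[:, 0] + U[:, :-1] @ xi   # s = x_0 + sum_i xi_i u_i

# Illustrative test problem (an assumption, not from the report):
# a linear fixed-point iteration x_{j+1} = A x_j + b with fixed point
# s* = (I - A)^{-1} b. For such an iteration, extrapolating from
# m + 2 iterates recovers the fixed point (in exact arithmetic).
rng = np.random.default_rng(0)
m = 4
A = 0.5 * rng.standard_normal((m, m)) / np.sqrt(m)
b = rng.standard_normal(m)
s_true = np.linalg.solve(np.eye(m) - A, b)

X = np.zeros((m, m + 2))
for j in range(m + 1):
    X[:, j + 1] = A @ X[:, j] + b

s = rre(X)
```

In the scalar case, three terms of a geometric sequence such as 2, 1.5, 1.25 (limit 1) are enough: `rre` then reproduces the Aitken delta-squared value exactly.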

Security Markings


Distribution:
Approved For Public Release


Collection: TR