Low Sensitivity Interpolation Using Feed-Forward Neural Networks With One Hidden Layer
Final rept. Oct 1990-Oct 1991
NAVAL WEAPONS CENTER CHINA LAKE CA
It is possible to assign the weights of a feed-forward neural network with one hidden layer directly so that the network interpolates a given set of input-output points exactly, and in such a way that the sensitivity to noise at the interpolation points is as small as desired. This is demonstrated with a constructive proof. The weight assignment for exact interpolation requires the inversion of a nonsingular matrix. If the exact interpolation requirement is relaxed, that inversion can be avoided: weights can be determined so that the network interpolates the points approximately, to any desired degree of accuracy and with a sensitivity as small as desired. Both the interpolation accuracy and the noise sensitivity are controlled by the size of the weights in the first layer. Estimates of how large these weights must be to achieve a desired interpolation accuracy and noise sensitivity are derived. An algorithm for approximate interpolation with low sensitivity is presented and illustrated with simple examples.

Keywords: weight assignment, exact/approximate interpolation, low sensitivity, total derivative.
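The two constructions described above can be sketched in a few lines. This is a minimal illustration, not the report's actual derivation: the sigmoid activation, the placement of the hidden-unit centers, and the slope parameter are assumptions. For exact interpolation, one hidden unit is centered at each data point and the output weights solve a linear system; for approximate interpolation, very steep sigmoids centered between successive data points act as step functions, so the output weights are simply successive differences of the targets and no matrix inversion is needed.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def exact_weights(x, y, slope):
    """Exact interpolation: one hidden unit per data point, centered at
    that point. The matrix A is nonsingular for distinct points (it is
    nearly lower-triangular when the slope is steep), so the output
    weights are obtained by solving A v = y."""
    A = sigmoid(slope * (x[:, None] - x[None, :]))
    return np.linalg.solve(A, y)

def approx_weights(x, y):
    """Approximate interpolation without matrix inversion: steep sigmoids
    centered between successive data points behave like steps, so the
    output weights are the successive differences of the targets (the
    first unit, centered left of all the data, supplies the offset)."""
    centers = np.concatenate([[x[0] - 1.0], (x[:-1] + x[1:]) / 2.0])
    v = np.concatenate([[y[0]], np.diff(y)])
    return centers, v

def net(xq, centers, v, slope):
    """Evaluate the one-hidden-layer network at the query points xq."""
    return sigmoid(slope * (np.asarray(xq)[:, None] - centers[None, :])) @ v

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, -1.0, 0.5, 2.0])

v_exact = exact_weights(x, y, slope=10.0)
print(net(x, x, v_exact, slope=10.0))    # matches y at the data points

c, v_approx = approx_weights(x, y)
print(net(x, c, v_approx, slope=50.0))   # matches y to high accuracy
```

The slope parameter plays the role of the first-layer weight size in the abstract: increasing it makes the approximate scheme more accurate, consistent with the claim that accuracy and sensitivity are both controlled by the size of the first-layer weights.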