Accession Number:

ADA224517

Title:

Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering

Descriptive Note:

Memorandum report

Corporate Author:

MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB

Personal Author(s):

Poggio, Tomaso; Girosi, Federico

Report Date:

1990-04-01

Pagination or Media Count:

21

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function. From this point of view, this form of learning is closely related to regularization theory. The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are not only equivalent to generalized splines, but are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients c_alpha: moving centers and adjustable norm weights. Moving the centers is equivalent to task-dependent clustering, and changing the norm weights is equivalent to task-dependent dimensionality reduction.
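
For concreteness, the extended network the abstract describes can be sketched as the expansion below. This is a reconstruction from the abstract's description in the Poggio-Girosi regularization-network framework, not text quoted from the report: G is the chosen (e.g., radial) basis function, the t_alpha are the movable centers, and W is the norm-weight matrix, all assumed to be adjusted during learning along with the coefficients c_alpha. In LaTeX notation:

    % HyperBF expansion sketched from the abstract's description:
    % coefficients c_alpha, movable centers t_alpha, norm weights W.
    f^*(\mathbf{x}) = \sum_{\alpha=1}^{n} c_\alpha \,
        G\bigl(\lVert \mathbf{x} - \mathbf{t}_\alpha \rVert_W^2\bigr),
    \qquad
    \lVert \mathbf{x} - \mathbf{t}_\alpha \rVert_W^2 =
        (\mathbf{x} - \mathbf{t}_\alpha)^\top W^\top W \,(\mathbf{x} - \mathbf{t}_\alpha).

Under this reading, moving the centers t_alpha toward task-relevant regions of the input space is the task-dependent clustering mentioned above, while an anisotropic or low-rank W shrinks the contribution of irrelevant input dimensions, i.e., task-dependent dimensionality reduction.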

Subject Categories:

  • Biology
  • Statistics and Probability
  • Cybernetics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE