Accession Number:

ADA234425

Title:

Extensions of a Theory of Networks and Learning: Outliers and Negative Examples

Descriptive Note:

Memorandum rept.

Corporate Author:

MASSACHUSETTS INST OF TECH CAMBRIDGE ARTIFICIAL INTELLIGENCE LAB

Report Date:

1990-07-01

Pagination or Media Count:

27

Abstract:

Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multidimensional function. From this point of view, this form of learning is closely related to regularization theory. The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are not only equivalent to generalized splines, but are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. These two extensions are also interesting from the point of view of the approximation of multivariate functions. The first extension corresponds to dealing with outliers among the sparse data. The second corresponds to exploiting information about points or regions in the range of the function that are forbidden.
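As a rough sketch of the formulation the abstract summarizes (following the general regularization framework of Poggio and Girosi 1989; the symbols and the particular smoothness term below are illustrative assumptions, not taken from this report), the learned function f minimizes a functional of the form

    H[f] = \sum_{i=1}^{N} \bigl( y_i - f(x_i) \bigr)^2 + \lambda \, \| P f \|^2

where the (x_i, y_i) are the N examples, P is a stabilizing (smoothness) operator, and \lambda > 0 trades data fit against smoothness. For a radially symmetric stabilizer, the minimizer admits an expansion over basis functions centered on the data (up to a possible low-degree polynomial term),

    f(x) = \sum_{i=1}^{N} c_i \, G\bigl( \| x - x_i \| \bigr)

which is the three-layer regularization network (Hyper Basis Function) form; with a Gaussian G it recovers classical Radial Basis Function interpolation. On this reading, the two extensions are natural modifications of H[f]: unreliable examples can be handled by replacing the squared error with a robust cost, and negative examples by adding a penalty that grows when f enters forbidden regions of its range.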

Subject Categories:

  • Statistics and Probability
  • Cybernetics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE