Equations of Learning and Capacity of Layered Neural Networks
Final report, Oct 1987-May 1989
NAVAL WEAPONS CENTER CHINA LAKE CA
Learning in a layered neural network (LNN) amounts to finding interconnection weights that produce the desired input-output pairs. The input-output pairs and the architecture of the net define equations that the weights must satisfy; these Equations of Learning are derived in this paper. By applying well-known results from dimension theory to these equations, one can derive an upper bound on the number of distinct input-output pairs that a layered network can learn. Two simple examples illustrate this result and its limitations. Analysis of the first example shows that the saturation of the sigmoid function is a desirable feature. The concepts of architecture and capacity of an LNN are defined, and a few results on architectures with maximal capacity are included. In particular, reducing the dimension of the output patterns increases the capacity of the LNN.
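The dimension-counting argument sketched in the abstract can be illustrated with a short calculation. The sketch below is a hedged reconstruction, not the report's own derivation: it assumes each input-output pair imposes one constraint per output unit on the weight vector, so the number of memorizable pairs is at most (number of weights) / (output dimension). The layer sizes and helper names are illustrative assumptions.

```python
# Illustrative sketch (not taken from the report): a dimension-counting
# upper bound on the capacity of a layered net. Each input-output pair
# contributes as many scalar equations as there are output units; the
# weights form the unknowns, so pairs <= weights / output_dim.

def weight_count(layers):
    """Total interconnection weights (including biases) for given layer sizes."""
    return sum((n_in + 1) * n_out for n_in, n_out in zip(layers, layers[1:]))

def capacity_upper_bound(layers):
    """Dimension-counting bound on the number of learnable input-output pairs."""
    return weight_count(layers) // layers[-1]

# A 4-8-2 net has (4+1)*8 + (8+1)*2 = 58 weights and 2 outputs,
# so the bound is 58 // 2 = 29 pairs.
print(capacity_upper_bound([4, 8, 2]))   # -> 29
# Shrinking the output layer raises the bound, consistent with the
# abstract's observation about reducing output dimension:
print(capacity_upper_bound([4, 8, 1]))   # -> 49
```

Note how the second call illustrates the abstract's final claim: removing an output unit drops some weights but removes constraints faster, so the bound increases.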
- Numerical Mathematics
- Computer Hardware