The capacity of model neural networks to generalize from a partial set of information is an area of much current interest. In some sense it addresses the very issue of how accurately current models capture higher cognitive processes, for the ability to categorize input and to make generalizations from a limited set of information is one of the hallmarks of these processes. In this context, the author has been investigating the Backward Propagation of Error Model due to Rumelhart et al. The model is a deterministic approach that seeks to teach a desired input-output mapping by repeated presentation of the desired mapping to the system, correcting the system's connections based on the error in its output. We have begun to address the generalization capability of this system. Specifically, we have studied the extent to which the set of connections that evolves in learning a partial set of patterns constitutes a general solution to a given mapping. That is, if we teach the system several examples of a mapping, will the solutions it discovers for these patterns generalize to correctly identify input states it has not seen? The results of some simulations undertaken to address this question are discussed, and some modifications to the model that we have proposed are indicated.
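The training scheme described above, namely repeated presentation of a partial pattern set with connection corrections proportional to the output error, can be sketched in miniature. The following is an illustrative toy example, not the author's simulation: the task (a 3-bit majority function), the network size, and the learning rate are all assumptions chosen for brevity. A small sigmoid network is trained on six of the eight possible input patterns and then queried on the two patterns it has never seen, probing exactly the generalization question posed in the text.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical task: 3-bit majority (output 1 iff two or more inputs are 1).
def majority(bits):
    return 1.0 if sum(bits) >= 2 else 0.0

all_patterns = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
train = all_patterns[:6]   # the partial set presented during learning
unseen = all_patterns[6:]  # withheld, to probe generalization

n_in, n_hid = 3, 4
# w[i][j]: input i -> hidden j; v[j]: hidden j -> output; bh, bo: biases
w = [[random.uniform(-0.5, 0.5) for _ in range(n_hid)] for _ in range(n_in)]
bh = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
v = [random.uniform(-0.5, 0.5) for _ in range(n_hid)]
bo = random.uniform(-0.5, 0.5)
eta = 0.5  # learning rate (assumed value)

def forward(x):
    h = [sigmoid(sum(x[i] * w[i][j] for i in range(n_in)) + bh[j])
         for j in range(n_hid)]
    o = sigmoid(sum(h[j] * v[j] for j in range(n_hid)) + bo)
    return h, o

# Repeated presentation of the partial pattern set, correcting
# connections in proportion to the output error (gradient descent).
for epoch in range(5000):
    for x in train:
        t = majority(x)
        h, o = forward(x)
        # error signals; sigmoid derivative is y * (1 - y)
        do = (t - o) * o * (1 - o)
        dh = [do * v[j] * h[j] * (1 - h[j]) for j in range(n_hid)]
        for j in range(n_hid):
            v[j] += eta * do * h[j]
            bh[j] += eta * dh[j]
            for i in range(n_in):
                w[i][j] += eta * dh[j] * x[i]
        bo += eta * do

# Do the learned connections generalize to patterns never presented?
for x in unseen:
    _, o = forward(x)
    print(x, "network:", round(o), "target:", majority(x))
```

Whether the held-out patterns are classified correctly depends on the particular partial set and the solution the network happens to find, which is precisely the question the simulations in the paper address.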