Evolving Neural Networks for Nonlinear Control.
Final report, 15 Mar 1993 to 31 Aug 1996
George Mason University, Fairfax, VA
An approach to creating Amorphous Recurrent Neural Networks (ARNNs) using Genetic Algorithms (GAs), called 2pGA, has been developed and shown to be effective in evolving neural networks for the control and stabilization of both linear and nonlinear plants, the optimal control of a nonlinear regulator problem, the XOR problem, and an amplitude modulation (AM) detector. The approach is a two-phase GA: the first phase uses a set of Lindenmayer system (L-system) production rules to evolve the network architectures, and the second phase uses genetic hill-climbing to tune the connection weights. The resulting amorphous, non-layered recurrent networks are real-valued, in contrast to the binary-valued networks generated by the original GANNET program. Integral absolute error (IAE) was the fitness function used in these experiments. A striking indirect result of this research is how few neurons are required to effect compensation and stabilization; typical networks contain 4 to 15 neurons.

The inclusion of a neuron insertion/deletion operator in both the 2pGA and GANNET2 methods allows the size of the network to be evolved. This capability has been used to develop an empirical relationship between problem complexity and the required network complexity. Problem complexity is measured by the number of symbols required to differentiate among binary patterns in a pattern recognition task; network complexity is measured by the number of neurons. While not yet definitive, empirical data from ARNNs evolved by GANNET2 suggest a logarithmic relationship between the complexity of a regular expression and the size of the recurrent neural network that recognizes it. Additional experiments are being performed to extend the range of evolved data and improve confidence in this conclusion.
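The two-phase procedure described in the abstract can be sketched in miniature as follows. This is an illustrative reconstruction, not the report's actual code: the particular L-system rules, the network encoding (one external input weight plus full recurrent connectivity per neuron, with the first neuron's output taken as the control signal), the simple first-order plant y' = -y + u regulated to zero, and all function names are assumptions made for the example.

```python
import math
import random

def expand(axiom, rules, iterations):
    """Phase 1 (architecture): rewrite an L-system string by applying all
    production rules in parallel for a fixed number of iterations."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(c, c) for c in s)
    return s

def network_size(lsystem_string):
    """Interpret the expanded string: each 'N' symbol contributes one neuron.
    The cap reflects the report's observation that 4-15 neurons suffice."""
    return min(lsystem_string.count("N"), 15)

def step_network(weights, state, x):
    """One update of a fully connected recurrent net with tanh activation.
    weights[i][0..n-1] are recurrent weights; weights[i][n] is the input weight."""
    n = len(state)
    return [math.tanh(weights[i][n] * x +
                      sum(weights[i][j] * state[j] for j in range(n)))
            for i in range(n)]

def iae_fitness(weights, n, steps=50, dt=0.1):
    """Integral absolute error while the net regulates the toy plant
    y' = -y + u from y(0) = 1 toward zero (Euler integration)."""
    y, state, iae = 1.0, [0.0] * n, 0.0
    for _ in range(steps):
        state = step_network(weights, state, y)
        u = state[0]              # first neuron's output is the control signal
        y += dt * (-y + u)
        iae += abs(y) * dt
    return iae

def hill_climb(weights, n, generations=200, sigma=0.2, seed=1):
    """Phase 2 (weight tuning): genetic hill-climbing -- perturb one weight
    at a time and keep the change only if the IAE fitness improves."""
    rng = random.Random(seed)
    best = iae_fitness(weights, n)
    for _ in range(generations):
        i, j = rng.randrange(n), rng.randrange(n + 1)
        old = weights[i][j]
        weights[i][j] += rng.gauss(0.0, sigma)
        f = iae_fitness(weights, n)
        if f < best:
            best = f              # keep the improving mutation
        else:
            weights[i][j] = old   # revert
    return best
```

A typical run would expand an axiom such as `"A"` under the rule `{"A": "NA"}` to fix the neuron count, initialize the weight matrix with small random values, and then call `hill_climb`, whose accept-if-better rule guarantees the IAE never increases across generations.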