University of California - Santa Barbara, Santa Barbara, United States
Major Goals

1. Apply the recurrent neural network configuration, long short-term memory (LSTM), as a universal learning core that can predict connectivity maps across independent neuronal networks, classify neurological conditions, and transmit synthetic memory traces.

Recordings from multiple electrodes in intact animals are commonly used to train a generalized linear model (GLM), and the model is validated by how effectively the GLM filters predict a behavior. We have developed a system that allows us to directly test the predictions derived from a trained GLM. We built a Neural Circuit Probe that is positioned above a multi-electrode array (MEA) and is capable of identifying the specific neuron from which an MEA signal arises and delivering a chemical reagent, such as TTX, to that neuron. In this manner, if the GLM predicts an inhibitory connection to another neuron, we can directly validate the prediction. This level of neuronal connectivity prediction has not been previously accomplished. However, the GLM has inherent limitations, mainly related to the assumptions required by its computation. Among the most limiting is the necessity to fix in advance the amount of spike history used for the predictions. The LSTM architecture represents a promising and incompletely explored approach to spike prediction. By training a deep neural network on many neural traces, the machine will learn to recognize patterns, as has been demonstrated for other complex problems, such as the board game Go, that entail numerous instantiations inaccessible to simple rule-based analyses or to storage of all possible configurations. LSTM is particularly well-suited to classifying, processing, and predicting time series when, in contrast to the GLM, the time lags between events are very long and of unknown size.
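To make the contrast with the fixed-history GLM concrete, the following is a minimal NumPy sketch of how an LSTM can carry spike history forward through a learned cell state rather than a preset history window. This is an illustrative toy, not our actual model: the 8-unit recording, the hidden size, and the random weights are all hypothetical assumptions, and no training loop is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class LSTMCell:
    """Minimal NumPy LSTM cell (forward pass only, untrained)."""
    def __init__(self, n_in, n_hidden):
        # Stacked weights for the input, forget, candidate, and output gates.
        self.W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
        self.b = np.zeros(4 * n_hidden)

    def step(self, x, h, c):
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        g = np.tanh(g)
        c = f * c + i * g      # cell state: history retained for as long as
                               # the forget gate keeps it, no fixed window
        h = o * np.tanh(c)     # hidden state read out at each time bin
        return h, c

# Hypothetical setup: 8 simultaneously recorded units, binary spike bins.
n_units, n_hidden, T = 8, 16, 200
cell = LSTMCell(n_units, n_hidden)
W_out = rng.standard_normal((n_units, n_hidden)) * 0.1

spikes = (rng.random((T, n_units)) < 0.1).astype(float)  # surrogate spike trains
h, c = np.zeros(n_hidden), np.zeros(n_hidden)
rates = []
for t in range(T):
    h, c = cell.step(spikes[t], h, c)
    rates.append(sigmoid(W_out @ h))  # predicted spike probability per unit
rates = np.array(rates)               # shape (T, n_units)
```

A GLM, by contrast, would convolve each spike train with filters of a preselected length; here the gates decide at each bin how much of the past to keep, which is why the approach can handle long, unknown lags between events.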