A Self-Organizing Neural Network Architecture for Auditory and Speech Perception with Applications to Acoustic and Other Temporal Prediction Problems
Annual report, 1 May 1992 - 30 Apr 1993
Boston University, Boston, MA
This project is developing autonomous neural network models for the real-time perception and production of acoustic and speech signals. A new acoustic filter was developed to show how coarticulated, context-sensitive auditory signals can be separated and represented in a more context-independent fashion, thereby easing the recognition problem. Parallel processing streams sensitive to sustained and transient signals are used, as in vision. A model of working memory was developed that automatically compensates for variable acoustic or speech rates. The model shows how rate-invariant short-term storage of variable-rate acoustic streams can explain data on categorical boundary shifts that occur when the distributions of silent intervals or of vowel durations are altered. New learning and categorization networks were shown to discriminate vowels with accuracy comparable to alternative methods but with much higher compression. Models of skilled motor control were developed to clarify how speech and arm movements can be planned and flexibly modified by task requirements. Studies of neural oscillators suggest how rhythmic behaviors relevant to perception and action, notably synchronous oscillations, may be generated and controlled.
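The synchronous oscillations mentioned above can be illustrated with a standard textbook model of coupled phase oscillators (the Kuramoto model). This is a generic sketch, not the report's own oscillator equations: the oscillator count, coupling strength, and frequency distribution below are arbitrary choices for demonstration. It shows how sufficiently strong coupling pulls a population of oscillators with different natural frequencies into near-synchrony, as measured by the order parameter r (r = 1 means perfect phase alignment).

```python
import math
import random

def kuramoto_step(phases, omegas, coupling, dt):
    """One Euler step of the Kuramoto model: each oscillator's phase
    advances at its natural frequency plus a mean-field coupling term."""
    n = len(phases)
    new_phases = []
    for i in range(n):
        interaction = sum(math.sin(pj - phases[i]) for pj in phases) / n
        new_phases.append(phases[i] + dt * (omegas[i] + coupling * interaction))
    return new_phases

def order_parameter(phases):
    """Magnitude of the mean phase vector: 0 = incoherent, 1 = synchronous."""
    n = len(phases)
    c = sum(math.cos(p) for p in phases) / n
    s = sum(math.sin(p) for p in phases) / n
    return math.hypot(c, s)

random.seed(0)
n = 50  # population size (arbitrary for this demo)
phases = [random.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
omegas = [random.gauss(1.0, 0.1) for _ in range(n)]  # spread of natural frequencies

r_initial = order_parameter(phases)
for _ in range(2000):  # integrate 20 time units at dt = 0.01
    phases = kuramoto_step(phases, omegas, coupling=2.0, dt=0.01)
r_final = order_parameter(phases)

print(f"order parameter: before = {r_initial:.3f}, after = {r_final:.3f}")
```

With coupling well above the synchronization threshold, the order parameter rises from a low initial value toward 1, the kind of controlled collective rhythm the abstract alludes to.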