Accession Number:

ADA277526

Title:

A Self-Organizing Neural Network Architecture for Auditory and Speech Perception with Applications to Acoustic and Other Temporal Prediction Problems

Descriptive Note:

Annual rept. 1 May 1992-30 Apr 1993

Corporate Author:

BOSTON UNIV MA

Personal Author(s):

Report Date:

1994-01-01

Pagination or Media Count:

13

Abstract:

This project is developing autonomous neural network models for the real-time perception and production of acoustic and speech signals. A new acoustic filter was developed to show how coarticulated, context-sensitive auditory signals can be separated and represented in a more context-independent fashion, thereby easing the recognition problem. Parallel processing streams sensitive to sustained and transient signals are used, as in vision. A model of working memory was developed that automatically compensates for variable acoustic or speech rates. The model shows how rate-invariant short-term storage of variable-rate acoustic streams can explain data on categorical boundary shifts when the distributions of silent intervals or of vowel durations are altered. New learning and categorization nets were shown to discriminate vowels with accuracy comparable to alternative methods but with much higher compression. Models of skilled motor control were developed to clarify how speech and arm movements can be planned and flexibly modified by task requirements. Studies of neural oscillators suggest how rhythmic behaviors relevant to perception and action, notably synchronous oscillations, may be generated and controlled.
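The synchronous-oscillation result mentioned above can be illustrated with a standard coupled-oscillator simulation. The sketch below is not the report's neural oscillator model; it is a minimal Kuramoto-style example in Python (all function names and parameter values are assumptions chosen for illustration) showing how mutual coupling pulls a population of units with different natural frequencies toward a common, synchronized rhythm.

    # Illustrative sketch only (not the report's model): Kuramoto-style coupled
    # phase oscillators, a standard demonstration of how mutual coupling can
    # drive independently oscillating units into synchrony.
    import numpy as np

    def simulate_kuramoto(n=20, coupling=2.0, dt=0.01, steps=5000, seed=0):
        """Integrate d(theta_i)/dt = w_i + (K/n) * sum_j sin(theta_j - theta_i)."""
        rng = np.random.default_rng(seed)
        omega = rng.normal(1.0, 0.1, n)        # natural frequencies (assumed spread)
        theta = rng.uniform(0, 2 * np.pi, n)   # random initial phases
        coherence = []                          # order parameter r: 0 = incoherent, 1 = fully synchronous
        for _ in range(steps):
            phase_diff = theta[None, :] - theta[:, None]   # pairwise theta_j - theta_i
            theta = theta + dt * (omega + (coupling / n) * np.sin(phase_diff).sum(axis=1))
            coherence.append(np.abs(np.exp(1j * theta).mean()))
        return np.array(coherence)

    if __name__ == "__main__":
        r = simulate_kuramoto()
        print(f"initial coherence r = {r[0]:.2f}, final coherence r = {r[-1]:.2f}")

Running the sketch prints a low initial coherence that rises toward 1 as the coupled units lock to a shared rhythm, the generic phenomenon the abstract's oscillator studies address.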

Subject Categories:

  • Cybernetics
  • Acoustics

Distribution Statement:

APPROVED FOR PUBLIC RELEASE