Active Learning for Automatic Audio Processing of Unwritten Languages (ALAPUL)
Technical Report, 01 Jun 2015 to 01 Jul 2016
SRI International, Menlo Park, United States
This work addresses automatic transcription for languages without usable written resources. Previous work has approached this problem with entirely unsupervised methods; our approach instead exploits linguistic and speaker knowledge, which is often available even when text resources are not. We create a framework that benefits from such resources without assuming an orthographic representation and without manual generation of word-level transcriptions. We adapt a universal phone recognizer to the target language and use it to convert audio into a searchable phone string for lexical unit discovery via fuzzy sub-string matching. Linguistic knowledge is used to constrain the phone recognition output. Target-language speakers assist a linguist in creating phonetic transcriptions for adapting the acoustic and language models by re-speaking a small portion of the target-language audio more clearly. We also explore robust features and feature transforms learned with deep auto-encoders for better phone recognition performance, and we pursue iterative learning to improve the system through multiple rounds of user feedback.
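To make the lexical-unit-discovery step concrete, the sketch below shows one standard way to do fuzzy sub-string matching over a phone sequence: a Sellers-style approximate-matching dynamic program in which a match may begin anywhere in the recognized phone string, and spans within a small edit-distance budget of the query are reported. This is a hypothetical illustration under assumed phone-symbol inputs, not the report's actual matching algorithm; the function name and cost threshold are our own.

```python
def fuzzy_substring_hits(query, phones, max_cost=1):
    """Approximate sub-string search over a phone sequence.

    Returns (end_position, cost) pairs for spans of `phones` whose
    edit distance to `query` is at most `max_cost`. A match may start
    anywhere (first DP row is all zeros), so only the end is reported.
    Illustrative sketch, not the report's actual algorithm.
    """
    n, m = len(query), len(phones)
    # prev[j]: best cost of aligning query[:i] to a span ending at phones[:j]
    prev = [0] * (m + 1)  # free start position anywhere in the phone string
    for i in range(1, n + 1):
        curr = [i] + [0] * m
        for j in range(1, m + 1):
            substitute = prev[j - 1] + (query[i - 1] != phones[j - 1])
            delete = prev[j] + 1      # skip a query phone
            insert = curr[j - 1] + 1  # skip a recognized phone
            curr[j] = min(substitute, delete, insert)
        prev = curr
    # prev[j] now holds the best cost of a match ending at position j
    return [(j, cost) for j, cost in enumerate(prev) if j > 0 and cost <= max_cost]


# Example: search for a hypothesized unit "k ae t" in noisy recognizer output,
# tolerating one phone error (here "k ah t" matches with one substitution).
hits = fuzzy_substring_hits("k ae t".split(), "dh ax k ah t s ae t".split())
```

Tolerating a small edit-distance budget is what makes discovery robust to phone recognition errors: the same underlying word recognized as slightly different phone strings in two utterances can still be matched.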