Accession Number : AD1007863


Title :   Speaker-dependent Multipitch Tracking Using Deep Neural Networks


Descriptive Note : Technical Report


Corporate Author : Ohio State University Columbus United States


Personal Author(s) : Liu,Yuzhou ; Wang,DeLiang


Full Text : https://apps.dtic.mil/dtic/tr/fulltext/u2/1007863.pdf


Report Date : 01 Jan 2015


Pagination or Media Count : 22


Abstract : Multipitch tracking is important for speech and signal processing. However, it is challenging to design an algorithm that achieves accurate pitch estimation and correct speaker assignment at the same time. In this paper, we use deep neural networks (DNNs) to model the probabilistic pitch states of two simultaneous speakers. To capture speaker-dependent information, we propose two types of DNNs with different training strategies. The first is trained for each speaker enrolled in the system (speaker-dependent DNN), and the second is trained for each speaker pair (speaker-pair-dependent DNN). Several extensions, including gender-pair-dependent DNNs, speaker adaptation of gender-pair-dependent DNNs, and multi-ratio training, are introduced later to relax constraints. A factorial hidden Markov model (FHMM) then integrates pitch probabilities and generates the most likely pitch tracks with a junction tree algorithm. Experiments show that the proposed methods substantially outperform other speaker-independent and speaker-dependent multipitch trackers on two-speaker mixtures. With multi-ratio training, our methods achieve consistent performance at various energy ratios of the two speakers in a mixture.
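
Illustrative sketch (not from the report) : The pipeline described in the abstract, in which per-frame DNN outputs give pitch-state posteriors for each of two speakers and a factorial HMM selects the most likely joint pitch tracks, can be illustrated with a minimal decoder. All names, array shapes, and the exhaustive joint-state Viterbi search below are assumptions for illustration only; the report performs inference with a junction tree algorithm, which scales far better than this brute-force joint search.

    # Hypothetical sketch: decode two speakers' pitch-state tracks from
    # per-frame posteriors (e.g., DNN outputs) under a factorial-HMM-style
    # model.  An exhaustive joint-state Viterbi is used here as a simple
    # stand-in for the junction tree inference used in the report; it is
    # tractable only for small pitch-state spaces.
    import numpy as np

    def decode_two_speaker_pitch(post1, post2, trans, prior):
        """post1, post2: (T, S) per-frame pitch-state posteriors, one per speaker.
        trans: (S, S) per-speaker pitch-state transition matrix.
        prior: (S,) initial pitch-state distribution.
        Returns the most likely pitch-state sequence for each speaker."""
        T, S = post1.shape
        eps = 1e-12
        # Joint observation score over (speaker-1 state, speaker-2 state).
        log_post = np.log(post1[:, :, None] + eps) + np.log(post2[:, None, :] + eps)
        log_trans = np.log(trans + eps)
        log_prior = np.log(prior + eps)

        delta = log_prior[:, None] + log_prior[None, :] + log_post[0]   # (S, S)
        back = np.zeros((T, S, S, 2), dtype=int)
        for t in range(1, T):
            # Factorial assumption: each speaker's pitch state evolves independently.
            scores = (delta[None, None, :, :]
                      + log_trans.T[:, None, :, None]    # prev s1 -> s1
                      + log_trans.T[None, :, None, :])   # prev s2 -> s2
            flat = scores.reshape(S, S, S * S)
            best = flat.argmax(axis=-1)
            back[t, :, :, 0], back[t, :, :, 1] = np.unravel_index(best, (S, S))
            delta = flat.max(axis=-1) + log_post[t]

        # Backtrack the best joint path.
        s1, s2 = np.unravel_index(delta.argmax(), (S, S))
        track1, track2 = [int(s1)], [int(s2)]
        for t in range(T - 1, 0, -1):
            s1, s2 = back[t, s1, s2]
            track1.append(int(s1))
            track2.append(int(s2))
        return track1[::-1], track2[::-1]

In this sketch, state 0 could denote "unvoiced" and the remaining states quantized pitch frequencies; the speaker-dependent or speaker-pair-dependent DNNs of the report would supply post1 and post2 for a given mixture.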


Descriptors :   hidden markov models , automated speech recognition , artificial neural networks , algorithms , probabilistic models , accuracy


Distribution Statement : APPROVED FOR PUBLIC RELEASE