Noise Perturbation Improves Supervised Speech Separation
Abstract:
Speech separation can be treated as a mask estimation problem, where interference-dominant portions are masked in a time-frequency representation of noisy speech. In supervised speech separation, a classifier is typically trained on a mixture set of speech and noise. It is important to efficiently utilize limited training data so that the classifier generalizes well. When target speech is severely corrupted by nonstationary noise, a classifier tends to mistake noise patterns for speech patterns. Expanding a noise through suitable perturbation during training exposes the classifier to a broader variety of noisy conditions, and hence may improve separation performance. In this study, we examine the effects of three noise perturbations on supervised speech separation at low signal-to-noise ratios (SNRs): noise rate, vocal tract length, and frequency perturbation. We evaluate speech separation performance in terms of classification accuracy, hit minus false-alarm (HIT−FA) rate, and short-time objective intelligibility (STOI). The experimental results show that frequency perturbation performs best among the three in terms of improving speech separation. In particular, we find that frequency perturbation is effective in reducing the error of misclassifying a noise pattern as a speech pattern.
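To make the idea of frequency perturbation concrete, the following is a minimal sketch of one plausible implementation: each time frame's spectral values are read off at randomly displaced frequency positions, so the same noise recording yields slightly different spectral patterns across training epochs. The function name, the `max_shift` parameter, and the per-frame uniform displacement are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def frequency_perturb(spec, max_shift=3, seed=None):
    """Randomly displace frequency bins of a magnitude spectrogram.

    spec:      magnitude spectrogram, shape (freq_bins, frames).
    max_shift: maximum displacement in bins (hypothetical parameter).
    """
    rng = np.random.default_rng(seed)
    n_freq, n_frames = spec.shape
    out = np.empty_like(spec, dtype=float)
    bins = np.arange(n_freq)
    for t in range(n_frames):
        # Draw a random displacement for every bin in this frame and
        # resample the frame at the displaced positions via linear
        # interpolation, clipping to stay inside the valid bin range.
        shift = rng.uniform(-max_shift, max_shift, size=n_freq)
        src = np.clip(bins + shift, 0, n_freq - 1)
        out[:, t] = np.interp(src, bins, spec[:, t])
    return out
```

Applied to the noise signal before mixing, such a transform enlarges the effective noise set without collecting new recordings, which is the mechanism the abstract credits for better generalization.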