The 2016 NIST Speaker Recognition Evaluation
MIT Lincoln Laboratory, Lexington, United States
In 2016, the National Institute of Standards and Technology (NIST) conducted the most recent in an ongoing series of speaker recognition evaluations (SREs) to foster research in robust text-independent speaker recognition and to measure the performance of current state-of-the-art systems. Compared to previous NIST SREs, SRE16 introduced several new aspects: an entirely online evaluation platform, a fixed training data condition, greater variability in test segment duration (uniformly distributed between 10 s and 60 s), the use of non-English (Cantonese, Cebuano, Mandarin, and Tagalog) conversational telephone speech (CTS) collected outside North America, and labeled and unlabeled development (a.k.a. validation) sets for system hyperparameter tuning and adaptation. The new non-English CTS data made SRE16 more challenging than previous SREs due to domain/channel and language mismatches. A total of 66 research organizations from industry and academia registered for SRE16, of which 43 teams submitted 121 valid system outputs that produced scores. This paper presents an overview of the evaluation and an analysis of system performance over all primary evaluation conditions. Initial results indicate that effective use of the development data was essential for the top-performing systems, and that domain/channel, language, and duration mismatch had an adverse impact on system performance.
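The abstract reports that mismatch conditions degraded system performance but does not name the metric used. As a minimal sketch of how such a comparison could be made, the following computes the equal error rate (EER), a standard speaker-recognition summary statistic; it is an illustration only, not the evaluation's official scoring procedure, and the score lists are hypothetical.

```python
def equal_error_rate(target_scores, nontarget_scores):
    """Approximate EER: the operating point where the miss rate
    (targets scored below threshold) equals the false-alarm rate
    (non-targets scored at or above threshold)."""
    thresholds = sorted(set(target_scores) | set(nontarget_scores))
    best_gap, eer = float("inf"), 1.0
    for t in thresholds:
        p_miss = sum(s < t for s in target_scores) / len(target_scores)
        p_fa = sum(s >= t for s in nontarget_scores) / len(nontarget_scores)
        gap = abs(p_miss - p_fa)
        if gap < best_gap:
            best_gap, eer = gap, (p_miss + p_fa) / 2
    return eer

# Hypothetical detection scores for two conditions (higher = more target-like).
matched_eer = equal_error_rate([1.0, 2.0, 3.0], [-3.0, -2.0, -1.0])
mismatched_eer = equal_error_rate([0.0, 1.0, 2.0], [-1.0, 0.0, 1.0])
print(matched_eer, mismatched_eer)
```

Computing a per-condition EER like this (e.g., scores grouped by language or by test-segment duration bucket) is one simple way to quantify the kind of mismatch effects the abstract describes.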