UT Austin in the TREC 2012 Crowdsourcing Track's Image Relevance Assessment Task
The University of Texas at Austin, School of Information
We describe our submission to the Image Relevance Assessment Task (IRAT) at the 2012 Text REtrieval Conference (TREC) Crowdsourcing Track. Four aspects distinguish our approach: (1) an interface for cohesive, efficient topic-based relevance judging and reporting of judgment confidence; (2) a variant of Welinder and Perona's method for online crowdsourcing [17], inferring the quality of the judgments and judges during data collection in order to dynamically optimize data collection; (3) a completely unsupervised approach, using no labeled data for either training or tuning; and (4) automatic generation of individualized error reports for each crowd worker, supporting transparent assessment and education of workers. Our system was built start-to-finish in two weeks, and we collected approximately 44,000 labels for about $40 US.
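To illustrate the flavor of online worker-quality inference described in aspect (2), the sketch below maintains a Beta posterior over each worker's rate of agreement with a consensus label, which a collection loop could consult to route work toward reliable judges. This is a minimal hypothetical illustration, not the authors' actual model (their system adapts Welinder and Perona's approach); all names here are invented.

```python
from collections import defaultdict

class WorkerQualityTracker:
    """Track a Beta posterior over each worker's agreement rate with
    the running consensus label (hypothetical sketch, not the
    system's actual inference procedure)."""

    def __init__(self, alpha0=1.0, beta0=1.0):
        self.alpha0 = alpha0  # prior pseudo-count of agreements
        self.beta0 = beta0    # prior pseudo-count of disagreements
        # worker id -> [observed agreements, observed disagreements]
        self.counts = defaultdict(lambda: [0, 0])

    def update(self, worker, label, consensus):
        """Record whether a worker's label matched the consensus."""
        if label == consensus:
            self.counts[worker][0] += 1
        else:
            self.counts[worker][1] += 1

    def quality(self, worker):
        """Posterior mean agreement rate for this worker."""
        agree, disagree = self.counts[worker]
        return (self.alpha0 + agree) / (
            self.alpha0 + self.beta0 + agree + disagree
        )
```

With a uniform Beta(1, 1) prior, a worker who matches the consensus on 3 of 4 items gets a posterior mean of (1 + 3) / (2 + 4) ≈ 0.667, while an unseen worker stays at the prior mean of 0.5; a dynamic collection policy could then assign more items, or more weight, to high-quality workers.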
- Information Science