Building Reliable Metaclassifiers for Text Learning
Carnegie Mellon University, School of Computer Science, Pittsburgh, PA
Appropriately combining information sources to form a more effective output than any of the individual sources is a broad topic that has been researched in many forms. This dissertation addresses one subfield of this domain: leveraging locality when combining classifiers for text classification. We begin by discussing the role calibrated probabilities play when combining classifiers. Reflecting on the lessons learned from the study of calibration, we go on to define local calibration, dependence, and variance, and discuss the roles they play in classifier combination. Using these insights as motivation, we introduce a series of reliability-indicator variables, which serve as an intuitive abstraction of the input domain to capture the local context related to a classifier's reliability. We then introduce the main methodology of our work, STRIVE, which uses metaclassifiers and reliability indicators to produce improved classification performance. Next, we briefly review online-learning classifier combination algorithms that have theoretical performance guarantees in the online setting and consider adaptations of these to the batch setting as alternative metaclassifiers. We then present empirical evidence that they are weaker in the offline setting than methods which employ standard classification algorithms as metaclassifiers, and we suggest future improvements likely to yield more competitive algorithms. Finally, the combination approaches discussed are broadly applicable to classification problems other than topic classification, and we emphasize this with experiments demonstrating that STRIVE improves the performance of action-item detectors in e-mail, a task where both the semantics and the base classifier performance are significantly different from topic classification.
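To make the stacking idea concrete, the following is a minimal sketch of a STRIVE-style combination: a logistic-regression metaclassifier is trained over the scores of two toy base classifiers plus a reliability indicator. The base classifiers (`kw_score`, `len_score`), the coverage indicator, and all data are invented for illustration; they are not the indicators or learners used in the dissertation.

```python
import math

# Hypothetical base classifiers (illustrative only, not the thesis's models).
KEYWORDS = {"deadline", "urgent", "action"}
VOCAB = {"deadline", "urgent", "action", "meeting", "please"}

def kw_score(doc):
    # Fraction of tokens that are action-item keywords.
    toks = doc.split()
    return sum(t in KEYWORDS for t in toks) / max(len(toks), 1)

def len_score(doc):
    # Naive length heuristic: shorter documents score higher.
    return 1.0 / (1.0 + len(doc.split()))

def coverage(doc):
    # Reliability indicator: how much of the document the keyword
    # model's vocabulary covers (a proxy for local reliability).
    toks = doc.split()
    return sum(t in VOCAB for t in toks) / max(len(toks), 1)

def features(doc):
    # Metaclassifier input: bias, base-classifier outputs, indicator.
    return [1.0, kw_score(doc), len_score(doc), coverage(doc)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_meta(docs, labels, epochs=500, lr=0.5):
    # Plain SGD logistic regression as the metaclassifier.
    w = [0.0] * 4
    for _ in range(epochs):
        for doc, y in zip(docs, labels):
            x = features(doc)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)))
            g = p - y  # gradient of log loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w

def predict(w, doc):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features(doc)))) >= 0.5

docs = ["please send the urgent deadline report",
        "lunch plans for friday",
        "action item assign owners",
        "notes from the offsite"]
labels = [1, 0, 1, 0]
w = train_meta(docs, labels)
```

The metaclassifier is free to down-weight a base classifier exactly in the regions where its indicator signals low reliability, which is the intuition behind using local context rather than a single global combination rule.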
- Administration and Management
- Information Science
- Operations Research