Overview of Results of the MUC-6 Evaluation
NAVAL COMMAND CONTROL AND OCEAN SURVEILLANCE CENTER SAN DIEGO CA
The latest in a series of natural language processing system evaluations was concluded in October 1995 and was the topic of the Sixth Message Understanding Conference (MUC-6) in November. Participants were invited to enter their systems in as many as four different task-oriented evaluations. The Named Entity and Coreference tasks entailed Standard Generalized Markup Language (SGML) annotation of texts and were being conducted for the first time. The other two tasks, Template Element and Scenario Template, were information extraction tasks that followed on from the MUC evaluations conducted in previous years. The evolution and design of the MUC-6 evaluation are discussed in the paper by Grishman and Sundheim in this volume. All except the Scenario Template task are defined independently of any particular domain. This paper surveys the results of the evaluation on each task and, to a more limited extent, across tasks. Discussion of the results for each task is organized generally under the following topics: results on the task as a whole, results on some aspects of the task, and performance on the walkthrough article. The walkthrough article is an article selected from the test set. Participants were asked to analyze their systems' performance on that article and comment on it in their presentations and papers. Permission has been granted by Dow Jones for the full text of the article to be reprinted in these proceedings. It appears in full in the first part of appendix A, and various site reports may contain excerpts from it or annotated versions of it. Also in appendix A are representations of the information contained in the answer key for the walkthrough article for each of the four tasks.