Morphological Cues for Lexical Semantics
ROCHESTER UNIV NY DEPT OF COMPUTER SCIENCE
Most natural language processing tasks require lexical semantic information such as verbal argument structure and selectional restrictions, the corresponding nominal semantic classes, verbal aspectual class, synonym and antonym relationships between words, and various verbal semantic features such as causation and manner. This dissertation addresses two primary questions related to such information: how should one represent it, and how can one acquire it? It is argued that, in order to support inferencing, a representation with well-understood semantics should be used. Standard first-order logic has well-understood semantics, and a multitude of inferencing systems have been implemented for it. However, standard first-order logic, although a good starting point, must be extended before it can efficiently and concisely support all the lexically based inferences needed. Using data primarily from the TRAINS dialogues, the following extensions are argued for: modal operators, predicate modification, restricted quantification, and non-standard quantifiers. These representational tools are present in many systems for sentence-level semantics but have not been discussed in the context of lexical semantics. A number of approaches to automatic acquisition are considered, and it is argued that a surface cueing approach is currently the most promising. Morphological cueing, a type of surface cueing, is introduced: it exploits fixed correspondences between derivational affixes and lexical semantic information. The semantics of a number of affixes are discussed, and data resulting from applying the method to the Brown corpus are presented. Finally, even if lexical semantics could be acquired on a large scale, natural language processing systems would continue to encounter unknown words. Derivational morphology can also be used at run time to help natural language understanding systems deal with unknown words.
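As an illustration of the general idea (not the dissertation's actual affix inventory or implementation), morphological cueing can be sketched as a fixed table of suffix-to-semantics correspondences applied to surface word forms; all suffixes, feature labels, and function names below are hypothetical examples:

```python
# Sketch of morphological cueing: derivational affixes carry fixed lexical
# semantic cues. The suffixes and semantic labels here are illustrative
# placeholders, not the dissertation's inventory.

# Each suffix maps to (part of speech of the derived word, semantic cue).
SUFFIX_CUES = {
    "ize":  ("verb", "causative: 'make X' (e.g. 'formalize')"),
    "able": ("adjective", "potential: 'can be X-ed' (e.g. 'readable')"),
    "er":   ("noun", "agentive: 'one who X-es' (e.g. 'runner')"),
    "ness": ("noun", "quality: 'state of being X' (e.g. 'redness')"),
}

def morphological_cues(word):
    """Return the lexical semantic cues suggested by a word's suffixes."""
    cues = []
    for suffix, (pos, cue) in SUFFIX_CUES.items():
        # Require a plausible stem (at least 3 characters) before the suffix,
        # to avoid spurious matches on short words.
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            cues.append({"word": word, "suffix": suffix, "pos": pos, "cue": cue})
    return cues

if __name__ == "__main__":
    # Known and unknown words alike can be cued at run time.
    for w in ["formalize", "readable", "runner", "redness", "train"]:
        print(w, morphological_cues(w))
```

The same lookup serves both purposes mentioned above: run over a corpus it harvests candidate lexical semantic entries in bulk, and applied at run time it gives a natural language understanding system a first guess about an unknown word.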