Adaptive Language Modeling Using the Maximum Entropy Principle
IBM Thomas J. Watson Research Center, Yorktown Heights, NY
We describe our ongoing efforts at adaptive statistical language modeling. Central to our approach is the Maximum Entropy (ME) Principle, which allows us to combine evidence from multiple sources, such as long-distance triggers and conventional short-distance trigrams. Given consistent statistical evidence, a unique ME solution is guaranteed to exist, and an iterative algorithm exists that is guaranteed to converge to it. Among the advantages of this approach are its simplicity, its generality, and its incremental nature; among its disadvantages are its computational requirements. We describe a succession of ME models, culminating in our current Maximum Likelihood / Maximum Entropy (ML/ME) model. Preliminary results with the latter show a 27% perplexity reduction compared to a conventional trigram model.
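As an illustrative sketch (not the paper's implementation), the iterative algorithm alluded to above, Generalized Iterative Scaling, can be demonstrated on a toy event space. The events, features, and empirical distribution below are all invented for the example:

```python
import math

# Toy event space: (history, next-word) pairs, invented for illustration.
EVENTS = [("the", "cat"), ("the", "dog"), ("a", "cat"), ("a", "dog")]

# Binary constraint features f_i(h, w): here, a unigram-like and a
# bigram-like constraint on the model's expectations.
BASE_FEATURES = [
    lambda h, w: 1.0 if w == "cat" else 0.0,
    lambda h, w: 1.0 if (h, w) == ("the", "dog") else 0.0,
]

# GIS requires the feature total to be constant across events, so append
# a "slack" feature padding every event's total up to C.
C = max(sum(f(*e) for f in BASE_FEATURES) for e in EVENTS)
FEATURES = BASE_FEATURES + [
    lambda h, w: C - sum(f(h, w) for f in BASE_FEATURES)
]

# Empirical probabilities supplying the constraint targets.
P_EMP = {("the", "cat"): 0.3, ("the", "dog"): 0.4,
         ("a", "cat"): 0.2, ("a", "dog"): 0.1}

def model_probs(lam):
    """Log-linear ME model: p(e) proportional to exp(sum_i lam_i * f_i(e))."""
    scores = {e: math.exp(sum(l * f(*e) for l, f in zip(lam, FEATURES)))
              for e in EVENTS}
    z = sum(scores.values())
    return {e: s / z for e, s in scores.items()}

def gis(iterations=100):
    """Generalized Iterative Scaling: nudge each lambda_i so the model's
    feature expectations approach the empirical ones; for consistent
    constraints the updates converge to the unique ME solution."""
    emp = [sum(P_EMP[e] * f(*e) for e in EVENTS) for f in FEATURES]
    lam = [0.0] * len(FEATURES)
    for _ in range(iterations):
        p = model_probs(lam)
        for i, f in enumerate(FEATURES):
            exp_i = sum(p[e] * f(*e) for e in EVENTS)
            lam[i] += math.log(emp[i] / exp_i) / C
    return model_probs(lam)
```

After convergence the model satisfies the stated constraints, while events that share identical feature vectors, here ("the", "cat") and ("a", "cat"), receive equal probability (about 0.25 each): the maximum-entropy tie-break that distributes leftover mass as evenly as the evidence allows.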