Improving LSTM-based Video Description with Linguistic Knowledge Mined from Text
Journal Article - Open Access
National Science Foundation, Arlington, VA, United States
This paper investigates how linguistic knowledge mined from large text corpora can aid the generation of natural language descriptions of videos. Specifically, we integrate both a neural language model and distributional semantics trained on large text corpora into a recent LSTM-based architecture for video description. We evaluate our approach on a collection of YouTube videos as well as two large movie description datasets, showing significant improvements in grammaticality while modestly improving descriptive quality.
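The abstract mentions integrating an external neural language model into an LSTM-based caption decoder. A common way to realize such an integration is late fusion, where the caption model's next-word log-probabilities are combined with the language model's at decoding time. The sketch below is illustrative only, not the paper's exact method; the toy scores, the fusion weight `alpha`, and all function names are assumptions for demonstration.

```python
import math

def log_softmax(scores):
    """Convert raw scores to log-probabilities (numerically stable)."""
    m = max(scores.values())
    z = math.log(sum(math.exp(s - m) for s in scores.values())) + m
    return {w: s - z for w, s in scores.items()}

def fuse(caption_scores, lm_scores, alpha=0.3):
    """Late fusion: add alpha-weighted LM log-probabilities to the
    caption model's log-probabilities for each candidate next word."""
    cap = log_softmax(caption_scores)
    lm = log_softmax(lm_scores)
    return {w: cap[w] + alpha * lm[w] for w in cap}

# Toy next-word scores after a prefix like "a man is ...".
# The caption model is nearly tied between two forms; a text-trained LM
# prefers the grammatical progressive form, nudging the fused choice.
caption_scores = {"playing": 2.0, "plays": 1.9, "guitar": 0.5}
lm_scores = {"playing": 2.5, "plays": 0.5, "guitar": 0.2}

fused = fuse(caption_scores, lm_scores)
best = max(fused, key=fused.get)
```

Here the external LM acts as a grammaticality prior, which matches the abstract's reported effect: large gains in grammaticality with modest gains in descriptive quality.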