Sequence to Sequence - Video to Text
University of Texas at Austin, Austin, United States
Real-world videos often have complex dynamics, and methods for generating open-domain video descriptions should be sensitive to temporal structure and should allow both the input sequence of frames and the output sequence of words to be of variable length. To approach this problem, we propose a novel end-to-end sequence-to-sequence model to generate captions for videos. For this we exploit recurrent neural networks, specifically LSTMs, which have demonstrated state-of-the-art performance in image caption generation. Our LSTM model is trained on video-sentence pairs and learns to associate a sequence of video frames with a sequence of words in order to generate a description of the event in the video clip. Our model is naturally able to learn the temporal structure of the sequence of frames as well as the sequence model of the generated sentences, i.e., a language model. We evaluate several variants of our model that exploit different visual features on a standard set of YouTube videos and two movie description datasets, M-VAD and MPII-MD.
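To make the encoder-decoder idea concrete, here is a minimal PyTorch sketch of an LSTM that reads a variable-length sequence of frame features and then decodes a variable-length word sequence. All names and dimensions are illustrative assumptions; the paper's actual S2VT architecture uses a two-layer LSTM stack that processes frames and words through the same layers, which this simplified single-LSTM sketch does not reproduce.

```python
import torch
import torch.nn as nn

class Seq2SeqCaptioner(nn.Module):
    """Hypothetical sketch: encode CNN frame features with an LSTM,
    then decode a word sequence conditioned on the final video state."""

    def __init__(self, feat_dim=4096, vocab_size=10000, hidden=500):
        super().__init__()
        self.encode_frame = nn.Linear(feat_dim, hidden)  # project per-frame CNN features
        self.embed_word = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.to_vocab = nn.Linear(hidden, vocab_size)    # logits over the vocabulary

    def forward(self, frame_feats, captions):
        # frame_feats: (batch, n_frames, feat_dim); captions: (batch, n_words)
        enc_in = self.encode_frame(frame_feats)
        _, state = self.lstm(enc_in)              # read the whole frame sequence
        dec_in = self.embed_word(captions[:, :-1])
        out, _ = self.lstm(dec_in, state)         # decode, seeded by the video state
        return self.to_vocab(out)                 # predict each next word

# Training would compare these logits against captions[:, 1:] with a
# cross-entropy loss, teaching the model a language model over descriptions.
model = Seq2SeqCaptioner()
logits = model(torch.randn(2, 30, 4096), torch.randint(0, 10000, (2, 12)))
```

Because the LSTM consumes one frame (or word) per step, nothing in this setup fixes the number of input frames or output words in advance, which is the variable-length property the abstract emphasizes.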