Accession Number : AD1016844


Title :   Can Humans Fly? Action Understanding with Multiple Classes of Actors


Descriptive Note : Conference Paper


Corporate Author : State University of New York (SUNY) Buffalo, United States


Personal Author(s) : Xu, Chenliang ; Hsieh, Shao-Hang ; Xiong, Caiming ; Corso, Jason J.


Full Text : https://apps.dtic.mil/dtic/tr/fulltext/u2/1016844.pdf


Report Date : 08 Jun 2015


Pagination or Media Count : 12


Abstract : Can humans fly? Emphatically no. Can cars eat? Again, absolutely not. Yet, these absurd inferences result from the current disregard for particular types of actors in action understanding. There is no work we know of on simultaneously inferring actors and actions in video, not to mention a dataset to experiment with. Our paper hence marks the first effort in the computer vision community to jointly consider various types of actors undergoing various actions. To begin studying the problem, we collect a dataset of 3782 videos from YouTube and label both pixel-level actors and actions in each video. We formulate the general actor-action understanding problem and instantiate it at various granularities: both video-level single- and multiple-label actor-action recognition and pixel-level actor-action semantic segmentation. Our experiments demonstrate that inference jointly over actors and actions outperforms inference independently over them, concluding our argument for the value of explicitly considering various actors in comprehensive action understanding.
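The core claim of the abstract can be illustrated with a minimal sketch. This is not the paper's actual model; it is a hypothetical toy with made-up label sets and scores, showing why maximizing a joint score over valid (actor, action) pairs avoids absurd combinations that independent per-variable argmax can produce.

```python
# Hypothetical illustration (not the paper's method): joint vs. independent
# inference over actor and action labels. Label sets, validity table, and
# scores are all assumptions made up for this sketch.
ACTORS = ["adult", "car", "bird"]
ACTIONS = ["eating", "flying", "rolling"]
VALID = {("adult", "eating"), ("bird", "eating"),
         ("bird", "flying"), ("car", "rolling")}

def independent(actor_scores, action_scores):
    """Pick the actor and the action separately; may yield absurd pairs."""
    actor = max(ACTORS, key=lambda a: actor_scores[a])
    action = max(ACTIONS, key=lambda x: action_scores[x])
    return actor, action

def joint(actor_scores, action_scores):
    """Maximize the summed score over valid (actor, action) pairs only."""
    return max(VALID, key=lambda p: actor_scores[p[0]] + action_scores[p[1]])

actor_scores = {"adult": 0.5, "car": 0.1, "bird": 0.4}
action_scores = {"eating": 0.3, "flying": 0.6, "rolling": 0.1}

print(independent(actor_scores, action_scores))  # ('adult', 'flying') -- absurd
print(joint(actor_scores, action_scores))        # ('bird', 'flying') -- valid
```

Independent inference picks the highest-scoring actor ("adult") and action ("flying") separately, producing the nonsensical "adult flying"; the joint objective is restricted to compatible pairs and selects "bird flying" instead.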


Descriptors :   Joints, Semantics, Video signals, Computer vision


Distribution Statement : APPROVED FOR PUBLIC RELEASE