Accession Number:

ADA543338

Title:

Evidence Feed Forward Hidden Markov Models for Visual Human Action Classification (Preprint)

Descriptive Note:

Preprint

Corporate Author:

ARMY TANK AUTOMOTIVE RESEARCH DEVELOPMENT AND ENGINEERING CENTER WARREN MI

Report Date:

2011-04-12

Pagination or Media Count:

12

Abstract:

Predicting people's actions from visual data is a fairly easy job for people, a harder job for animals, and virtually impossible for machines, although many classification systems can predict a limited number of actions. The difficulty lies in the many different movements people make while performing an action. Take, for example, a visit to the local store. If we were to sit and watch people walk up and down the aisles, we would see a unique style of movement from each person. There may be close similarities, but the actual positions of the body parts in relation to time would all be unique. People tend to merge these together and look at the overall movement, focusing on only one thing at a time, making an assumption, and validating the assumption. Animals do the same thing but with less a priori knowledge, or less understanding, of the movements. Algorithms written for classification of human movement often look at the specific details of movements. It is much harder to generalize an algorithm while testing it on a procedural machine.
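For context on the kind of classifier the title refers to, the sketch below shows a conventional discrete-observation HMM classifier that scores a sequence of quantized pose codes against one model per action and picks the most likely action. It is only a generic illustration under assumed toy parameters, not the evidence feed forward variant this report develops; the model names, alphabet, and probabilities are hypothetical.

    import numpy as np

    def log_forward(obs, log_pi, log_A, log_B):
        # Log-domain forward algorithm: returns log P(obs | model).
        alpha = log_pi + log_B[:, obs[0]]
        for o in obs[1:]:
            # Sum over previous states in log space, then emit the current symbol.
            alpha = np.logaddexp.reduce(alpha[:, None] + log_A, axis=0) + log_B[:, o]
        return np.logaddexp.reduce(alpha)

    def classify(obs, models):
        # Choose the action whose HMM assigns the highest likelihood to the sequence.
        return max(models, key=lambda name: log_forward(obs, *models[name]))

    def make(pi, A, B):
        # Convert initial, transition, and emission probabilities to log space.
        return tuple(np.log(np.asarray(m)) for m in (pi, A, B))

    # Toy two-state HMMs over a three-symbol "pose code" alphabet (illustrative values only).
    models = {
        "walking": make([0.6, 0.4], [[0.7, 0.3], [0.4, 0.6]],
                        [[0.8, 0.1, 0.1], [0.1, 0.8, 0.1]]),
        "waving":  make([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]],
                        [[0.1, 0.1, 0.8], [0.2, 0.2, 0.6]]),
    }

    print(classify([0, 1, 0, 1, 1], models))  # prints "walking" for this toy sequence

In practice the observation symbols would come from quantized visual features extracted per frame; the report's contribution concerns how evidence is fed forward through the model, which is not reflected in this standard formulation.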

Subject Categories:

  • Statistics and Probability

Distribution Statement:

APPROVED FOR PUBLIC RELEASE