Pattern theory for representation and inference of semantic structures in videos

Fillipe D.M. De Souza, Sudeep Sarkar, Anuj Srivastava, Jingyong Su

Research output: Contribution to journal › Article


Abstract

We develop a combinatorial approach to representing and inferring semantic interpretations of video content using tools from Grenander's pattern theory. Semantic structures for video interpretation are formed using generators and bonds, the fundamental units of representation in pattern theory. Generators represent features and ontological items, such as actions and objects, whereas bonds are threads that connect generators while respecting appropriate constraints. The resulting configurations of partially connected generators are termed scene interpretations. Our goal is to parse a given video data set into high-probability configurations. The probabilistic models are imposed using energies that have contributions from both the data (classification scores) and prior information (ontological constraints, co-occurrence frequencies, etc.). The search for optimal configurations is based on a Markov chain Monte Carlo (MCMC) simulated-annealing algorithm that uses simple moves to propose configuration changes and accepts or rejects them according to the posterior energy. In contrast to current graphical methods, this framework does not preselect a neighborhood structure but instead infers it from the data. The proposed framework obtains classification rates 20% higher than a purely machine-learning baseline, despite the artificial insertion of low-level processing errors. In an uncontrolled scenario, video interpretation performance rates are found to be double those of the baseline.
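The accept/reject structure described above can be illustrated with a minimal sketch. This toy example is not the paper's actual model: the generators, bonds, energy terms, and cooling schedule below are all hypothetical stand-ins, chosen only to show the Metropolis-style simulated-annealing loop (simple moves that toggle one bond, acceptance governed by the energy change and a temperature).

```python
import math
import random

# Hypothetical toy instance: a handful of generators with fixed data-term
# energies, and candidate bonds between them carrying prior-term energies.
# Negative prior cost means the bond is favored; positive means penalized.
random.seed(0)

N = 5  # number of generators (hypothetical)
data_cost = [random.uniform(0.0, 1.0) for _ in range(N)]
prior_cost = {(i, j): random.uniform(-1.0, 1.0)
              for i in range(N) for j in range(i + 1, N)}

def energy(bonds):
    """Total energy: data contributions plus priors of the active bonds."""
    return sum(data_cost) + sum(prior_cost[b] for b in bonds)

def propose(bonds):
    """Simple move: toggle one randomly chosen bond on or off."""
    b = random.choice(list(prior_cost))
    new = set(bonds)
    new.symmetric_difference_update({b})
    return new

def anneal(steps=2000, t0=1.0, cooling=0.995):
    bonds = set()          # start from the empty configuration
    t = t0
    e = energy(bonds)
    for _ in range(steps):
        cand = propose(bonds)
        e_cand = energy(cand)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(-dE / t).
        if e_cand <= e or random.random() < math.exp((e - e_cand) / t):
            bonds, e = cand, e_cand
        t *= cooling       # geometric cooling schedule
    return bonds, e

best_bonds, best_energy = anneal()
print(sorted(best_bonds), round(best_energy, 3))
```

Because each bond here contributes independently to the energy, the low-temperature end of the run behaves near-greedily and settles on the bonds with negative prior cost; the actual configurations in the paper are combinatorial structures whose moves and energies are far richer than this sketch.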

Original language: English
Pages (from-to): 41-51
Number of pages: 11
Journal: Pattern Recognition Letters
Volume: 72
DOIs
State: Published - Mar 1 2016

Keywords

  • Activity recognition
  • Compositional approach
  • Graphical methods
  • Pattern theory
  • Video interpretation
