A relevancy, hierarchical and contextual maximum entropy framework for a data-driven 3D scene generation

Mesfin Dema, Hamed Sari-Sarraf

Research output: Contribution to journal › Article

Abstract

We introduce a novel Maximum Entropy (MaxEnt) framework that can generate 3D scenes by incorporating objects' relevancy, hierarchical and contextual constraints in a unified model. This model is formulated as a Gibbs distribution, under the MaxEnt framework, that can be sampled to generate plausible scenes. Unlike existing approaches, which represent a given scene by a single And-Or graph, the relevancy constraint (defined as the frequency with which a given object exists in the training data) requires our approach to sample from multiple And-Or graphs, allowing variability in the objects' existence across synthesized scenes. Once an And-Or graph is sampled from the ensemble, the hierarchical constraints are employed to sample the Or-nodes (style variations), and the contextual constraints are subsequently used to enforce the corresponding relations that must be satisfied by the And-nodes. To illustrate the proposed methodology, we use desk scenes composed of objects whose existence, styles and arrangements (position and orientation) can vary from one scene to the next. The relevancy, hierarchical and contextual constraints are extracted from a set of training scenes and utilized to generate plausible synthetic scenes that in turn satisfy these constraints. After applying the proposed framework, scenes that are plausible representations of the training examples are automatically generated.
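The abstract describes sampling scene arrangements from a Gibbs distribution p(x) ∝ exp(-E(x)), where the energy E encodes contextual constraints on object placement. As a minimal illustrative sketch (not the paper's implementation), the snippet below runs a Metropolis sampler over 2D object positions with a hypothetical pairwise-spacing energy standing in for the learned contextual constraints; the constraint form, weights and step sizes are all assumptions for illustration.

```python
import math
import random

def energy(positions, target_dist=0.3, weight=5.0):
    """Hypothetical contextual energy: penalize deviation of each
    pairwise distance from a preferred spacing between objects."""
    e = 0.0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            dx = positions[i][0] - positions[j][0]
            dy = positions[i][1] - positions[j][1]
            d = math.hypot(dx, dy)
            e += weight * (d - target_dist) ** 2
    return e

def metropolis_sample(n_objects=3, steps=2000, step_size=0.05, seed=0):
    """Draw an arrangement from p(x) ∝ exp(-E(x)) via Metropolis
    updates on object positions confined to the unit square."""
    rng = random.Random(seed)
    pos = [[rng.random(), rng.random()] for _ in range(n_objects)]
    e = energy(pos)
    for _ in range(steps):
        i = rng.randrange(n_objects)          # pick one object to move
        old = pos[i][:]
        pos[i][0] = min(1.0, max(0.0, pos[i][0] + rng.gauss(0, step_size)))
        pos[i][1] = min(1.0, max(0.0, pos[i][1] + rng.gauss(0, step_size)))
        e_new = energy(pos)
        # Metropolis rule: always accept downhill moves; accept uphill
        # moves with probability exp(e - e_new).
        if e_new > e and rng.random() >= math.exp(e - e_new):
            pos[i] = old                      # reject: restore position
        else:
            e = e_new                         # accept
    return pos, e

arrangement, final_e = metropolis_sample()
```

In the paper's full pipeline, an And-Or graph is first sampled according to the relevancy constraints and Or-nodes are resolved by the hierarchical constraints; a sampler of this general kind would then arrange the chosen objects under the contextual constraints.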

Original language: English
Pages (from-to): 2568-2591
Number of pages: 24
Journal: Entropy
Volume: 16
Issue number: 5
DOIs
State: Published - May 2014

Keywords

  • 3D scene generation
  • And-Or graphs
  • Maximum entropy