Appearance Modelling for 4D Multi-View Representations


14/12/2016, 09:45 - Vagia Tsiminaki - Grand Amphi de l'INRIA Rhône-Alpes, Montbonnot

Recording datasets of subjects performing a variety of movements is now routine. The emergence of multi-view capture systems has yielded a tremendous amount of video sequences. The task of capturing spatio-temporal models from real-world imagery should arguably benefit from this wealth of visual information. We now have all the resources needed to achieve realistic representations of dynamic scenes.
Yet, despite the great progress of geometric modelling, visual quality can still be improved. The appearance aspect has not been fully explored. For spatio-temporal models to meet the demands of real-world applications, high-quality appearance representations must also be provided. This is the topic of my talk. The key observation is that there is redundant visual information that should be optimally exploited.
In particular, I will explain how we can represent the appearance globally and how, through a top-down scheme, we simulate the generation of the observed images from this common appearance. I will present the super-resolution principle and how it is integrated into the above image generation model to uncover hidden visual details. I will show that, through Bayesian inference, we obtain a temporally coherent, super-resolved, view-independent common appearance representation. Finally, I will address the problem of modelling large dynamic appearance variability in cases where the subject undergoes multiple movements.
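To make the top-down image generation idea concrete, the sketch below shows one common way such a linear forward model is written: each observed view is produced from a shared high-resolution appearance map by a per-view warp, a blur, and a downsampling, and the appearance is then recovered by least-squares inversion of that model. The operators, image sizes, warps, and the simple gradient solver are illustrative assumptions, not the method presented in the talk.

```python
# Minimal sketch (not the thesis implementation) of a linear image generation
# model for super-resolved appearance estimation:
#     I_v = D * B * W_v * T + noise
# where T is the common high-resolution appearance, W_v a per-view warp,
# B a blur (PSF), and D a downsampling. All sizes and operators are toy
# placeholders chosen for readability.
import numpy as np

def downsample(img, s=2):
    """Box-average downsampling by an integer factor s (stand-in for D)."""
    h, w = img.shape
    return img[:h - h % s, :w - w % s].reshape(h // s, s, w // s, s).mean(axis=(1, 3))

def blur(img, k=1):
    """Simple box blur of radius k (stand-in for the camera PSF B)."""
    out = np.zeros_like(img)
    padded = np.pad(img, k, mode="edge")
    for dy in range(2 * k + 1):
        for dx in range(2 * k + 1):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (2 * k + 1) ** 2

def warp(img, shift):
    """Integer-pixel translation as a toy stand-in for the per-view warp W_v."""
    return np.roll(img, shift, axis=(0, 1))

def forward(T, shift):
    """Image generation for one view: warp, then blur, then downsample."""
    return downsample(blur(warp(T, shift)))

# Synthetic ground-truth appearance and observations from two hypothetical views.
rng = np.random.default_rng(0)
T_true = rng.random((32, 32))
views = [(0, 0), (1, 2)]  # hypothetical per-view warps
obs = [forward(T_true, s) + 0.01 * rng.standard_normal((16, 16)) for s in views]

# Recover T by gradient descent on the data term sum_v ||I_v - A_v T||^2.
# A full Bayesian treatment would add an explicit image prior; here only the
# data term is inverted, with approximate adjoints of D, B, and W_v.
T = np.full_like(T_true, 0.5)
for _ in range(300):
    grad = np.zeros_like(T)
    for shift, I_v in zip(views, obs):
        residual = forward(T, shift) - I_v
        up = np.kron(residual, np.ones((2, 2))) / 4.0     # adjoint of the box-average D
        grad += warp(blur(up), (-shift[0], -shift[1]))    # approximate B^T W_v^T
    T -= 1.0 * grad
print("reconstruction RMSE:", np.sqrt(np.mean((T - T_true) ** 2)))
```

The point of the sketch is only the structure of the model: all views share one latent appearance, so every observation constrains the same unknowns, which is what makes super-resolved, view-independent estimation possible.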

Supervisors:

  • Mr Edmond Boyer (Senior Research Scientist - Inria Grenoble)
  • Mr Jean-Sébastien Franco (Associate Professor - Grenoble INP)

Reviewers:

  • Ms Hyewon Seo (Research Scientist - CNRS, University of Strasbourg)
  • Mr Gabriel Brostow (Associate Professor - University College London (UCL))

Examiners:

  • Ms Luce Morin (Professor - University of Rennes)