Surface Motion Capture Animation
6/12/2017 - 10:00 - Adnane Boukhayma - Room F107, INRIA Rhône-Alpes, Montbonnot
As a new alternative to standard motion capture, 4D surface capture is motivated by the increasing demand from media production for highly realistic 3D content. Such data provides the full shape, appearance, and kinematic information of the dynamic object of interest. In this work we address several tasks related to the acquisition and exploitation of 4D data obtained from multi-view videos, with an emphasis on corpora of moving subjects. Some of these problems have already received a great deal of interest from the graphics and vision communities, but a number of challenges remain open. Specifically, we address example-based animation synthesis, appearance modelling, semantic motion transfer, and variation synthesis.

We first propose a method to generate animations using video-based mesh sequences of elementary movements of a shape. New motions that satisfy high-level user-specified constraints are built by recombining and interpolating frames from the observed mesh sequences. Our method improves the synthesis locally, through optimized interpolated transitions, and globally, through an optimal organizing structure that we call the essential graph.

We then address the problem of building efficient appearance representations of shapes observed from multiple viewpoints and across several movements. We propose a per-subject representation that identifies the underlying manifold structure of the appearance information associated with a shape. The resulting representation encodes appearance variability due to viewpoint and illumination with Eigen textures, and variability due to local inaccuracies in the geometric model with Eigen warps. Besides being compact, such decompositions also allow for appearance interpolation and appearance completion.

We additionally address the problem of transferring motion between captured 4D models.
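The Eigen-texture idea described above rests on a principal component analysis of a subject's observed texture maps: appearance variability across viewpoints and illumination is captured by a small linear basis. The following is a minimal sketch on synthetic data; all sizes, variable names, and the noise level are illustrative assumptions, not values from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting (assumed sizes): 40 observed texture maps, each flattened
# to 256 texels, generated from a low-dimensional appearance manifold.
n_frames, n_texels = 40, 256
basis_rank = 5                        # number of Eigen textures kept

latent = rng.normal(size=(n_frames, basis_rank))
modes = rng.normal(size=(basis_rank, n_texels))
textures = latent @ modes + 0.01 * rng.normal(size=(n_frames, n_texels))

# PCA via SVD of the mean-centered texture stack.
mean_tex = textures.mean(axis=0)
U, S, Vt = np.linalg.svd(textures - mean_tex, full_matrices=False)
eigen_textures = Vt[:basis_rank]      # compact appearance basis

# Encode one observed texture as a few coefficients, then decode it.
coeffs = (textures[0] - mean_tex) @ eigen_textures.T
reconstruction = mean_tex + coeffs @ eigen_textures

err = np.linalg.norm(reconstruction - textures[0]) / np.linalg.norm(textures[0])
print(f"relative reconstruction error: {err:.4f}")
```

Because each texture is now a short coefficient vector, interpolating or completing appearance amounts to operating on coefficients in this basis rather than on raw texel data.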
Given 4D training sets for two subjects, for which a sparse set of semantically corresponding key-poses is known, our method transfers a newly captured motion from one subject to the other. It contributes a new transfer model based on non-linear pose and displacement interpolation that builds on Gaussian process regression.

Finally, we propose a data-driven solution for generating variations of captured 4D models, aimed at automatic 4D dataset augmentation and improved realism. Given a few 4D models representing movements of the same type, our method builds a probabilistic low-dimensional embedding of shape poses using Gaussian Process Dynamical Models, and novel motion variants are obtained by sampling trajectories from this manifold with Markov chain Monte Carlo. We can synthesise an unlimited number of variations of any of the input movements, as well as any blended version of them. The output variations are statistically similar to the input movements, yet slightly different in poses and timings.
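Gaussian process regression, on which the transfer model builds, interpolates non-linearly between sparse training correspondences. The sketch below shows the core mechanism on one-dimensional toy "pose parameters": a handful of source/target key-pose pairs train a GP with an RBF kernel, which then predicts the target pose for any new source pose. The data, kernel length-scale, and the sine correspondence are all hypothetical illustrations, not the thesis's actual pose representation.

```python
import numpy as np

def rbf_kernel(a, b, length=0.5):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

# Toy key-pose correspondences: source pose value -> target pose value.
src_keys = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
tgt_keys = np.sin(src_keys)           # assumed non-linear correspondence

# GP regression weights (noise term keeps the solve well-conditioned).
noise = 1e-6
K = rbf_kernel(src_keys, src_keys) + noise * np.eye(len(src_keys))
alpha = np.linalg.solve(K, tgt_keys)

def transfer(src_pose):
    """Predict the target subject's pose for a new source pose."""
    k_star = rbf_kernel(np.atleast_1d(float(src_pose)), src_keys)
    return (k_star @ alpha)[0]

print(transfer(0.5))    # recovers the training key-pose almost exactly
print(transfer(0.25))   # smooth non-linear interpolation between keys
```

In the actual method the inputs and outputs are high-dimensional pose and displacement parameters rather than scalars, but the interpolation principle is the same.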
Supervisors:
- Mr Edmond Boyer (Research Director, INRIA)
Rapporteurs:
- Mr Adrian Hilton (Professor, University of Surrey)
- Mr Gerard Pons Moll (Research Director, Max Planck Institute for Intelligent Systems)
Examiners:
- Ms Céline Loscos (Professor, Université de Reims)
- Mr Jean Sebastien Franco (Professor, Grenoble INP)