Audiovisual Capabilities in the NAO Humanoid Robot

Language: French

Speciality: Mathematics and Computer Science

14/06/2013 - 10:00, Mr Jordi Sanchez-Riera (Université de Grenoble), Room A104, INRIA Rhône-Alpes, Montbonnot

Keywords:
  • action recognition
  • audiovisual fusion
In this thesis we plan to investigate the complementarity of auditory and visual sensory data for building a high-level interpretation of a scene. The audiovisual (AV) input received by the robot is a function of both the external environment and of the robot's actual localization, which is closely related to its actions. Current research in AV scene analysis has tended to focus on fixed perceivers. However, psychophysical evidence suggests that humans use small head and body movements in order to optimize the location of their ears with respect to the source. Similarly, by walking or turning, the robot may be able to improve the incoming visual data. For example, in binocular perception it is desirable to reduce the viewing distance to an object of interest, as this allows the 3D structure of the object to be analyzed at a higher depth resolution.
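The depth-resolution argument can be made concrete with the standard triangulation relation for a rectified stereo pair; this is a textbook result, not necessarily the exact formulation used in the thesis, and the symbols f, B, d are generic:

    Z = \frac{f B}{d}, \qquad \delta Z \approx \frac{Z^{2}}{f B}\, \delta d

where Z is the depth, f the focal length (in pixels), B the stereo baseline, d the disparity, and \delta d the disparity estimation error. Since the depth uncertainty \delta Z grows quadratically with Z for a fixed \delta d, halving the viewing distance reduces the depth uncertainty by roughly a factor of four, which is why moving the robot closer to an object of interest improves the recoverable 3D structure.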

Director:

  • Mr Radu Horaud (Research Director - INRIA)

Reviewers:

  • Mr Cristian Sminchisescu (Professor - Lund University)
  • Mr Vaclav Hlavac (Professor - CTU Prague)

Examiners:

  • Mr Rodolphe Gelin (Doctor - Aldebaran Robotics)
  • Mr Peter Sturm (Professor - INRIA)