Nonsmooth Optimization for Statistical Learning with Structured Matrix Regularization


Speciality: Applied Mathematics

23/06/2017 at 14:30, Mr Federico Pierucci, Grand Amphi de l'INRIA Rhône-Alpes, Montbonnot

Keywords:
  • conditional gradient
  • smoothing
  • nuclear-norm
  • machine learning
  • mathematical optimization
Abstract:

Training machine learning methods boils down to solving optimization problems whose objective functions often decompose into two parts: a) the empirical risk, built upon the loss function, whose shape is determined by the performance metric and the noise assumptions; b) the regularization penalty, built upon a norm or a gauge function, whose structure is determined by the prior information available for the problem at hand. Common loss functions, such as the hinge loss for binary classification, or more advanced loss functions, such as the one arising in classification with a reject option, are nonsmooth. Sparse regularization penalties, such as the (vector) L1 penalty or the (matrix) nuclear-norm penalty, are also nonsmooth. The goal of this thesis is to study doubly nonsmooth learning problems (with nonsmooth loss functions and nonsmooth regularization penalties) and first-order optimization algorithms that leverage the composite structure of nonsmooth objectives.
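For concreteness, a generic instance of such a composite objective, written here in our own notation rather than the thesis's, reads:

    \min_{W \in \mathbb{R}^{d_1 \times d_2}} \;
      \frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i, \langle X_i, W \rangle\big)
      \;+\; \lambda \, \Omega(W)

where the first term is the empirical risk for a loss \ell (e.g., the hinge loss), \Omega is a norm or gauge penalty (e.g., the nuclear norm), and \lambda > 0 balances the two terms; both terms may be nonsmooth.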

In the first chapter, we introduce new regularization penalties, called the group Schatten norms, which generalize the standard Schatten norms to block-structured matrices. We establish the main properties of the group Schatten norms using tools from convex analysis and linear algebra; in particular, we retrieve some convex envelope properties. We discuss several potential applications of the group nuclear norm, in collaborative filtering, database compression, and multi-label image tagging.
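One natural way to write such a penalty, given here as an illustrative guess rather than the thesis's exact definition: for a matrix W partitioned into blocks W_g, g \in G, compose an outer \ell_q-norm with inner Schatten p-norms of the blocks,

    \|W\|_{p,q} = \Big( \sum_{g \in G} \|W_g\|_{S_p}^{q} \Big)^{1/q},
    \qquad
    \|A\|_{S_p} = \Big( \sum_{i} \sigma_i(A)^{p} \Big)^{1/p},

where \sigma_i(A) are the singular values of A. Taking p = q = 1 yields a group nuclear norm, which encourages each block to be low-rank.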
In the second chapter, we present a survey of smoothing techniques that make it possible to apply first-order optimization algorithms, originally designed for learning problems with smooth loss functions, to nonsmooth losses. We also show how smoothing can be applied to the loss function corresponding to the top-k accuracy, used in ranking and multi-class classification problems; a minimal illustration of the smoothing idea is sketched below. We outline first-order algorithms that can be used in combination with the smoothing technique: i) conditional gradient algorithms; ii) proximal gradient algorithms; iii) incremental gradient algorithms.
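As a minimal sketch of the smoothing idea on a simpler loss than top-k (the function and parameter names are ours, not the thesis's), here is the Moreau envelope of the scalar hinge loss in Python:

    import numpy as np

    def smoothed_hinge(z, mu=0.1):
        """Moreau envelope of the hinge loss h(z) = max(0, 1 - z).

        The envelope is (1/mu)-smooth and stays uniformly within mu/2
        of h, so mu trades off smoothness against approximation error.
        """
        z = np.asarray(z, dtype=float)
        return np.where(z >= 1.0, 0.0,
               np.where(z >= 1.0 - mu,
                        (1.0 - z) ** 2 / (2.0 * mu),  # quadratic cap near the kink
                        1.0 - z - mu / 2.0))          # shifted linear part

For example, with mu = 0.1, smoothed_hinge(np.array([-1.0, 0.95, 2.0])) returns [1.95, 0.0125, 0.0], matching the hinge loss up to the mu/2 shift on the linear part.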
In the third chapter, we further study conditional gradient algorithms for solving doubly nonsmooth optimization problems. We show that adaptive smoothing combined with the standard conditional gradient algorithm yields new conditional gradient algorithms with the expected theoretical convergence guarantees. We present promising experimental results in collaborative filtering for movie recommendation and in image categorization.
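A hedged sketch of how such an algorithm can look over a nuclear-norm ball: the smoothing parameter is decreased along the iterations while the standard Frank-Wolfe update is applied to the smoothed objective. The mu0 / sqrt(t+1) schedule and the grad_smooth callback are our assumptions for illustration, not necessarily the schedule analyzed in the thesis.

    import numpy as np

    def smoothed_conditional_gradient(grad_smooth, shape, radius=1.0,
                                      mu0=1.0, iters=200):
        """Conditional gradient (Frank-Wolfe) over a nuclear-norm ball,
        with a smoothing parameter decreased along the iterations.

        grad_smooth(W, mu) must return the gradient of the mu-smoothed
        objective at W.
        """
        W = np.zeros(shape)
        for t in range(iters):
            mu = mu0 / np.sqrt(t + 1.0)      # adaptive smoothing schedule
            G = grad_smooth(W, mu)
            # Linear minimization oracle on {S : ||S||_* <= radius}:
            # a rank-one matrix built from the top singular pair of the
            # gradient. (A full SVD is used for clarity; in practice a
            # power method on the leading singular pair suffices.)
            U, _, Vt = np.linalg.svd(G, full_matrices=False)
            S = -radius * np.outer(U[:, 0], Vt[0, :])
            gamma = 2.0 / (t + 2.0)          # standard open-loop step size
            W = (1.0 - gamma) * W + gamma * S
        return W

The iterate after t steps is a combination of t rank-one matrices, which is what makes conditional gradient methods attractive for nuclear-norm-constrained problems such as collaborative filtering.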

President:

Mr Massih-Reza Amini (Université Grenoble Alpes)

Directors:

  • Mr Anatoli Iouditski (Université Grenoble Alpes)
  • Mr Jérôme Malick (CNRS)
  • Mr Zaid Harchaoui (University of Washington)

Rapporteurs:

  • Mr Stéphane Chrétien (National Physical Laboratory, Teddington, Middlesex, UK)
  • Mr Alexander Nazin (Institute of Control Sciences RAS, Moscow, Russia)

Examiners:

  • Mr Joseph Salmon (Telecom ParisTech, France)
  • Ms Nelly Pustelnik (CNRS, ENS Lyon)