Risk-Averse Optimization: Models, Algorithms and Applications in Machine Learning

Speciality: Applied Mathematics (Mathématiques Appliquées)

30/11/2021 - 14:00 - Yassine Laguel (Université Grenoble Alpes) - Amphitheater, Maison Jean Kuntzmann

This thesis deals with optimization under uncertainty, which has a long history in operations research and mathematical optimization. The field is currently challenged by applications in artificial intelligence and data science, where risk management has become a crucial issue. In this thesis, we consider nonsmooth optimization problems involving risk measures and coming from statistical learning applications. We pay special attention to the risk measure called the superquantile (also known as the "Conditional Value-at-Risk") and we show how, in various contexts, it may enforce robustness in decision-making under uncertainty.
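
For reference, the superquantile of a random loss X at level p ∈ (0,1) is recalled here through the classical Rockafellar–Uryasev variational formula, which underlies the oracles and reformulations described below:

    \[
    \bar{Q}_p(X) \;=\; \min_{c \in \mathbb{R}} \left\{ c + \frac{1}{1-p}\, \mathbb{E}\big[\max(X - c,\, 0)\big] \right\},
    \]

with the minimum attained at the p-quantile (the value-at-risk) of X.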

First, we consider convex risk measures admitting a representation in terms of superquantiles. We derive first-order oracles with optimal computational complexity. These approximate oracles involve different smoothing techniques, for which we propose a unified analysis. We also provide an efficient implementation of these oracles, coupled with a series of classical optimization methods, in open-source Python software. We show empirically, on classification and regression tasks, that the resulting predictions are robust to data shifts.
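
As a minimal illustration of such an oracle (a sketch with hypothetical names, not the thesis's actual implementation), the empirical superquantile and a subgradient can be computed by sorting the per-sample losses:

    import numpy as np

    def superquantile_oracle(losses, p):
        # Empirical superquantile (CVaR) at level p in (0,1), together with
        # a subgradient expressed as sample weights:
        #   d/dx CVaR_p = sum_i weights[i] * grad L_i(x).
        losses = np.asarray(losses, dtype=float)
        n = losses.size
        order = np.argsort(losses)[::-1]       # sample indices, worst loss first
        k = (1.0 - p) * n                      # tail mass, in number of samples
        full = int(np.floor(k))                # samples carrying full weight 1/k
        weights = np.zeros(n)
        weights[order[:full]] = 1.0 / k
        if full < n:
            weights[order[full]] = (k - full) / k   # fractional boundary sample
        value = float(weights @ losses)
        return value, weights

This nonsmooth oracle is only a starting point: the smoothing techniques analyzed in the thesis replace these piecewise-constant weights with smooth approximations, whose exact schemes are not reproduced here.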

We then consider chance-constrained optimization problems. We propose a reformulation of these problems as bilevel programs involving the superquantile. We introduce a (semi-)exact penalization for this reformulation, which we treat with a bundle method. We implement our bilevel approach in open-source Python software, which we illustrate on nonconvex problems.
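
To sketch the bilevel structure (without claiming the exact formulation used in the thesis): a chance constraint \(\mathbb{P}[g(x,\xi) \le 0] \ge p\) holds exactly when the p-quantile of g(x,ξ) is nonpositive, and under standard continuity assumptions this quantile solves the Rockafellar–Uryasev problem recalled above, so that the chance-constrained program can be written as

    \[
    \min_{x} \; f(x)
    \quad \text{s.t.} \quad c^\star(x) \le 0,
    \qquad
    c^\star(x) \in \operatorname*{argmin}_{c \in \mathbb{R}}
    \left\{ c + \frac{1}{1-p}\, \mathbb{E}\big[\max(g(x,\xi) - c,\, 0)\big] \right\}.
    \]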

Finally, we investigate the use of the superquantile in federated learning. We consider the case of users with heterogeneous data distributions and show how the superquantile yields better performance on non-conforming users. We propose an algorithm adapted to the constraints of federated learning in terms of communication and data privacy. We prove its convergence in the convex case by controlling both the drift induced by local SGD steps and the dynamic reweighting induced by superquantiles. We also provide an in-depth numerical study of our algorithm and compare its performance with several established baselines.
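
As a schematic sketch of the reweighting step only (the names below are illustrative, and the actual algorithm in the thesis additionally interleaves local SGD steps and respects communication and privacy constraints), a server round could reweight client updates by the superquantile weights of their reported losses:

    import numpy as np

    def superquantile_weights(losses, p):
        # Mass 1/(n(1-p)) on the worst (1-p) fraction of clients,
        # with a fractional weight on the boundary client.
        n = losses.size
        order = np.argsort(losses)[::-1]
        k = (1.0 - p) * n
        full = int(np.floor(k))
        w = np.zeros(n)
        w[order[:full]] = 1.0 / k
        if full < n:
            w[order[full]] = (k - full) / k
        return w

    def server_round(client_losses, client_updates, p):
        # Hypothetical aggregation: the global update is the superquantile-
        # weighted average of client updates, so the tail of worst-off
        # (non-conforming) clients drives the new model.
        w = superquantile_weights(np.asarray(client_losses, dtype=float), p)
        return sum(wi * ui for wi, ui in zip(w, client_updates))

Clients with the largest losses thus receive the dominant weight, which is what steers the aggregate model toward non-conforming users.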

President:

  • Nadia Brauner

Director:

  • Jérôme Malick

Rapporteurs:

  • Claudia Sagastizábal
  • Joseph Salmon

Examiners:

  • Alexandre d'Aspremont
  • Mert Gürbüzbalaban
  • Panayotis Mertikopoulos