17/03/2022 - 14:00 Thomas Moreau (Inria Saclay) Salle 106
In recent years, bi-level optimization -- solving an optimization problem that depends on the result of another optimization problem -- has attracted much interest in the machine learning community. A core question for such problems is the estimation of the gradient when the inner problem is not solved exactly. While some fundamental results exist, there is still a gap between what is used in practice and our theoretical understanding of such problems. In this talk, I will review different use cases where this type of problem arises, such as hyper-parameter optimization and dictionary learning, and give an overview of recent advances on how to solve these problems efficiently.
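To make the bi-level setting concrete, here is a minimal sketch (not from the talk) of the hyper-parameter optimization use case the abstract mentions: the inner problem is a ridge regression with regularization parameter `lam`, the outer problem is a validation loss, and the hypergradient with respect to `lam` is obtained by implicit differentiation of the inner optimality condition. All names, dimensions, and the finite-difference check are illustrative assumptions.

```python
import numpy as np

# Synthetic train / validation split (illustrative data, not from the talk)
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 5)), rng.standard_normal(20)
Av, bv = rng.standard_normal((10, 5)), rng.standard_normal(10)
lam = 0.5

def inner_solution(lam):
    # Inner problem: min_theta ||A theta - b||^2 + lam ||theta||^2,
    # solved exactly via its normal equations (A^T A + lam I) theta = A^T b.
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def outer_value(lam):
    # Outer problem: validation loss evaluated at the inner solution.
    r = Av @ inner_solution(lam) - bv
    return r @ r

def hypergradient(lam):
    # Implicit differentiation: differentiating the optimality condition
    # (A^T A + lam I) theta* = A^T b with respect to lam gives
    #   d theta* / d lam = -(A^T A + lam I)^{-1} theta*.
    theta = inner_solution(lam)
    dtheta = -np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), theta)
    grad_f = 2 * Av.T @ (Av @ theta - bv)  # outer gradient w.r.t. theta
    return grad_f @ dtheta                 # chain rule: df/dlam

# Sanity check against central finite differences on the outer value
eps = 1e-6
fd = (outer_value(lam + eps) - outer_value(lam - eps)) / (2 * eps)
print(hypergradient(lam), fd)
```

When the inner problem is only solved approximately (e.g. by a few gradient steps instead of the closed form above), `theta` carries an error that propagates into the hypergradient, which is exactly the gap between practice and theory the talk addresses.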