The (Fused) Gromov-Wasserstein framework as a tool for learning on structured data
Seminar Données et Aléatoire Théorie & Applications
9/06/2022 - 14:00 Titouan Vayer (ENS Lyon) Room 106
Originally introduced by Mémoli, the Gromov-Wasserstein (GW) distance has gained increasing interest in recent years for machine learning with structured data. Thanks to its ability to compare graphs of various sizes by finding relations between their nodes, this optimal transport distance is particularly suitable as a loss function in learning problems with graphs. In this presentation, I will show how GW and related metrics define a versatile framework that can handle both supervised and unsupervised learning on graphs. More precisely, I will introduce the Fused Gromov-Wasserstein (FGW) distance, which exploits both the feature and the structure information of graphs. Building on its metric properties, I will discuss applications of FGW to barycenters of multiple graphs and to graph reduction. In the second part, I will present how (F)GW can be used as a data-fitting term for unsupervised graph representation learning. Taking inspiration from dictionary learning (DL), a key tool for representation learning that explains the data as a linear combination of a few basic elements, I will propose an (F)GW-based algorithm for online graph dictionary learning and demonstrate its interest for online graph subspace estimation and tracking. Finally, time permitting, I will introduce a new GNN layer based on FGW for supervised graph classification.
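For reference, one standard way to write the two distances mentioned in the abstract is the following (a sketch with the usual conventions: intra-graph cost matrices $C_1, C_2$, node weights $\mu, \nu$, node features $a_i, b_j$ with a feature metric $d$, and a trade-off parameter $\alpha \in [0,1]$; the exact normalizations used in the talk may differ):

$$\mathrm{GW}(C_1, C_2, \mu, \nu) \;=\; \min_{\pi \in \Pi(\mu,\nu)} \;\sum_{i,j,k,l} \big| C_1(i,k) - C_2(j,l) \big|^2 \, \pi_{i,j}\, \pi_{k,l},$$

$$\mathrm{FGW}_{\alpha}(\mu, \nu) \;=\; \min_{\pi \in \Pi(\mu,\nu)} \;\sum_{i,j,k,l} \Big( (1-\alpha)\, d(a_i, b_j)^q \;+\; \alpha\, \big| C_1(i,k) - C_2(j,l) \big|^q \Big)\, \pi_{i,j}\, \pi_{k,l},$$

where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$. Setting $\alpha = 1$ recovers (a power of) GW, while $\alpha = 0$ recovers a Wasserstein distance on the features alone.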
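As a concrete toy illustration of the GW objective (a minimal, stdlib-only sketch, not the entropic or conditional-gradient solvers used in practice), for two graphs of equal size with uniform node weights one can brute-force the objective over permutation couplings. Restricting to permutations yields an upper bound on the true GW value, which is tight in many small examples:

```python
import itertools

def gw_discrepancy(C1, C2):
    """Squared-loss GW objective between two graphs of equal size n with
    uniform node weights 1/n, minimized over permutation couplings only.
    This is a brute-force upper bound on the true GW value (the full
    problem optimizes over all couplings), exact in many small cases."""
    n = len(C1)
    best = float("inf")
    for perm in itertools.permutations(range(n)):
        # Cost of matching node i -> perm[i]; each coupling entry is 1/n.
        cost = sum((C1[i][k] - C2[perm[i]][perm[k]]) ** 2
                   for i in range(n) for k in range(n)) / n ** 2
        best = min(best, cost)
    return best

# Shortest-path matrices of a triangle and of a 3-node path graph.
triangle = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
path     = [[0, 1, 2], [1, 0, 1], [2, 1, 0]]

print(gw_discrepancy(triangle, triangle))  # 0.0 for isomorphic graphs
print(gw_discrepancy(triangle, path))      # > 0: the structures differ
```

Because the discrepancy compares pairwise distances within each graph rather than node identities, isomorphic graphs are at distance zero regardless of node ordering.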