Learning a Graph and Importance of its Vertices


Séminaire Données et Aléatoire Théorie & Applications

16/05/2024 - 14:00 Benjamin Girault Salle 106

Graph learning is the problem of discovering the structure between the entities of a dataset, with the goal of using this structure to understand the relations between the entities, or to process additional data. One key approach is to identify the Laplacian of the graph with a precision matrix of the data. This involves inverting the covariance matrix of the data, of which only an estimate can generally be computed. Instead, under a multivariate Gaussian assumption, the classical approach performs maximum likelihood estimation of the precision matrix and regularizes the resulting cost function with an l1 sparsity penalty: this is the graphical Lasso method. Unfortunately, including constraints to obtain a valid graph is not satisfactory, with the literature showing that the l1 sparsity penalty then decreases the sparsity of the graph instead of increasing it.
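The classical pipeline described above can be sketched with scikit-learn's GraphicalLasso. This is a minimal illustration, not the method of the talk: the ground-truth graph, the penalty strength `alpha`, and the sample size are all illustrative assumptions.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Hypothetical ground truth: a 4-vertex cycle graph whose Laplacian,
# plus a small ridge on the diagonal, plays the role of the precision matrix.
rng = np.random.default_rng(0)
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
precision_true = L + 0.5 * np.eye(4)  # ridge makes it invertible

# Sample multivariate Gaussian signals with this precision matrix.
cov_true = np.linalg.inv(precision_true)
X = rng.multivariate_normal(np.zeros(4), cov_true, size=2000)

# Graphical Lasso: l1-penalized maximum-likelihood estimate of the
# precision matrix from the empirical covariance of X.
model = GraphicalLasso(alpha=0.05).fit(X)
precision_est = model.precision_
print(np.round(precision_est, 2))
```

Note that nothing here constrains `precision_est` to be a valid graph Laplacian (non-positive off-diagonal entries); adding such constraints is precisely where the classical approach runs into the sparsity issue mentioned above.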

In this work, we alter the initial assumption identifying the Laplacian of the graph with the precision matrix. Instead, we identify the precision matrix with the Laplacian matrix plus a diagonal matrix of positive vertex importances. We show three key benefits of this approach: (i) it can be identified with learning a generalized Laplacian, (ii) experimentally, the learnt graphs are much sparser, without the need for regularization, and (iii) we keep a spectral interpretation of the statistics of the signals, albeit with a different power spectrum.
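A minimal sketch of the altered identification, assuming a small hand-built graph and illustrative importance values (all numbers and variable names below are hypothetical, not taken from the talk):

```python
import numpy as np

# Precision matrix identified with L + diag(mu), where mu > 0 holds
# the vertex importances.
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])
mu = np.array([0.5, 1.0, 0.2, 0.8])  # illustrative positive importances
Q = L + np.diag(mu)

# (i) Q is a generalized Laplacian: symmetric, with non-positive
# off-diagonal entries, and positive definite since mu > 0.
off_diag = Q - np.diag(np.diag(Q))
assert np.all(off_diag <= 0)

# (iii) Spectral interpretation: the eigenvectors of Q give a graph
# Fourier basis, and Gaussian signals with precision Q have power
# spectrum 1 / eigenvalue in that basis.
eigvals, gft_basis = np.linalg.eigh(Q)
power_spectrum = 1.0 / eigvals
print(np.round(power_spectrum, 3))
```

Because the importances sit only on the diagonal, sparsity of Q off the diagonal is exactly sparsity of the learnt graph, which is consistent with benefit (ii) above.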