Prior specification for Bayesian deep learning models and regularisation implications.


Séminaire Doctorants

14/03/2019 - 15:30 — Mariia Vladimirova (Inria Rhône-Alpes) — Salle 106, Bâtiment IMAG

Neural networks (NNs), and their deep counterparts, are widely used across many research areas such as image analysis, signal processing, and reinforcement learning, to name just a few. The impressive performance of such machine learning approaches has strongly motivated research aimed at better understanding the mechanisms behind their effectiveness. In particular, the study of the distributional properties of NNs through Bayesian analysis has recently gained much attention.

In this seminar, we first introduce the necessary notation and statistical background for Bayesian NNs. We then consider their distributional properties and present novel theoretical insights on the distributions at the unit level. Under the assumption of independent, normally distributed weights, we establish that the induced prior distribution on the units, both before and after activation, becomes increasingly heavy-tailed with the depth of the layer. Lastly, we discuss this property as a regularising mechanism and corroborate it with simulation results.
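This heavy-tail behaviour can be illustrated numerically by sampling networks from the prior and measuring the excess kurtosis of a unit's marginal distribution at each layer. The sketch below is only illustrative: the width, depth, ReLU activation, and 1/√width weight scaling are assumptions for the simulation, not necessarily the exact setting of the talk.

```python
import numpy as np

def unit_excess_kurtosis_by_layer(depth=3, width=20, n_samples=30_000, seed=0):
    """Sample from the prior of a ReLU network with i.i.d. normal weights
    (scaled by 1/sqrt(width)) and return, for each layer, the excess
    kurtosis of one pre-activation unit -- a crude proxy for tail weight
    (0 for a Gaussian, positive for heavier-than-Gaussian tails)."""
    rng = np.random.default_rng(seed)
    # Fixed input: all randomness comes from the prior over the weights.
    h = np.ones((n_samples, width))
    kurtoses = []
    for _ in range(depth):
        # A fresh weight draw per prior sample and per layer.
        W = rng.standard_normal((n_samples, width, width)) / np.sqrt(width)
        g = np.einsum('nij,nj->ni', W, h)   # pre-activations
        u = g[:, 0]                         # marginal prior of one unit
        z = (u - u.mean()) / u.std()
        kurtoses.append(float((z ** 4).mean() - 3.0))
        h = np.maximum(g, 0.0)              # ReLU
    return kurtoses
```

At the first layer the unit is exactly Gaussian (excess kurtosis near zero), while deeper layers should show increasingly positive excess kurtosis, consistent with the heavier-than-Gaussian tails described above.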