8/04/2021 - 14:00 Pierre Gaillard (Inria Grenoble - Rhône-Alpes) Zoom
In this presentation we will examine the framework of online prediction of arbitrary time series. In this setting, a learner sequentially makes predictions of a time series for which no stochastic assumptions are made. The learner's goal is to minimize the regret, defined as the difference between the learner's cumulative error and the cumulative error of the best parameter in hindsight. We will see a general algorithm, inspired by online empirical risk minimization, that achieves logarithmic regret for many loss functions (such as the square or logistic loss). We will then see the importance of improper learning for logistic regression and how our algorithm avoids the exponential constants that are unavoidable for proper algorithms.
Video of the recording: https://videos.univ-grenoble-alpes.fr/video/18595-ljk-data-seminar-21-04-08-pierre-gaillard/
Ref: Efficient improper learning for online logistic regression. Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi, 2020.
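As a minimal illustration of the framework described above (not the algorithm of the talk), the sketch below runs a follow-the-regularized-leader learner, i.e. online ridge regression on the square loss, against a sequence of observations and computes its regret with respect to the best fixed parameter in hindsight. All names, the data-generating process, and the regularization parameter are choices made here for illustration; the abstract makes no stochastic assumption on the sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 500, 3
theta_star = np.array([0.5, -1.0, 2.0])

# An arbitrary sequence to predict; here noisy-linear, but the
# online framework makes no stochastic assumption on (X, y).
X = rng.normal(size=(T, d))
y = X @ theta_star + 0.1 * rng.normal(size=T)

lam = 1.0               # ridge regularization strength (illustrative)
A = lam * np.eye(d)     # running regularized Gram matrix sum x_s x_s^T
b = np.zeros(d)         # running sum y_s x_s
learner_loss = 0.0

for t in range(T):
    # Predict with the regularized empirical risk minimizer on past data.
    theta_t = np.linalg.solve(A, b)
    pred = X[t] @ theta_t
    learner_loss += (pred - y[t]) ** 2
    # Reveal (x_t, y_t) and update the sufficient statistics.
    A += np.outer(X[t], X[t])
    b += y[t] * X[t]

# Best fixed parameter in hindsight: ordinary least squares on all data.
theta_best, *_ = np.linalg.lstsq(X, y, rcond=None)
best_loss = np.sum((X @ theta_best - y) ** 2)

# Regret = learner's cumulative loss minus the best loss in hindsight.
regret = learner_loss - best_loss
```

For the square loss this strategy is known to achieve regret growing only logarithmically in T, which is the kind of guarantee the abstract refers to; the talk's contribution concerns obtaining comparable guarantees for logistic loss via improper learning.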