Ordinary least squares Monte-Carlo


Probability & Statistics Seminar

2/03/2017 - 14:00 François PORTIER (TELECOM ParisTech) Room 106 - IMAG Building

The use of control variates is a well-known method for reducing the variance of the naive Monte Carlo estimator of an integral. A formulation of the method is presented based on a regression model in which the dependent variable is the function to integrate and the predictors are elements of a linear space spanned by test functions with known integral, assumed to be zero without loss of generality. It is shown that the ordinary least-squares estimator of the intercept equals a certain control-variate-enhanced Monte Carlo estimator. The asymptotic variance of the estimator is equal to the variance of the residual variable in the regression model. More importantly, it is demonstrated that if the number of predictors is allowed to grow to infinity with the number of Monte Carlo replicates, convergence takes place at a faster rate than for ordinary Monte Carlo integration, with the limit distribution of the standardized errors still being Gaussian. In addition, the estimator of the residual variance in the regression model is a consistent estimator of the asymptotic variance. The performance of the method in finite samples is investigated through a simulation study for various choices of the test functions and in various dimensions.
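To make the regression formulation concrete, here is a minimal Python sketch of the idea, not the speaker's implementation: the function name ols_monte_carlo, the choice of centred monomials as test functions, and the toy integrand are assumptions made only for illustration. The integrand values at uniform draws are regressed on control variates with known integral zero; the OLS intercept is the integral estimate, and the residual variance estimates the asymptotic variance.

import numpy as np

def ols_monte_carlo(f, controls, n, d, rng=None):
    # Estimate the integral of f over [0, 1]^d by regressing f(U) on
    # control variates with known integral zero; the OLS intercept is
    # the integral estimate and the residual variance estimates the
    # asymptotic variance (a sketch under the assumptions stated above).
    rng = np.random.default_rng() if rng is None else rng
    u = rng.random((n, d))                         # uniform sample on [0, 1]^d
    y = np.array([f(x) for x in u])                # integrand values
    H = np.column_stack([np.ones(n)]               # intercept column
                        + [np.array([h(x) for x in u]) for h in controls])
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)   # ordinary least squares
    resid = y - H @ coef
    sigma2 = resid @ resid / (n - H.shape[1])      # residual variance estimate
    return coef[0], sigma2                         # intercept = integral estimate

# Toy example: integrate exp(x1 + x2) over [0, 1]^2, using the centred monomials
# x_j - 1/2 and x_j^2 - 1/3 (both integrate to zero) as control variates.
controls = ([lambda x, j=j: x[j] - 0.5 for j in range(2)]
            + [lambda x, j=j: x[j] ** 2 - 1.0 / 3.0 for j in range(2)])
est, var = ols_monte_carlo(lambda x: np.exp(x[0] + x[1]), controls, n=10_000, d=2)
print(est, var)  # true value is (e - 1)^2, approximately 2.9525

Enlarging the list of control variates corresponds to letting the number of predictors grow with the number of Monte Carlo replicates, the regime in which the talk establishes a faster-than-root-n convergence rate.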

This is joint work with Johan Segers.