Alain Célisse (Lille 1, MODAL INRIA, Laboratoire Paul Painlevé)


Early stopping rules in nonparametric regression

Friday, February 8, 2019, 9:30 - 10:30 am

Salle du conseil (council room), espace Turing


The main focus of this work is on nonparametric regression by means of reproducing kernels and several iterative learning algorithms such as gradient descent, Tikhonov regularization, …
Using such iterative learning algorithms requires designing a data-driven criterion that tells us when to stop the iterative process. In particular, waiting for the largest possible number of iterations does not necessarily lead to the best possible statistical performance!
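To make this concrete, here is a minimal sketch (not the speaker's code; the Gaussian kernel, its bandwidth, the step size, and the toy signal are all assumptions) of early-stopped gradient descent for kernel least squares, where the estimation error is typically smallest at an intermediate iteration rather than at convergence:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = np.sort(rng.uniform(0.0, 1.0, n))
    f_true = np.sin(2 * np.pi * x)                     # toy regression function
    y = f_true + 0.3 * rng.normal(size=n)              # noisy observations

    # Gaussian kernel matrix (bandwidth chosen arbitrarily for the toy example)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
    step = 1.0 / np.linalg.eigvalsh(K).max()           # step size keeping the iterates stable

    alpha = np.zeros(n)                                # dual coefficients, f_t = K @ alpha
    errors = []
    for t in range(500):
        alpha -= step * (K @ alpha - y)                # functional gradient step on the LS loss
        errors.append(np.mean((K @ alpha - f_true) ** 2))

    # the estimation error is usually minimized well before the last iteration
    print("best iteration:", int(np.argmin(errors)), "of 500")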

We start by exploiting the general framework of filter estimators to provide a unified analysis of these different learning algorithms.
This allows us to describe existing early stopping rules, such as those based on the Minimum Discrepancy Principle, for which we provide a non-asymptotic theoretical guarantee similar to the one derived by Blanchard, Reiss, and Hoffmann (2016).
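As a point of reference, a standard formulation of such a discrepancy-based rule (the notation below is our assumption, not necessarily the speaker's) stops at

    \[
    \tau \;=\; \min\Bigl\{\, t \ge 1 \;:\; \bigl\| Y - \widehat{f}_t \bigr\|_n^2 \;\le\; \kappa\,\sigma^2 \,\Bigr\},
    \]

where \widehat{f}_t is the iterate after t steps, \|\cdot\|_n the empirical norm, \sigma^2 the noise variance, and \kappa a user-chosen constant: the iterations stop as soon as the residuals are no larger than what pure noise would produce.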

To remedy some deficiencies of these stopping rules, we then motivate and introduce a new stopping rule based on the idea of smoothing the residuals, which turns out to outperform state-of-the-art rules in various examples. The optimality of this rule is then discussed in terms of both asymptotic and non-asymptotic results.
Its practical behavior is also assessed by means of empirical experiments.
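As one speculative reading of the residual-smoothing idea (the statistic, the threshold, and the ridge-type smoothing operator below are our assumptions, not necessarily the rule presented in the talk), one can monitor the kernel-smoothed residuals and stop once their squared norm matches what smoothed pure noise would give:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100
    x = np.sort(rng.uniform(0.0, 1.0, n))
    sigma = 0.3                                        # noise level, assumed known here
    y = np.sin(2 * np.pi * x) + sigma * rng.normal(size=n)

    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * 0.05 ** 2))
    S = K @ np.linalg.inv(K + n * 1e-3 * np.eye(n))    # a ridge-type smoothing operator (assumption)
    step = 1.0 / np.linalg.eigvalsh(K).max()
    noise_floor = sigma ** 2 * np.trace(S @ S.T)       # E||S eps||^2 for pure noise eps

    alpha = np.zeros(n)
    for t in range(1, 1001):
        alpha -= step * (K @ alpha - y)                # same gradient descent as above
        smoothed = S @ (y - K @ alpha)                 # residuals after smoothing
        if np.sum(smoothed ** 2) <= noise_floor:       # residuals look like smoothed noise
            print("smoothed-residual stop at iteration", t)
            break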