Bootstrapping Stein-type estimators
Friday, 22 February 2013, 9:30–10:30
Stein-type estimators are the best-known examples of super-efficient estimators that attempt to improve upon the maximum likelihood estimator under quadratic loss. They are common remedies for multicollinearity and model uncertainty, and have found applications in regularization methods for signal recovery and in kernel density estimation. Unlike other shrinkage estimators (Hodges, LASSO, SCAD, Ridge) and many post-model-selection estimators based on a consistent model selection procedure, however, they do not possess the oracle and sparsity properties. Moreover, Efron's naive bootstrap is not consistent for some Stein-type estimators. I show that the remedy for this failure is simply to impose the null hypothesis in the bootstrap data generating process. I also show that tests based on properly centred and standardised Stein-type estimators have greater power than the usual t-tests. This is relevant, for example, in the life sciences, where only a small sample may be available due to cost constraints or limited specimen availability.
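The idea of imposing the null hypothesis in the bootstrap data generating process can be sketched as follows for a positive-part James–Stein estimator shrinking toward zero. This is only an illustrative sketch, not the talk's exact construction: the estimator, the null H0: mu = 0, and all sample sizes and parameter choices here are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def james_stein(xbar, n, p, sigma2=1.0):
    """Positive-part James-Stein shrinkage of the sample mean toward 0."""
    norm2 = n * np.sum(xbar**2) / sigma2
    shrink = max(0.0, 1.0 - (p - 2) / norm2)
    return shrink * xbar

p, n = 10, 30
mu = np.zeros(p)                         # true mean; the null holds here
X = rng.normal(mu, 1.0, size=(n, p))
xbar = X.mean(axis=0)
theta_hat = james_stein(xbar, n, p)

B = 2000
boot = np.empty((B, p))
for b in range(B):
    # Impose the null H0: mu = 0 in the bootstrap DGP: resample rows of the
    # centred data, so the bootstrap population mean is exactly zero.
    # (The naive bootstrap would instead resample X itself.)
    Xb = X[rng.integers(0, n, n)] - xbar
    boot[b] = james_stein(Xb.mean(axis=0), n, p)
```

The bootstrap draws in `boot` then approximate the null distribution of the estimator, from which critical values for a Stein-type test can be read off; with the naive (uncentred) resampling this approximation can fail at the shrinkage point.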