Simple l1-type penalization for multi-task learning
Friday, November 21, 2014, 9:30 - 10:30 am
We propose two new approaches to multi-task learning, that is, the joint estimation of several sparse regression models (for instance, linear or logistic regression models), both based on simple l1 penalties. The first one reduces to a weighted lasso applied to a simple transformation of the original data set, and is therefore easy to implement and to study theoretically. Adaptive versions of our two approaches are further shown to be special cases of the generalized fused lasso with star-shaped graphs in the penalty. This link with the generalized fused lasso enables (i) a description of connections between our proposals and several existing ones, (ii) the implementation of these adaptive versions with available packages (e.g., the FusedLasso R package), and (iii) the derivation of asymptotic oracle properties for these adaptive versions. Preliminary non-asymptotic results for our first approach are also presented under strong sufficient conditions, as a corollary of results previously obtained for the standard lasso. From a practical point of view, simulations show that our approaches compare favorably with state-of-the-art competitors under various settings. Finally, an illustration on road safety data is provided.
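To make the "weighted lasso on transformed data" idea concrete, here is a minimal sketch. The abstract does not specify the transformation, so the block-diagonal stacking of the per-task design matrices and the unit penalty weights below are purely illustrative assumptions, not the authors' actual construction; the weighted lasso itself is implemented with a standard coordinate-descent loop.

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator used in lasso coordinate descent."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def weighted_lasso(X, y, lam, weights, n_iter=200):
    """Coordinate descent for (1/2n)||y - Xb||^2 + lam * sum_j weights[j] * |b_j|."""
    n, p = X.shape
    b = np.zeros(p)
    col_norms = (X ** 2).sum(axis=0) / n
    r = y - X @ b
    for _ in range(n_iter):
        for j in range(p):
            r += X[:, j] * b[j]                      # remove j-th contribution
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam * weights[j]) / col_norms[j]
            r -= X[:, j] * b[j]                      # add it back, updated
    return b

# Illustrative multi-task setup: T tasks, each with its own design matrix.
rng = np.random.default_rng(0)
T, n, p = 3, 50, 5
Xs = [rng.normal(size=(n, p)) for _ in range(T)]
beta = np.zeros((T, p))
beta[:, :2] = 1.0                                    # support shared across tasks
ys = [Xs[t] @ beta[t] + 0.1 * rng.normal(size=n) for t in range(T)]

# Assumed transformation: stack the tasks into one block-diagonal design,
# so a single weighted lasso fit estimates all task coefficients jointly.
X_stack = np.zeros((T * n, T * p))
for t in range(T):
    X_stack[t * n:(t + 1) * n, t * p:(t + 1) * p] = Xs[t]
y_stack = np.concatenate(ys)

w = np.ones(T * p)                                   # hypothetical penalty weights
b_hat = weighted_lasso(X_stack, y_stack, lam=0.05, weights=w)
```

In an adaptive version, the weights `w` would be data-driven (e.g., inversely proportional to preliminary coefficient estimates) rather than constant as here.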