The Annals of Statistics
Although the standard formulations of prediction problems involve fully observed and noiseless data drawn in an i.i.d. manner, many applications involve noisy and/or missing data, possibly with dependence as well. We study these issues in the context of high-dimensional sparse linear regression, and propose novel estimators for the cases of noisy, missing, and/or dependent data. Many standard approaches to noisy or missing data, such as those using the EM algorithm, lead to optimization problems that are inherently nonconvex, and it is difficult to establish theoretical guarantees on practical algorithms. While our approach also involves optimizing nonconvex programs, we are able both to analyze the statistical error associated with any global optimum and, more surprisingly, to prove that a simple algorithm based on projected gradient descent will converge in polynomial time to a small neighborhood of the set of all global minimizers. On the statistical side, we provide nonasymptotic bounds that hold with high probability for the cases of noisy, missing, and/or dependent data. On the computational side, we prove that under the same types of conditions required for statistical consistency, the projected gradient descent algorithm is guaranteed to converge at a geometric rate to a near-global minimizer. We illustrate these theoretical predictions with simulations, showing close agreement with the predicted scalings.
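As a concrete illustration of the algorithmic idea described above, the following is a minimal sketch (not the authors' code) of projected gradient descent applied to a quadratic objective of the form (1/2)β'Γ̂β − γ̂'β over an ℓ1 ball, in the additive-noise setting where the covariates are observed as Z = X + W. The surrogate matrix Γ̂ = Z'Z/n − σ_w²I can be indefinite, which is the source of the nonconvexity; the variable names, problem sizes, and the oracle choice of the ℓ1 radius R are illustrative assumptions, and the projection step uses a standard sort-based method for the ℓ1 ball.

```python
import numpy as np

def project_l1_ball(v, R):
    """Euclidean projection of v onto the l1 ball {b : ||b||_1 <= R}
    via a standard sort-and-threshold method."""
    if np.abs(v).sum() <= R:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # magnitudes, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u > (css - R) / idx)[0][-1]
    theta = (css[rho] - R) / (rho + 1.0)  # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

rng = np.random.default_rng(0)
n, p, sigma_w = 400, 20, 0.2              # illustrative sizes and noise level
beta_star = np.zeros(p)
beta_star[:3] = [1.0, -1.0, 0.5]          # sparse true parameter
X = rng.standard_normal((n, p))
y = X @ beta_star + 0.1 * rng.standard_normal(n)
Z = X + sigma_w * rng.standard_normal((n, p))  # covariates with additive noise

# Unbiased surrogates for the covariance and cross-covariance; Gamma_hat
# may be indefinite, so the program is nonconvex.
Gamma_hat = Z.T @ Z / n - sigma_w**2 * np.eye(p)
gamma_hat = Z.T @ y / n

R = 1.1 * np.abs(beta_star).sum()         # oracle l1 radius, for illustration
eta = 1.0 / np.abs(np.linalg.eigvalsh(Gamma_hat)).max()  # step size
beta = np.zeros(p)
for _ in range(500):
    grad = Gamma_hat @ beta - gamma_hat
    beta = project_l1_ball(beta - eta * grad, R)
```

Despite the indefinite quadratic, the iterates stay feasible and, in this regime, land near the sparse true parameter, consistent with the geometric-convergence guarantee stated in the abstract.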
Keywords: high-dimensional statistics, missing data, nonconvexity, regularization, sparse linear regression, M-estimation
Loh, P., & Wainwright, M. J. (2012). High-Dimensional Regression With Noisy and Missing Data: Provable Guarantees With Nonconvexity. The Annals of Statistics, 40 (3), 1637-1664. http://dx.doi.org/10.1214/12-AOS1018
Date Posted: 27 November 2017
This document has been peer reviewed.