Statistics Papers

Document Type

Journal Article

Date of this Version


Publication Source

The Annals of Applied Statistics





Start Page


Last Page





Abstract

We introduce a new estimator for the vector of coefficients β in the linear model y = Xβ + z, where X has dimensions n × p with p possibly larger than n. SLOPE, short for Sorted L-One Penalized Estimation, is the solution to min_{b ∈ ℝ^p} ½‖y − Xb‖²_{ℓ2} + λ1|b|(1) + λ2|b|(2) + ⋯ + λp|b|(p), where λ1 ≥ λ2 ≥ ⋯ ≥ λp ≥ 0 and |b|(1) ≥ |b|(2) ≥ ⋯ ≥ |b|(p) are the decreasing absolute values of the entries of b. This is a convex program, and we demonstrate a solution algorithm whose computational complexity is roughly comparable to that of classical ℓ1 procedures such as the Lasso. Here, the regularizer is a sorted ℓ1 norm, which penalizes the regression coefficients according to their rank: the higher the rank (i.e., the stronger the signal), the larger the penalty. This is similar to the Benjamini and Hochberg procedure (BH) [9], which compares more significant p-values with more stringent thresholds. One notable choice of the sequence {λi} is given by the BH critical values λBH(i) = z(1 − i·q/(2p)), where q ∈ (0, 1) and z(α) is the αth quantile of a standard normal distribution. SLOPE aims to provide finite-sample guarantees on the selected model; of special interest is the false discovery rate (FDR), defined as the expected proportion of irrelevant regressors among all selected predictors. Under orthogonal designs, SLOPE with λBH provably controls FDR at level q. Moreover, it also appears to have appreciable inferential properties under more general designs X while having substantial power, as demonstrated in a series of experiments run on both simulated and real data.
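The two ingredients the abstract describes, the BH-style regularizing sequence λBH(i) = z(1 − i·q/(2p)) and the sorted ℓ1 penalty that pairs the largest |b_j| with the largest λ, can be sketched in a few lines of Python. This is a minimal illustration of the definitions only (function names are ours, not from the paper's accompanying software), not the solution algorithm the paper develops:

```python
from statistics import NormalDist

def bh_lambdas(p, q):
    """BH critical values: lambda_BH(i) = z(1 - i*q/(2p)), i = 1..p,
    where z(alpha) is the alpha-th quantile of the standard normal.
    The sequence is decreasing, so earlier (larger) entries penalize
    the largest coefficients most."""
    z = NormalDist().inv_cdf
    return [z(1 - i * q / (2 * p)) for i in range(1, p + 1)]

def sorted_l1_norm(b, lam):
    """Sorted l1 norm: sum_i lam_i * |b|_(i), where |b|_(1) >= ... >= |b|_(p)
    are the absolute entries of b in decreasing order."""
    mags = sorted((abs(x) for x in b), reverse=True)
    return sum(l * m for l, m in zip(lam, mags))

def slope_objective(y, X, b, lam):
    """SLOPE objective: 0.5 * ||y - X b||_2^2 + sorted-l1 penalty.
    X is given as a list of rows."""
    resid = [yi - sum(xij * bj for xij, bj in zip(row, b))
             for row, yi in zip(X, y)]
    return 0.5 * sum(r * r for r in resid) + sorted_l1_norm(b, lam)
```

For example, with λ = (3, 2, 1) and b = (1, −2, 3), the penalty pairs |b| sorted as (3, 2, 1) with the λ's, giving 3·3 + 2·2 + 1·1 = 14. Note that with all λi equal, the penalty reduces to an ordinary (scaled) ℓ1 norm, so the Lasso is a special case of this objective.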

Copyright/Permission Statement

The original, published work is available at:


Keywords

Sparse regression, variable selection, false discovery rate, lasso, sorted l1 penalized estimation (SLOPE)



Date Posted: 27 November 2017

This document has been peer reviewed.