Regularization and Variable Selection Via the Elastic Net

Summary

We propose the elastic net, a new regularization and variable selection method. Real world data and a simulation study show that the elastic net often outperforms the lasso, while enjoying a similar sparsity of representation. In addition, the elastic net encourages a grouping effect, where strongly correlated predictors tend to be in or out of the model together. The elastic net is particularly useful when the number of predictors (p) is much bigger than the number of observations (n). By contrast, the lasso is not a very satisfactory variable selection method in the p ≫ n case. An algorithm called LARS-EN is proposed for computing elastic net regularization paths efficiently, much like algorithm LARS does for the lasso.

1. Introduction and motivation

We consider the usual linear regression model: given p predictors x1,…,xp, the response y is predicted by

$$\hat{y} = \hat{\beta}_0 + x_1\hat{\beta}_1 + \cdots + x_p\hat{\beta}_p. \qquad (1)$$

A model fitting procedure produces the vector of coefficients β̂ = (β̂0,…,β̂p). For example, the ordinary least squares (OLS) estimates are obtained by minimizing the residual sum of squares. The criteria for evaluating the quality of a model will differ according to the circumstances. Typically the following two aspects are important: accuracy of prediction on future data, and interpretation of the model (a parsimonious model is easier to interpret).

It is well known that OLS often does poorly in both prediction and interpretation. Penalization techniques have been proposed to improve OLS. For example, ridge regression (Hoerl and Kennard, 1988) minimizes the residual sum of squares subject to a bound on the _L_2-norm of the coefficients. As a continuous shrinkage method, ridge regression achieves its better prediction performance through a bias–variance trade-off. However, ridge regression cannot produce a parsimonious model, for it always keeps all the predictors in the model. Best subset selection in contrast produces a sparse model, but it is extremely variable because of its inherent discreteness, as addressed by Breiman (1996).

A promising technique called the lasso was proposed by Tibshirani (1996). The lasso is a penalized least squares method imposing an L1-penalty on the regression coefficients. Owing to the nature of the L1-penalty, the lasso does both continuous shrinkage and automatic variable selection simultaneously. Tibshirani (1996) and Fu (1998) compared the prediction performance of the lasso, ridge and bridge regression (Frank and Friedman, 1993) and found that none of them uniformly dominates the other two. However, as variable selection becomes increasingly important in modern data analysis, the lasso is much more appealing owing to its sparse representation.

Although the lasso has shown success in many situations, it has some limitations. Consider the following three scenarios.

(a) In the p > n case, the lasso selects at most n variables before it saturates, because of the nature of the convex optimization problem. This seems to be a limiting feature for a variable selection method.

(b) If there is a group of variables among which the pairwise correlations are very high, then the lasso tends to select only one variable from the group and does not care which one is selected.

(c) For usual n > p situations, if there are high correlations between predictors, it has been empirically observed that the prediction performance of the lasso is dominated by ridge regression (Tibshirani, 1996).

Scenarios (a) and (b) make the lasso an inappropriate variable selection method in some situations. We illustrate our points by considering the gene selection problem in microarray data analysis. A typical microarray data set has many thousands of predictors (genes) and often fewer than 100 samples. For those genes sharing the same biological ‘pathway’, the correlations between them can be high (Segal and Conklin, 2003). We think of those genes as forming a group. The ideal gene selection method should be able to do two things: eliminate the trivial genes and automatically include whole groups into the model once one gene among them is selected (‘grouped selection’). For this kind of p ≫ n and grouped variables situation, the lasso is not the ideal method, because it can only select at most n variables out of p candidates (Efron et al., 2004), and it lacks the ability to reveal the grouping information. As for prediction performance, scenario (c) is not rare in regression problems. So it is possible to strengthen further the prediction power of the lasso.

Our goal is to find a new method that works as well as the lasso whenever the lasso does best, and that fixes the problems highlighted above: it should mimic the ideal variable selection method in scenarios (a) and (b), especially with microarray data, and it should deliver better prediction performance than the lasso in scenario (c).

In this paper we propose a new regularization technique which we call the elastic net. Similar to the lasso, the elastic net simultaneously does automatic variable selection and continuous shrinkage, and it can select groups of correlated variables. It is like a stretchable fishing net that retains ‘all the big fish’. Simulation studies and real data examples show that the elastic net often outperforms the lasso in terms of prediction accuracy.

In Section 2 we define the naïve elastic net, which is a penalized least squares method using a novel elastic net penalty. We discuss the grouping effect that is caused by the elastic net penalty. In Section 3, we show that this naïve procedure tends to overshrink in regression problems. We then introduce the elastic net, which corrects this problem. An efficient algorithm LARS-EN is proposed for computing the entire elastic net regularization paths with the computational effort of a single OLS fit. Prostate cancer data are used to illustrate our methodology in Section 4, and simulation results comparing the lasso and the elastic net are presented in Section 5. Section 6 shows an application of the elastic net to classification and gene selection in a leukaemia microarray problem.

2. Naïve elastic net

2.1. Definition

Suppose that the data set has n observations with p predictors. Let y = (y1,…,yn)ᵀ be the response and X = (x1|…|xp) be the model matrix, where xj = (x1j,…,xnj)ᵀ, j = 1,…,p, are the predictors. After a location and scale transformation, we can assume that the response is centred and the predictors are standardized,

$$\sum_{i=1}^{n} y_i = 0, \qquad \sum_{i=1}^{n} x_{ij} = 0 \quad\text{and}\quad \sum_{i=1}^{n} x_{ij}^2 = 1, \qquad \text{for } j = 1, 2, \ldots, p. \qquad (2)$$
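As a concrete illustration of this preprocessing step, here is a minimal NumPy sketch (the function name is ours, not part of the paper):

```python
import numpy as np

def center_and_standardize(X, y):
    """Centre the response and standardize the predictors so that
    sum_i y_i = 0, sum_i x_ij = 0 and sum_i x_ij^2 = 1, as in equation (2)."""
    y_centred = y - y.mean()
    X_centred = X - X.mean(axis=0)
    scale = np.sqrt((X_centred ** 2).sum(axis=0))  # column-wise L2 norm
    return X_centred / scale, y_centred
```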

For any fixed non-negative λ1 and λ2, we define the naïve elastic net criterion

$$L(\lambda_1, \lambda_2, \boldsymbol{\beta}) = |y - X\boldsymbol{\beta}|^2 + \lambda_2|\boldsymbol{\beta}|^2 + \lambda_1|\boldsymbol{\beta}|_1, \qquad (3)$$

where

$$|\boldsymbol{\beta}|^2 = \sum_{j=1}^{p} \beta_j^2, \qquad |\boldsymbol{\beta}|_1 = \sum_{j=1}^{p} |\beta_j|.$$
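To make the criterion concrete, a small NumPy sketch of equation (3) follows (the helper name is our own):

```python
import numpy as np

def naive_enet_criterion(y, X, beta, lam1, lam2):
    """Naive elastic net criterion L(lam1, lam2, beta) of equation (3)."""
    resid = y - X @ beta
    return (resid @ resid                      # |y - X beta|^2
            + lam2 * np.sum(beta ** 2)         # lam2 |beta|^2
            + lam1 * np.sum(np.abs(beta)))     # lam1 |beta|_1
```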

The naïve elastic net estimator β̂ is the minimizer of equation (3):

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \, L(\lambda_1, \lambda_2, \boldsymbol{\beta}). \qquad (4)$$

This procedure can be viewed as a penalized least squares method. Let α = λ2/(λ1 + λ2); then solving for β̂ in equation (3) is equivalent to the optimization problem

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} |y - X\boldsymbol{\beta}|^2, \quad \text{subject to } (1-\alpha)|\boldsymbol{\beta}|_1 + \alpha|\boldsymbol{\beta}|^2 \leq t \text{ for some } t. \qquad (5)$$

We call the function (1−α)|β|1 + α|β|² the elastic net penalty, which is a convex combination of the lasso and ridge penalty. When α = 1, the naïve elastic net becomes simple ridge regression. In this paper, we consider only α < 1. For all α ∈ [0,1), the elastic net penalty function is singular (without first derivative) at 0 and it is strictly convex for all α > 0, thus having the characteristics of both the lasso and ridge regression. Note that the lasso penalty (α = 0) is convex but not strictly convex. These arguments can be seen clearly from Fig. 1.


Fig. 1

Two-dimensional contour plots (level 1) (·-·-·-, shape of the ridge penalty; - - - - - -, contour of the lasso penalty; ———, contour of the elastic net penalty with α = 0.5): we see singularities at the vertices, and the edges are strictly convex; the strength of convexity varies with α

2.2. Solution

We now develop a method to solve the naïve elastic net problem efficiently. It turns out that minimizing equation (3) is equivalent to a lasso-type optimization problem. This fact implies that the naïve elastic net also enjoys the computational advantage of the lasso.

Lemma 1. Given data set (y, X) and (λ1, λ2), define an artificial data set (y*, X*) by

$$X^{*}_{(n+p)\times p} = (1+\lambda_2)^{-1/2}\begin{pmatrix} X \\ \sqrt{\lambda_2}\, I \end{pmatrix}, \qquad y^{*}_{(n+p)} = \begin{pmatrix} y \\ 0 \end{pmatrix}.$$

Let γ = λ1/√(1+λ2) and β* = √(1+λ2) β. Then the naïve elastic net criterion can be written as

$$L(\gamma, \boldsymbol{\beta}) = L(\gamma, \boldsymbol{\beta}^*) = |y^* - X^*\boldsymbol{\beta}^*|^2 + \gamma|\boldsymbol{\beta}^*|_1.$$

Let

$$\hat{\boldsymbol{\beta}}^* = \arg\min_{\boldsymbol{\beta}^*} \, L(\gamma, \boldsymbol{\beta}^*);$$

then

$$\hat{\boldsymbol{\beta}}(\text{naïve elastic net}) = \frac{1}{\sqrt{1+\lambda_2}}\,\hat{\boldsymbol{\beta}}^*.$$

The proof is just simple algebra, which we omit. Lemma 1 says that we can transform the naïve elastic net problem into an equivalent lasso problem on augmented data. Note that the sample size in the augmented problem is n+p and X* has rank p, which means that the naïve elastic net can potentially select all p predictors in all situations. This important property overcomes the limitations of the lasso that were described in scenario (a). Lemma 1 also shows that the naïve elastic net can perform an automatic variable selection in a fashion similar to the lasso. In the next section we show that the naïve elastic net has the ability of selecting ‘grouped’ variables, a property that is not shared by the lasso.
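The augmented data set of lemma 1 is straightforward to construct; the sketch below (our own helper, not code from the paper) builds (y*, X*) so that a standard lasso solver with penalty γ = λ1/√(1+λ2) applied to it returns β̂*:

```python
import numpy as np

def augment_for_lemma1(X, y, lam2):
    """Return (X_star, y_star) of lemma 1. A lasso fit on (y_star, X_star)
    with penalty gamma = lam1 / sqrt(1 + lam2) gives beta_star, and the
    naive elastic net estimate is beta_star / sqrt(1 + lam2)."""
    p = X.shape[1]
    X_star = np.vstack([X, np.sqrt(lam2) * np.eye(p)]) / np.sqrt(1.0 + lam2)
    y_star = np.concatenate([y, np.zeros(p)])
    return X_star, y_star
```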

In the case of an orthogonal design, it is straightforward to show that with parameters (λ1, λ2) the naïve elastic net solution is

$$\hat{\beta}_i(\text{naïve elastic net}) = \frac{\left(|\hat{\beta}_i(\text{OLS})| - \lambda_1/2\right)_+}{1+\lambda_2}\,\mathrm{sgn}\{\hat{\beta}_i(\text{OLS})\}, \qquad (6)$$

where β̂(OLS) = Xᵀy and z₊ denotes the positive part, which is z if z > 0 and 0 otherwise. The solution of ridge regression with parameter λ2 is given by β̂(ridge) = β̂(OLS)/(1+λ2), and the lasso solution with parameter λ1 is

$$\hat{\beta}_i(\text{lasso}) = \left(|\hat{\beta}_i(\text{OLS})| - \lambda_1/2\right)_+ \mathrm{sgn}\{\hat{\beta}_i(\text{OLS})\}.$$

Fig. 2 shows the operational characteristics of the three penalization methods in an orthogonal design, where the naïve elastic net can be viewed as a two-stage procedure: a ridge-type direct shrinkage followed by a lasso-type thresholding.
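The three closed forms in the orthogonal design are easy to code; a NumPy sketch, assuming XᵀX = I (function names are ours):

```python
import numpy as np

def soft(z, t):
    """Soft thresholding: (|z| - t)_+ sgn(z), applied componentwise."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def orthogonal_design_estimates(X, y, lam1, lam2):
    """Closed-form estimates when the design is orthogonal (X^T X = I)."""
    beta_ols = X.T @ y
    beta_ridge = beta_ols / (1.0 + lam2)
    beta_lasso = soft(beta_ols, lam1 / 2.0)
    beta_naive_enet = soft(beta_ols, lam1 / 2.0) / (1.0 + lam2)  # equation (6)
    return beta_ols, beta_ridge, beta_lasso, beta_naive_enet
```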


Fig. 2

Exact solutions for the lasso (- - - - - -), ridge regression (·-·-·-) and the naïve elastic net (——) in an orthogonal design (·······, OLS): the shrinkage parameters are λ1 = 2 and λ2 = 1

2.3. The grouping effect

In the ‘large p, small _n_’ problem (West et al., 2001), the ‘grouped variables’ situation is a particularly important concern, which has been addressed many times in the literature. For example, principal component analysis has been used to construct methods for finding a set of highly correlated genes in Hastie et al. (2000) and Díaz-Uriarte (2003). Tree harvesting (Hastie et al., 2003) uses supervised learning methods to select groups of predictive genes found by hierarchical clustering. Using an algorithmic approach, Dettling and Bühlmann (2004) performed the clustering and supervised learning together. A careful study by Segal and Conklin (2003) strongly motivates the use of a regularized regression procedure to find the grouped genes. We consider the generic penalization method

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} |y - X\boldsymbol{\beta}|^2 + \lambda J(\boldsymbol{\beta}), \qquad (7)$$

where J(·) is positive valued for β ≠ 0.

Qualitatively speaking, a regression method exhibits the grouping effect if the regression coefficients of a group of highly correlated variables tend to be equal (up to a change of sign if negatively correlated). In particular, in the extreme situation where some variables are exactly identical, the regression method should assign identical coefficients to the identical variables.

Lemma 2. Assume that xi = xj, i, j ∈ {1,…,p}.

(1) If J(·) is strictly convex, then β̂i = β̂j for any λ > 0.

(2) If J(β) = |β|1, then β̂iβ̂j ⩾ 0, and β̂* is another minimizer of equation (7), where β̂*k = β̂k if k ≠ i and k ≠ j, β̂*i = s(β̂i + β̂j) and β̂*j = (1−s)(β̂i + β̂j) for any s ∈ [0,1].

Lemma 2 shows a clear distinction between strictly convex penalty functions and the lasso penalty. Strict convexity guarantees the grouping effect in the extreme situation with identical predictors. In contrast the lasso does not even have a unique solution. The elastic net penalty with _λ_2>0 is strictly convex, thus enjoying the property in assertion (1).

Theorem 1. Given data (y, X) and parameters (λ1, λ2), the response y is centred and the predictors X are standardized. Let β̂(λ1, λ2) be the naïve elastic net estimate. Suppose that β̂i(λ1, λ2) β̂j(λ1, λ2) > 0. Define

$$D_{\lambda_1,\lambda_2}(i,j) = \frac{1}{|y|_1}\left|\hat{\beta}_i(\lambda_1,\lambda_2) - \hat{\beta}_j(\lambda_1,\lambda_2)\right|;$$

then

$$D_{\lambda_1,\lambda_2}(i,j) \leq \frac{1}{\lambda_2}\sqrt{2(1-\rho)},$$

where ρ = xiᵀxj, the sample correlation.

The unitless quantity Dλ1,λ2(i,j) describes the difference between the coefficient paths of predictors i and j. If xi and xj are highly correlated, i.e. ρ ≐ 1 (if ρ ≐ −1 then consider −xj), theorem 1 says that the difference between the coefficient paths of predictor i and predictor j is almost 0. The upper bound in the above inequality provides a quantitative description for the grouping effect of the naïve elastic net.
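To get a feel for the bound, a tiny numerical illustration (our own code, with made-up values of ρ and λ2):

```python
import numpy as np

def grouping_bound(rho, lam2):
    """Upper bound of theorem 1 on D_{lam1,lam2}(i, j)."""
    return np.sqrt(2.0 * (1.0 - rho)) / lam2

# Highly correlated predictors, e.g. rho = 0.99 with lam2 = 1:
# the scaled coefficient difference is at most about 0.14.
print(grouping_bound(0.99, 1.0))
```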

The lasso does not have the grouping effect. Scenario (b) in Section 1 occurs frequently in practice. A theoretical explanation is given in Efron et al. (2004). For a simpler illustration, let us consider the linear model with p = 2. Tibshirani (1996) gave the explicit expression for (β̂1, β̂2), from which we easily obtain that |β̂1 − β̂2| = |cos(θ)|, where θ is the angle between y and x1 − x2. It is easy to construct examples such that ρ = corr(x1, x2) → 1 but cos(θ) does not vanish.

2.4. Bayesian connections and the Lq-penalty

Bridge regression (Frank and Friedman, 1993; Fu, 1998) has $J(\boldsymbol{\beta}) = |\boldsymbol{\beta}|_q^q = \sum_{j=1}^{p}|\beta_j|^q$ in equation (7), which is a generalization of both the lasso (q = 1) and ridge regression (q = 2). The bridge estimator can be viewed as the Bayes posterior mode under the prior

$$p_{\lambda,q}(\boldsymbol{\beta}) = C(\lambda, q)\exp\left(-\lambda|\boldsymbol{\beta}|_q^q\right). \qquad (8)$$

Ridge regression (q = 2) corresponds to a Gaussian prior and the lasso (q = 1) a Laplacian (or double-exponential) prior. The elastic net penalty corresponds to a new prior given by

$$p_{\lambda,\alpha}(\boldsymbol{\beta}) = C(\lambda, \alpha)\exp\left[-\lambda\{\alpha|\boldsymbol{\beta}|^2 + (1-\alpha)|\boldsymbol{\beta}|_1\}\right], \qquad (9)$$

a compromise between the Gaussian and Laplacian priors. Although bridge regression with 1 < q < 2 will have many similarities with the elastic net, there is a fundamental difference between them. The elastic net produces sparse solutions, whereas bridge regression does not. Fan and Li (2001) proved that, in the Lq (q ⩾ 1) penalty family, only the lasso penalty (q = 1) can produce a sparse solution. Bridge regression (1 < q < 2) always keeps all predictors in the model, as does ridge regression. Since automatic variable selection via penalization is a primary objective of this paper, Lq (1 < q < 2) penalization is not a candidate.

3. Elastic net

3.1. Deficiency of the naïve elastic net

As an automatic variable selection method, the naïve elastic net overcomes the limitations of the lasso in scenarios (a) and (b). However, empirical evidence (see Sections 4 and 5) shows that the naïve elastic net does not perform satisfactorily unless it is very close to either ridge regression or the lasso. This is why we call it naïve.

In the regression prediction setting, an accurate penalization method achieves good prediction performance through the bias–variance trade-off. The naïve elastic net estimator is a two-stage procedure: for each fixed _λ_2 we first find the ridge regression coefficients, and then we do the lasso-type shrinkage along the lasso coefficient solution paths. It appears to incur a double amount of shrinkage. Double shrinkage does not help to reduce the variances much and introduces unnecessary extra bias, compared with pure lasso or ridge shrinkage. In the next section we improve the prediction performance of the naïve elastic net by correcting this double shrinkage.

3.2. The elastic net estimate

We follow the notation in Section 2.2. Given data (y, X), penalty parameters (λ1, λ2) and augmented data (y*, X*), the naïve elastic net solves a lasso-type problem

$$\hat{\boldsymbol{\beta}}^* = \arg\min_{\boldsymbol{\beta}^*} |y^* - X^*\boldsymbol{\beta}^*|^2 + \frac{\lambda_1}{\sqrt{1+\lambda_2}}|\boldsymbol{\beta}^*|_1. \qquad (10)$$

The elastic net (corrected) estimates β̂ are defined by

$$\hat{\boldsymbol{\beta}}(\text{elastic net}) = \sqrt{1+\lambda_2}\,\hat{\boldsymbol{\beta}}^*. \qquad (11)$$

Recall that β̂(naïve elastic net) = {1/√(1+λ2)} β̂*; thus

$$\hat{\boldsymbol{\beta}}(\text{elastic net}) = (1+\lambda_2)\,\hat{\boldsymbol{\beta}}(\text{naïve elastic net}). \qquad (12)$$

Hence the elastic net coefficient is a rescaled naïve elastic net coefficient.

Such a scaling transformation preserves the variable selection property of the naïve elastic net and is the simplest way to undo shrinkage. Hence all the good properties of the naïve elastic net that were described in Section 2 hold for the elastic net. Empirically we have found that the elastic net performs very well when compared with the lasso and ridge regression.
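Putting lemma 1 and the rescaling in equations (11)–(12) together gives a simple, if not particularly efficient, way to compute elastic net estimates with any off-the-shelf lasso routine. In the sketch below, `lasso_solver(X, y, gamma)` is a generic placeholder for a solver of min_b |y − Xb|² + γ|b|₁; both the callable and its name are our own abstraction, not part of the paper:

```python
import numpy as np

def elastic_net_via_lasso(X, y, lam1, lam2, lasso_solver):
    """Elastic net estimate via the augmented lasso of lemma 1,
    followed by the (1 + lam2) rescaling of equation (12)."""
    p = X.shape[1]
    X_star = np.vstack([X, np.sqrt(lam2) * np.eye(p)]) / np.sqrt(1.0 + lam2)
    y_star = np.concatenate([y, np.zeros(p)])
    gamma = lam1 / np.sqrt(1.0 + lam2)
    beta_star = lasso_solver(X_star, y_star, gamma)   # beta^* of equation (10)
    beta_naive = beta_star / np.sqrt(1.0 + lam2)      # naive elastic net estimate
    return (1.0 + lam2) * beta_naive                  # = sqrt(1 + lam2) * beta_star
```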

We have another justification for choosing 1+_λ_2 as the scaling factor. Consider the exact solution of the naïve elastic net when the predictors are orthogonal. The lasso is known to be minimax optimal (Donoho et al., 1995) in this case, which implies that the naïve elastic net is not optimal. After scaling by 1+_λ_2, the elastic net automatically achieves minimax optimality.

A strong motivation for the (1+λ2)-rescaling comes from a decomposition of the ridge operator. Since the predictors X are standardized, we have

$$X^TX = \begin{pmatrix} 1 & \rho_{12} & \cdots & \rho_{1p} \\ & 1 & & \vdots \\ & & \ddots & \rho_{p-1,p} \\ & & & 1 \end{pmatrix}_{p \times p},$$

where ρi,j is the sample correlation. Ridge estimates with parameter λ2 are given by β̂(ridge) = Ry with R = (XᵀX + λ2 I)⁻¹Xᵀ.

We can rewrite R as

$$R = \frac{1}{1+\lambda_2}\,R^* = \frac{1}{1+\lambda_2}\begin{pmatrix} 1 & \dfrac{\rho_{12}}{1+\lambda_2} & \cdots & \dfrac{\rho_{1p}}{1+\lambda_2} \\ & 1 & & \vdots \\ & & \ddots & \dfrac{\rho_{p-1,p}}{1+\lambda_2} \\ & & & 1 \end{pmatrix}^{-1} X^T. \qquad (13)$$

R* is like the usual OLS operator except that the correlations are shrunk by the factor 1/(1+_λ_2), which we call decorrelation. Hence from equation (13) we can interpret the ridge operator as decorrelation followed by direct scaling shrinkage.

This decomposition suggests that the grouping effect of ridge regression is caused by the decorrelation step. When we combine the grouping effect of ridge regression with the lasso, the direct 1/(1+_λ_2) shrinkage step is not needed and is removed by rescaling. Although ridge regression requires 1/(1+_λ_2) shrinkage to control the estimation variance effectively, in our new method, we can rely on the lasso shrinkage to control the variance and to obtain sparsity.

From now on, let β̂ stand for β̂(elastic net). The next theorem gives another presentation of the elastic net, in which the decorrelation argument is more explicit.

Theorem 2. Given data (y, X) and (λ1, λ2), the elastic net estimates β̂ are given by

$$\hat{\boldsymbol{\beta}} = \arg\min_{\boldsymbol{\beta}} \, \boldsymbol{\beta}^T\left(\frac{X^TX + \lambda_2 I}{1+\lambda_2}\right)\boldsymbol{\beta} - 2y^TX\boldsymbol{\beta} + \lambda_1|\boldsymbol{\beta}|_1. \qquad (14)$$

It is easy to see that

$$\hat{\boldsymbol{\beta}}(\text{lasso}) = \arg\min_{\boldsymbol{\beta}} \, \boldsymbol{\beta}^T(X^TX)\boldsymbol{\beta} - 2y^TX\boldsymbol{\beta} + \lambda_1|\boldsymbol{\beta}|_1. \qquad (15)$$

Hence theorem 2 interprets the elastic net as a stabilized version of the lasso. Note that Σ̂ = XᵀX is a sample version of the correlation matrix Σ and

$$\frac{X^TX + \lambda_2 I}{1+\lambda_2} = (1-\gamma)\hat{\Sigma} + \gamma I$$

with γ = λ2/(1+λ2) shrinks Σ̂ towards the identity matrix. Together equations (14) and (15) say that rescaling after the elastic net penalization is mathematically equivalent to replacing Σ̂ with its shrunken version in the lasso. In linear discriminant analysis, the prediction accuracy can often be improved by replacing Σ̂ by a shrunken estimate (Friedman, 1989; Hastie et al., 2001). Likewise we improve the lasso by regularizing Σ̂ in equation (15).
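The shrinkage of Σ̂ towards the identity can be written in a few lines of NumPy (our own helper; X is assumed standardized as in Section 2.1):

```python
import numpy as np

def shrunken_correlation(X, lam2):
    """(1 - gamma) * Sigma_hat + gamma * I with gamma = lam2 / (1 + lam2),
    which equals (X^T X + lam2 * I) / (1 + lam2)."""
    sigma_hat = X.T @ X
    gamma = lam2 / (1.0 + lam2)
    return (1.0 - gamma) * sigma_hat + gamma * np.eye(sigma_hat.shape[0])
```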

3.3. Connections with univariate soft thresholding

The lasso is a special case of the elastic net with λ2 = 0. The other interesting special case of the elastic net emerges when λ2 → ∞. By theorem 2, β̂ → β̂(∞) as λ2 → ∞, where

$$\hat{\boldsymbol{\beta}}(\infty) = \arg\min_{\boldsymbol{\beta}} \, \boldsymbol{\beta}^T\boldsymbol{\beta} - 2y^TX\boldsymbol{\beta} + \lambda_1|\boldsymbol{\beta}|_1.$$

β̂(∞) has a simple closed form

$$\hat{\beta}(\infty)_i = \left(|y^Tx_i| - \frac{\lambda_1}{2}\right)_+ \mathrm{sgn}(y^Tx_i), \qquad i = 1, 2, \ldots, p. \qquad (16)$$

Observe that yᵀxi is the univariate regression coefficient of the ith predictor and β̂(∞) are the estimates obtained by applying soft thresholding to the univariate regression coefficients; thus equation (16) is called univariate soft thresholding (UST).

UST totally ignores the dependence between predictors and treats them as independent variables. Although this may be considered illegitimate, UST and its variants are used in other methods such as significance analysis of microarrays (Tusher et al., 2001) and the nearest shrunken centroids classifier (Tibshirani et al., 2002), and have shown good empirical performance. The elastic net naturally bridges the lasso and UST.
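For completeness, equation (16) in code form (a sketch with our own function name; X is assumed standardized and y centred):

```python
import numpy as np

def univariate_soft_threshold(X, y, lam1):
    """Univariate soft thresholding, equation (16): the lam2 -> infinity
    limit of the elastic net."""
    z = X.T @ y                                   # univariate regression coefficients
    return np.sign(z) * np.maximum(np.abs(z) - lam1 / 2.0, 0.0)
```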

3.4. Computation: the algorithm LARS-EN

We propose an algorithm, called LARS-EN, to solve the elastic net efficiently; it is based on the recently proposed LARS algorithm of Efron et al. (2004). They proved that, starting from zero, the lasso solution paths grow piecewise linearly in a predictable way, and they proposed algorithm LARS to compute the entire lasso solution path with the same order of computation as a single OLS fit. By lemma 1, for each fixed λ2 the elastic net problem is equivalent to a lasso problem on the augmented data set, so algorithm LARS can be used directly to create the entire elastic net solution path with the computational effort of a single OLS fit. Note, however, that for p ≫ n the augmented data set has p + n ‘observations’ and p variables, which can slow the computation considerably.

We further facilitate the computation by taking advantage of the sparse structure of X*, which is crucial in the p ≫ n case. In detail, as outlined in Efron et al. (2004), at the kth step we need to invert the matrix $G_{A_k} = X^{*T}_{A_k} X^{*}_{A_k}$, where Ak is the active variable set. This is done efficiently by updating or downdating the Cholesky factorization of $G_{A_{k-1}}$ found at the previous step. Note that

$$G_A = \frac{1}{1+\lambda_2}\left(X_A^T X_A + \lambda_2 I\right)$$

for any index set A, so it amounts to updating or downdating the Cholesky factorization of $X_{A_{k-1}}^T X_{A_{k-1}} + \lambda_2 I$. It turns out that we can use a simple formula to update this Cholesky factorization, very similar to the formula used for updating the Cholesky factorization of $X_{A_{k-1}}^T X_{A_{k-1}}$ (Golub and Van Loan, 1983), and exactly the same downdating function can be used. In addition, when calculating the equiangular vector and the inner products of the non-active predictors with the current residuals, we can save computations by using the simple fact that $X_j^*$ has p − 1 zero elements. In a word, we never explicitly use X* to compute the quantities in algorithm LARS. It is also economical to record only the non-zero coefficients and the active variable set at each LARS-EN step.
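The key identity above, that the Gram matrix of the active augmented predictors reduces to (X_AᵀX_A + λ2 I)/(1+λ2), can be exploited directly, without ever forming X*. A sketch that ignores the Cholesky up/downdating details (the function name is ours):

```python
import numpy as np

def active_gram(X, active, lam2):
    """Gram matrix G_A = X*_A^T X*_A computed without forming X* explicitly:
    G_A = (X_A^T X_A + lam2 * I) / (1 + lam2) for any index set A."""
    XA = X[:, active]
    k = XA.shape[1]
    return (XA.T @ XA + lam2 * np.eye(k)) / (1.0 + lam2)
```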

Algorithm LARS-EN sequentially updates the elastic net fits. In the p ≫ n case, such as with microarray data, it is not necessary to run the algorithm to the end (early stopping). Real data and simulated computational experiments show that the optimal results are achieved at an early stage of algorithm LARS-EN. If we stop the algorithm after m steps, then it requires O(m³ + pm²) operations.

3.5. Choice of tuning parameters

We now discuss how to choose the type and value of the tuning parameters in the elastic net. Although we defined the elastic net by using (λ1, λ2), this is not the only possible choice of tuning parameters. In the lasso, the conventional tuning parameter is the L1-norm of the coefficients (t) or the fraction of the L1-norm (s). By the proportional relationship between β̂ and β̂*, we can also use (λ2, s) or (λ2, t) to parameterize the elastic net. The advantage of using (λ2, s) is that s always takes values in [0,1]. In algorithm LARS the lasso is described as a forward stagewise additive fitting procedure and shown to be (almost) identical to ε-L2 boosting (Efron et al., 2004). This view adopts the number of steps k of algorithm LARS as a tuning parameter for the lasso. For each fixed λ2, the elastic net is solved by our algorithm LARS-EN; hence we can similarly use the number of LARS-EN steps (k) as the second tuning parameter besides λ2. These three types of tuning parameter correspond to three ways of interpreting the piecewise linear elastic net or lasso solution paths, as shown in Fig. 3.


Fig. 3

(a) Lasso estimates as a function of s and (b) elastic net estimates (λ = 1000) as a function of s: both estimates are piecewise linear, which is a key property of our efficient algorithm; the solution paths also show that the elastic net is identical to univariate soft thresholding in this example (⋮, final model selected)

There are well-established methods for choosing such tuning parameters (Hastie et al. (2001), chapter 7). If only training data are available, tenfold cross-validation (CV) is a popular method for estimating the prediction error and comparing different models, and we use it here. Note that there are two tuning parameters in the elastic net, so we need to cross-validate on a two-dimensional surface. Typically we first pick a (relatively small) grid of values for λ2, say (0, 0.01, 0.1, 1, 10, 100). Then, for each λ2, algorithm LARS-EN produces the entire solution path of the elastic net. The other tuning parameter (λ1, s or k) is selected by tenfold CV. The chosen λ2 is the one giving the smallest CV error.

For each λ2, the computational cost of tenfold CV is the same as 10 OLS fits. Thus two-dimensional CV is computationally thrifty in the usual n > p setting. In the p ≫ n case, the cost grows linearly with p and is still manageable. Practically, early stopping is used to ease the computational burden. For example, suppose that n = 30 and p = 5000; if we do not want more than 200 variables in the final model, we may stop algorithm LARS-EN after 500 steps and consider only the best k within 500.
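A rough sketch of this two-dimensional tuning loop is given below, written around two generic callables `fit(X, y, lam2)` (returns the whole solution path for one λ2) and `predict(path, X, s)` (evaluates the path at a given value of the second parameter). Both callables and all names are our own abstractions standing in for LARS-EN:

```python
import numpy as np

def two_dim_cv(X, y, lam2_grid, s_grid, fit, predict, n_folds=10, seed=0):
    """Choose (lam2, s) by tenfold CV: for each lam2, fit the whole solution
    path once per training fold and score every candidate s on the held-out fold."""
    n = len(y)
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(n), n_folds)
    cv_err = np.zeros((len(lam2_grid), len(s_grid)))
    for a, lam2 in enumerate(lam2_grid):
        for test_idx in folds:
            train_idx = np.setdiff1d(np.arange(n), test_idx)
            path = fit(X[train_idx], y[train_idx], lam2)
            for b, s in enumerate(s_grid):
                pred = predict(path, X[test_idx], s)
                cv_err[a, b] += np.mean((y[test_idx] - pred) ** 2) / n_folds
    a_best, b_best = np.unravel_index(np.argmin(cv_err), cv_err.shape)
    return lam2_grid[a_best], s_grid[b_best], cv_err
```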

From now on we drop the subscript of λ2 if s or k is the other parameter.

4. Prostate cancer example

The data in this example come from a study of prostate cancer (Stamey et al., 1989). The predictors are eight clinical measures: log(cancer volume) (lcavol), log(prostate weight) (lweight), age, the logarithm of the amount of benign prostatic hyperplasia (lbph), seminal vesicle invasion (svi), log(capsular penetration) (lcp), Gleason score (gleason) and percentage Gleason score 4 or 5 (pgg45). The response is the logarithm of prostate-specific antigen (lpsa).

OLS, ridge regression, the lasso, the naïve elastic net and the elastic net were all applied to these data. The prostate cancer data were divided into two parts: a training set with 67 observations and a test set with 30 observations. Model fitting and tuning parameter selection by tenfold CV were carried out on the training data. We then compared the performance of those methods by computing their prediction mean-squared error on the test data.

Table 1 clearly shows that the elastic net is the winner among all the competitors in terms of both prediction accuracy and sparsity. OLS is the worst method. The naïve elastic net performs identically to ridge regression in this example and fails to do variable selection. The lasso includes lcavol, lweight, lbph, svi and pgg45 in the final model, whereas the elastic net selects lcavol, lweight, svi, lcp and pgg45. The prediction error of the elastic net is about 24% lower than that of the lasso. We also see in this case that the elastic net is actually UST, because the selected λ is very big (1000). This can be considered as a piece of empirical evidence supporting UST. Fig. 3 displays the lasso and the elastic net solution paths.

Table 1

Prostate cancer data: comparing different methods

Method              Parameter(s)     Test mean-squared error   Variables selected
OLS                                  0.586 (0.184)             All
Ridge regression    λ=1              0.566 (0.188)             All
Lasso               s=0.39           0.499 (0.161)             (1,2,4,5,8)
Naïve elastic net   λ=1, s=1         0.566 (0.188)             All
Elastic net         λ=1000, s=0.26   0.381 (0.105)             (1,2,5,6,8)

If we check the correlation matrix of these eight predictors, we see that there are some medium correlations, although the highest is 0.76 (between pgg45 and gleason). We have seen that the elastic net dominates the lasso by a good margin. In other words, the lasso is hurt by the high correlation. We conjecture that, whenever ridge regression improves on OLS, the elastic net will improve on the lasso. We demonstrate this point by simulations in the next section.

5. A simulation study

The purpose of this simulation is to show that the elastic net not only dominates the lasso in terms of prediction accuracy but also is a better variable selection procedure than the lasso. We simulate data from the true model

$$y = X\boldsymbol{\beta} + \sigma\epsilon, \qquad \epsilon \sim N(0, 1).$$

Four examples are presented here. The first three examples were used in the original lasso paper (Tibshirani, 1996), to compare the prediction performance of the lasso and ridge regression systematically. The fourth example creates a grouped variable situation.

Within each example, our simulated data consist of a training set, an independent validation set and an independent test set. Models were fitted on training data only, and the validation data were used to select the tuning parameters. We computed the test error (the mean-squared error) on the test data set. We use the notation ·/·/· to describe the number of observations in the training, validation and test set respectively, e.g. 20/20/200. Here are the details of the four scenarios.

Table 2 and Fig. 4 (box plots) summarize the prediction results. First we see that the naïve elastic net either has a very poor performance (in example 1) or behaves almost identically to either ridge regression (in examples 2 and 3) or the lasso (in example 4). In all the examples, the elastic net is significantly more accurate than the lasso, even when the lasso is doing much better than ridge regression. The reductions in the prediction error in examples 1, 2, 3 and 4 are 18%, 18%, 13% and 27% respectively. The simulation results indicate that the elastic net dominates the lasso under collinearity.

Table 2

Median mean-squared errors for the simulated examples and four methods based on 50 replications

Method              Example 1     Example 2     Example 3     Example 4
Lasso               3.06 (0.31)   3.87 (0.38)   65.0 (2.82)   46.6 (3.96)
Elastic net         2.51 (0.29)   3.16 (0.27)   56.6 (1.75)   34.5 (1.64)
Ridge regression    4.49 (0.46)   2.84 (0.27)   39.5 (1.80)   64.5 (4.78)
Naïve elastic net   5.70 (0.41)   2.73 (0.23)   41.0 (2.13)   45.9 (3.72)

The numbers in parentheses are the corresponding standard errors (of the medians) estimated by using the bootstrap with B=500 resamplings on the 50 mean-squared errors.


Fig. 4

Comparing the accuracy of prediction of the lasso, the elastic net (Enet), ridge regression and the naïve elastic net (NEnet) (the elastic net outperforms the lasso in all four examples): (a) example 1; (b) example 2; (c) example 3; (d) example 4

Table 3 shows that the elastic net produces sparse solutions. The elastic net tends to select more variables than the lasso does, owing to the grouping effect. In example 4 where grouped selection is required, the elastic net behaves like the ‘oracle’. The additional ‘grouped selection’ ability makes the elastic net a better variable selection method than the lasso.

Table 3

Median number of non-zero coefficients

Method        Example 1   Example 2   Example 3   Example 4
Lasso         5           6           24          11
Elastic net   6           7           27          16

Here is an idealized example showing the important differences between the elastic net and the lasso. Let _Z_1 and _Z_2 be two independent U(0,20) variables. The response y is generated as N(_Z_1+0.1_Z_2,1). Suppose that we observe only

$$\begin{aligned} x_1 &= Z_1 + \varepsilon_1, & x_2 &= -Z_1 + \varepsilon_2, & x_3 &= Z_1 + \varepsilon_3, \\ x_4 &= Z_2 + \varepsilon_4, & x_5 &= -Z_2 + \varepsilon_5, & x_6 &= Z_2 + \varepsilon_6, \end{aligned}$$

where εi are independent identically distributed N(0, 1/16). We generated 100 observations from this model. The variables x1, x2 and x3 form a group whose underlying factor is Z1, and x4, x5 and x6 form a second group whose underlying factor is Z2. The within-group correlations are almost 1 and the between-group correlations are almost 0. An oracle would identify the Z1-group as the important variates. Fig. 5 compares the solution paths of the lasso and the elastic net.
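The data-generating step of this idealized example is easy to reproduce; a NumPy sketch (the seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
Z1 = rng.uniform(0.0, 20.0, n)
Z2 = rng.uniform(0.0, 20.0, n)
y = rng.normal(Z1 + 0.1 * Z2, 1.0)                 # response N(Z1 + 0.1 Z2, 1)

eps = rng.normal(0.0, 0.25, size=(n, 6))           # sd 1/4, i.e. variance 1/16
X = np.column_stack([Z1, -Z1, Z1, Z2, -Z2, Z2]) + eps   # x1, ..., x6
```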


Fig. 5

(a) Lasso and (b) elastic net (λ2 = 0.5) solution paths: the lasso paths are unstable and (a) does not reveal any correlation information by itself; in contrast, the elastic net has much smoother solution paths, while clearly showing the ‘grouped selection’: x1, x2 and x3 are in one ‘significant’ group and x4, x5 and x6 are in the other ‘trivial’ group; the decorrelation yields the grouping effect and stabilizes the lasso solution

6. Microarray classification and gene selection

A typical microarray data set has thousands of genes and fewer than 100 samples. Because of the unique structure of microarray data, we feel that a good classification method should have the following properties:

(a) gene selection should be built into the procedure;

(b) it should not be limited by the fact that p ≫ n;

(c) for genes sharing the same biological pathway, it should be able to include whole groups into the model once one gene among them is selected.

From published results in this domain, it appears that many classifiers achieve similar low classification error rates. But many of these methods do not select genes in a satisfactory way. Most of the popular classifiers fail with respect to at least one of the above properties. The lasso is good at (a) but fails both (b) and (c). The support vector machine (Guyon et al., 2002) and penalized logistic regression (Zhu and Hastie, 2004) are very successful classifiers, but they cannot do gene selection automatically and both use either univariate ranking (Golub et al., 1999) or recursive feature elimination (Guyon et al., 2002) to reduce the number of genes in the final model.

As an automatic variable selection method, the elastic net naturally overcomes the difficulty of p ≫ n and has the ability to do grouped selection. We use the leukaemia data to illustrate the elastic net classifier.

The leukaemia data consist of 7129 genes and 72 samples (Golub et al., 1999). In the training data set, there are 38 samples, among which 27 are type 1 leukaemia (acute lymphoblastic leukaemia) and 11 are type 2 leukaemia (acute myeloid leukaemia). The goal is to construct a diagnostic rule based on the expression levels of those 7129 genes to predict the type of leukaemia. The remaining 34 samples are used to test the prediction accuracy of the diagnostic rule. To apply the elastic net, we first coded the type of leukaemia as a 0–1 response y. The classification function is I(fitted value > 0.5), where I(·) is the indicator function. We used tenfold CV to select the tuning parameters.

We used prescreening to make the computation more manageable. Each time a model is fitted, we first select the 1000 most ‘significant’ genes as the predictors, according to their t-statistic scores (Tibshirani et al., 2002). Note that this screening is done separately in each training fold of the CV. In practice, the screening does not affect the results, because we stop the elastic net path relatively early, at a stage when the screened-out genes are unlikely to have entered the model.
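A rough sketch of the prescreening step for a 0–1 response is given below; the two-sample t-statistic here is the standard unequal-variance form, which may differ in detail from the score used by Tibshirani et al. (2002), and the function name is ours:

```python
import numpy as np

def prescreen_by_t(X, y, n_keep=1000):
    """Indices of the n_keep genes with the largest absolute two-sample
    t-statistics. Must be applied within each training fold, never on the
    full data, to avoid selection bias in the CV error."""
    X0, X1 = X[y == 0], X[y == 1]
    mean_diff = X1.mean(axis=0) - X0.mean(axis=0)
    se = np.sqrt(X0.var(axis=0, ddof=1) / len(X0) + X1.var(axis=0, ddof=1) / len(X1))
    t_stat = mean_diff / se
    return np.argsort(-np.abs(t_stat))[:n_keep]
```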

All the prescreening, fitting and tuning were done using only the training set and the classification error is evaluated on the test data.

We stopped algorithm LARS-EN after 200 steps. As can be seen from Fig. 6, using the number of steps k in the algorithm as the tuning parameter, the elastic net classifier (λ = 0.01 and k = 82) gives a tenfold CV error of 3/38 and a test error of 0/34 with 45 genes selected. Fig. 7 displays the elastic net solution paths and the gene selection results. Table 4 compares the elastic net with several competitors including Golub's method, the support vector machine, penalized logistic regression and the nearest shrunken centroids (Tibshirani et al., 2002). The elastic net gives the best classification, and it has an internal gene selection facility.


Fig. 6

Leukaemia classification and gene selection by the elastic net (λ = 0.01): (a) the early stopping strategy at 200 steps finds the optimal classifier with much less computational cost than (b) the whole elastic net paths; with early stopping, the number of steps is much more convenient than s, the fraction of the L1-norm, since computing s depends on the fit at the last step of algorithm LARS-EN; the actual values of s are not available in tenfold CV (+) if the algorithm is stopped early; on the training set, 200 steps are equivalent to s = 0.50 (⋮)


Fig. 7

Leukaemia data: elastic net coefficient paths (up to k = 100): the numbers at the top indicate the number of non-zero coefficients (selected genes) at each step; the optimal elastic net model is given by the fit at step 82 (⋮) with 45 selected genes; note that the size of the training set is 38, so the lasso can select at most 38 genes; in contrast, the elastic net selected more than 38 genes, not limited by the sample size; λ = 0.01 is chosen by tenfold CV; if a bigger λ is used, the grouping effect will be stronger

Table 4

Summary of the leukaemia classification results

Method                                                        Tenfold CV error   Test error   Number of genes
Golub                                                         3/38               4/34         50
Support vector machine–recursive feature elimination          2/38               1/34         31
Penalized logistic regression–recursive feature elimination   2/38               1/34         26
Nearest shrunken centroids                                    2/38               2/34         21
Elastic net                                                   3/38               0/34         45

7. Discussion

We have proposed the elastic net, a novel shrinkage and selection method. The elastic net produces a sparse model with good prediction accuracy, while encouraging a grouping effect. The empirical results and simulations demonstrate the good performance of the elastic net and its superiority over the lasso. When used as a (two-class) classification method, the elastic net appears to perform well on microarray data in terms of the misclassification error, and it does automatic gene selection.

Although our methodology is motivated by regression problems, the elastic net penalty can be used in classification problems with any consistent (Zhang, 2004) loss function, including the L2-loss that we have considered here and the binomial deviance. Some nice properties of the elastic net are better understood in the classification paradigm. For example, Fig. 6 is a familiar picture in boosting: the test error keeps decreasing, reaches a long flat region and then slightly increases (Hastie et al., 2001). This is no coincidence. In fact we have discovered that the elastic net penalty has a close connection with the maximum margin explanation (Rosset et al., 2004) for the success of the support vector machine and boosting. Thus Fig. 6 has a nice margin-based explanation. We have made some progress in using the elastic net penalty in classification, which will be reported in a future paper.

We view the elastic net as a generalization of the lasso, which has been shown to be a valuable tool for model fitting and feature extraction. Recently the lasso was used to explain the success of boosting: boosting performs a high dimensional lasso without explicitly using the lasso penalty (Hastie et al., 2001; Friedman et al., 2004). Our results offer other insights into the lasso, and ways to improve it.

Acknowledgements

We thank Rob Tibshirani and Ji Zhu for helpful comments, and an Associate Editor and referee for their useful comments and references. Trevor Hastie was partially supported by grant DMS-0204162 from the National Science Foundation and grant RO1-EB0011988-08 from the National Institutes of Health. Hui Zou was supported by grant DMS-0204162 from the National Science Foundation.

References

1. Breiman, L. (1996) Heuristics of instability and stabilization in model selection. Ann. Statist., 24, 2350–2383.

2. Dettling, M. and Bühlmann, P. (2004) Finding predictive gene groups from microarray data. J. Multiv. Anal., 90, 106–131.

3. Díaz-Uriarte, R. (2003) A simple method for finding molecular signatures from gene expression data. Technical Report. Spanish National Cancer Center. (Available from http://www.arxiv.org/abs/q-bio.QM/0401043.)

4. Donoho, D. L., Johnstone, I. M., Kerkyacharian, G. and Picard, D. (1995) Wavelet shrinkage: asymptopia (with discussion)? J. R. Statist. Soc. B, 57, 301–369.

5. Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004) Least angle regression. Ann. Statist., 32, 407–499.

6. Fan, J. and Li, R. (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Statist. Ass., 96, 1348–1360.

7. Frank, I. and Friedman, J. (1993) A statistical view of some chemometrics regression tools. Technometrics, 35, 109–148.

8. Friedman, J. (1989) Regularized discriminant analysis. J. Am. Statist. Ass., 84, 249–266.

9. Friedman, J., Hastie, T., Rosset, S., Tibshirani, R. and Zhu, J. (2004) Discussion of boosting papers. Ann. Statist., 32, 102–107.

10. Fu, W. (1998) Penalized regression: the bridge versus the lasso. J. Computnl Graph. Statist., 7, 397–416.

11. Golub, G. and Van Loan, C. (1983) Matrix Computations. Baltimore: Johns Hopkins University Press.

12. Golub, T., Slonim, D., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J., Coller, H., Loh, M., Downing, J. and Caligiuri, M. (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science, 286, 513–536.

13. Guyon, I., Weston, J., Barnhill, S. and Vapnik, V. (2002) Gene selection for cancer classification using support vector machines. Mach. Learn., 46, 389–422.

14. Hastie, T., Tibshirani, R., Botstein, D. and Brown, P. (2003) Supervised harvesting of expression trees. Genome Biol., 2, 0003.1–0003.12.

15. Hastie, T., Tibshirani, R., Eisen, M., Brown, P., Ross, D., Scherf, U., Weinstein, J., Alizadeh, A., Staudt, L. and Botstein, D. (2000) ‘Gene shaving’ as a method for identifying distinct sets of genes with similar expression patterns. Genome Biol., 1, 1–21.

16. Hastie, T., Tibshirani, R. and Friedman, J. (2001) The Elements of Statistical Learning; Data Mining, Inference and Prediction. New York: Springer.

17. Hoerl, A. and Kennard, R. (1988) Ridge regression. In Encyclopedia of Statistical Sciences, vol. 8, pp. 129–136. New York: Wiley.

18. Rosset, S., Zhu, J. and Hastie, T. (2004) Boosting as a regularized path to a maximum margin classifier. J. Mach. Learn. Res., 5, 941–973.

19. Segal, M., Dahlquist, K. and Conklin, B. (2003) Regression approach for microarray data analysis. J. Computnl Biol., 10, 961–980.

20. Stamey, T., Kabalin, J., McNeal, J., Johnstone, I., Freiha, F., Redwine, E. and Yang, N. (1989) Prostate specific antigen in the diagnosis and treatment of adenocarcinoma of the prostate II: radical prostatectomy treated patients. J. Urol., 16, 1076–1083.

21. Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J. R. Statist. Soc. B, 58, 267–288.

22. Tibshirani, R., Hastie, T., Narasimhan, B. and Chu, C. (2002) Diagnosis of multiple cancer types by shrunken centroids of gene expression. Proc. Natn. Acad. Sci. USA, 99, 6567–6572.

23. Tusher, V., Tibshirani, R. and Chu, C. (2001) Significance analysis of microarrays applied to transcriptional responses to ionizing radiation. Proc. Natn. Acad. Sci. USA, 98, 5116–5121.

24. West, M., Blanchette, C., Dressman, H., Huang, E., Ishida, S., Spang, R., Zuzan, H., Marks, J. and Nevins, J. (2001) Predicting the clinical status of human breast cancer using gene expression profiles. Proc. Natn. Acad. Sci. USA, 98, 11462–11467.

25. Zhang, T. (2004) Statistical behavior and consistency of classification methods based on convex risk minimization. Ann. Statist., 32, 469–475.

26. Zhu, J. and Hastie, T. (2004) Classification of gene microarrays by penalized logistic regression. Biostatistics, 5, 427–444.

Appendix A: Proofs

A.1. Proof of lemma 2

A.1.1. Part (1)

Fix λ > 0. If β̂i ≠ β̂j, let us consider β̂* as follows:

$$\hat{\beta}^*_k = \begin{cases} \hat{\beta}_k & \text{if } k \neq i \text{ and } k \neq j, \\ \tfrac{1}{2}(\hat{\beta}_i + \hat{\beta}_j) & \text{if } k = i \text{ or } k = j. \end{cases}$$

Because xi = xj, it is obvious that Xβ̂* = Xβ̂; thus |y − Xβ̂*|² = |y − Xβ̂|². However, J(·) is strictly convex, so we have J(β̂*) < J(β̂). Therefore β̂ cannot be the minimizer of equation (7), which is a contradiction. So we must have β̂i = β̂j.

A.1.2. Part (2)

If β̂iβ̂j < 0, consider the same β̂* again. We see that |β̂*|1 < |β̂|1, so β̂ cannot be a lasso solution. The rest can be verified directly from the definition of the lasso, and is thus omitted.

A.2. Proof of theorem 1

If β̂i(λ1, λ2) β̂j(λ1, λ2) > 0, then both β̂i(λ1, λ2) and β̂j(λ1, λ2) are non-zero, and we have sgn{β̂i(λ1, λ2)} = sgn{β̂j(λ1, λ2)}. Because of equation (4), β̂(λ1, λ2) satisfies

$$\left.\frac{\partial L(\lambda_1, \lambda_2, \boldsymbol{\beta})}{\partial \beta_k}\right|_{\boldsymbol{\beta} = \hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)} = 0 \qquad \text{if } \hat{\beta}_k(\lambda_1,\lambda_2) \neq 0. \qquad (17)$$

Hence we have

$$-2x_i^T\{y - X\hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)\} + \lambda_1\,\mathrm{sgn}\{\hat{\beta}_i(\lambda_1,\lambda_2)\} + 2\lambda_2\hat{\beta}_i(\lambda_1,\lambda_2) = 0, \qquad (18)$$

$$-2x_j^T\{y - X\hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)\} + \lambda_1\,\mathrm{sgn}\{\hat{\beta}_j(\lambda_1,\lambda_2)\} + 2\lambda_2\hat{\beta}_j(\lambda_1,\lambda_2) = 0. \qquad (19)$$

Subtracting equation (18) from equation (19) gives

$$(x_j^T - x_i^T)\{y - X\hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)\} + \lambda_2\{\hat{\beta}_i(\lambda_1,\lambda_2) - \hat{\beta}_j(\lambda_1,\lambda_2)\} = 0,$$

which is equivalent to

$$\hat{\beta}_i(\lambda_1,\lambda_2) - \hat{\beta}_j(\lambda_1,\lambda_2) = \frac{1}{\lambda_2}\,(x_i^T - x_j^T)\,\hat{r}(\lambda_1,\lambda_2), \qquad (20)$$

where r̂(λ1, λ2) = y − Xβ̂(λ1, λ2) is the residual vector. Since the predictors are standardized, |xi − xj|² = 2(1 − ρ), where ρ = xiᵀxj. By equation (4) we must have

$$L\{\lambda_1, \lambda_2, \hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)\} \leq L(\lambda_1, \lambda_2, \boldsymbol{\beta} = 0),$$

i.e.

$$|\hat{r}(\lambda_1,\lambda_2)|^2 + \lambda_2|\hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)|^2 + \lambda_1|\hat{\boldsymbol{\beta}}(\lambda_1,\lambda_2)|_1 \leq |y|^2.$$

So |r̂(λ1, λ2)| ⩽ |y|. Then equation (20) implies that

$$D_{\lambda_1,\lambda_2}(i,j) \leq \frac{1}{\lambda_2}\,\frac{|\hat{r}(\lambda_1,\lambda_2)|}{|y|_1}\,|x_i - x_j| \leq \frac{1}{\lambda_2}\sqrt{2(1-\rho)}.$$

A.3. Proof of theorem 2

Let β̂ be the elastic net estimates. By definition and equation (10) we have

$$\begin{aligned} \hat{\boldsymbol{\beta}} &= \arg\min_{\boldsymbol{\beta}} \left| y^* - \frac{X^*\boldsymbol{\beta}}{\sqrt{1+\lambda_2}} \right|^2 + \frac{\lambda_1}{\sqrt{1+\lambda_2}}\left|\frac{\boldsymbol{\beta}}{\sqrt{1+\lambda_2}}\right|_1 \\ &= \arg\min_{\boldsymbol{\beta}} \, \boldsymbol{\beta}^T\left(\frac{X^{*T}X^*}{1+\lambda_2}\right)\boldsymbol{\beta} - \frac{2\,y^{*T}X^*\boldsymbol{\beta}}{\sqrt{1+\lambda_2}} + y^{*T}y^* + \frac{\lambda_1|\boldsymbol{\beta}|_1}{1+\lambda_2}. \end{aligned} \qquad (21)$$

Substituting the identities

$$X^{*T}X^* = \frac{X^TX + \lambda_2 I}{1+\lambda_2}, \qquad y^{*T}X^* = \frac{y^TX}{\sqrt{1+\lambda_2}}, \qquad y^{*T}y^* = y^Ty$$

into equation (21), we have

$$\begin{aligned} \hat{\boldsymbol{\beta}} &= \arg\min_{\boldsymbol{\beta}} \, \frac{1}{1+\lambda_2}\left\{\boldsymbol{\beta}^T\left(\frac{X^TX+\lambda_2 I}{1+\lambda_2}\right)\boldsymbol{\beta} - 2y^TX\boldsymbol{\beta} + \lambda_1|\boldsymbol{\beta}|_1\right\} + y^Ty \\ &= \arg\min_{\boldsymbol{\beta}} \, \boldsymbol{\beta}^T\left(\frac{X^TX+\lambda_2 I}{1+\lambda_2}\right)\boldsymbol{\beta} - 2y^TX\boldsymbol{\beta} + \lambda_1|\boldsymbol{\beta}|_1. \end{aligned}$$

© 2005 Royal Statistical Society