Orthogonalizing EM with precomputed XtX

oem.xtx(
  xtx,
  xty,
  family = c("gaussian", "binomial"),
  penalty = c("elastic.net", "lasso", "ols", "mcp", "scad", "mcp.net", "scad.net",
    "grp.lasso", "grp.lasso.net", "grp.mcp", "grp.scad", "grp.mcp.net", "grp.scad.net",
    "sparse.grp.lasso"),
  lambda = numeric(0),
  nlambda = 100L,
  lambda.min.ratio = NULL,
  alpha = 1,
  gamma = 3,
  tau = 0.5,
  groups = numeric(0),
  scale.factor = numeric(0),
  penalty.factor = NULL,
  group.weights = NULL,
  maxit = 500L,
  tol = 1e-07,
  irls.maxit = 100L,
  irls.tol = 0.001
)

Arguments

xtx

input matrix equal to crossprod(x) / nrow(x), where x is the design matrix. It is highly recommended to scale by the number of rows in x. If xtx is scaled, xty must be scaled as well, or the results may be meaningless!

xty

numeric vector of length nvars, equal to crossprod(x, y) / nrow(x). It is highly recommended to scale by the number of rows in x.
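
For example, with a design matrix x and response y in hand (placeholders for your own data), the recommended scaling looks like this:

n.obs <- nrow(x)                    # x and y assumed to be your own data
xtx   <- crossprod(x)    / n.obs    # t(x) %*% x, scaled by the number of rows
xty   <- crossprod(x, y) / n.obs    # t(x) %*% y, scaled the same way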

family

"gaussian" for least squares problems, "binomial" for binary response. (only gaussian implemented currently)

penalty

Specification of penalty type. Choices include:

  • "elastic.net" - elastic net penalty, extra parameters: "alpha"

  • "lasso" - lasso penalty

  • "ols" - ordinary least squares

  • "mcp" - minimax concave penalty, extra parameters: "gamma"

  • "scad" - smoothly clipped absolute deviation, extra parameters: "gamma"

  • "mcp.net" - minimax concave penalty + l2 penalty, extra parameters: "gamma", "alpha"

  • "scad.net" - smoothly clipped absolute deviation + l2 penalty, extra parameters: "gamma", "alpha"

  • "grp.lasso" - group lasso penalty

  • "grp.lasso.net" - group lasso penalty + l2 penalty, extra parameters: "alpha"

  • "grp.mcp" - group minimax concave penalty, extra parameters: "gamma"

  • "grp.scad" - group smoothly clipped absolute deviation, extra parameters: "gamma"

  • "grp.mcp.net" - group minimax concave penalty + l2 penalty, extra parameters: "gamma", "alpha"

  • "grp.scad.net" - group smoothly clipped absolute deviation + l2 penalty, extra parameters: "gamma", "alpha"

  • "sparse.grp.lasso" - sparse group lasso penalty (group lasso + lasso), extra parameters: "tau"

Careful consideration is required for the group lasso, group MCP, and group SCAD penalties. Groups as specified by the groups argument should be chosen in a sensible manner.
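
As a sketch (the parameter values here are arbitrary, and xtx/xty are assumed to be the scaled cross products described above), a penalty with extra tuning parameters is specified by passing the penalty name together with those parameters:

fit <- oem.xtx(xtx = xtx, xty = xty,
               penalty = c("mcp.net", "grp.lasso"),  # several penalties can be fit at once
               gamma   = 3,                          # used by "mcp.net"
               alpha   = 0.5,                        # ridge mixing for "mcp.net"
               groups  = rep(1:20, each = 5))        # required by the group penalty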

lambda

A user supplied lambda sequence. By default, the program computes its own lambda sequence based on nlambda and lambda.min.ratio. Supplying a value of lambda overrides this.

nlambda

The number of lambda values - default is 100.

lambda.min.ratio

Smallest value for lambda, as a fraction of lambda.max, the (data-derived) entry value (i.e. the smallest value for which all coefficients are zero). The default depends on the sample size nobs relative to the number of variables nvars; the default is 0.0001.
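
If a custom path is preferred, a user-supplied sequence can be passed via lambda instead (a sketch; the range below is arbitrary):

lam <- exp(seq(log(0.5), log(0.5 * 1e-4), length.out = 50))  # decreasing on the log scale
fit <- oem.xtx(xtx = xtx, xty = xty, penalty = "lasso", lambda = lam)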

alpha

mixing value for elastic.net, mcp.net, scad.net, grp.lasso.net, grp.mcp.net, and grp.scad.net. The penalty applied is (1 - alpha) * (ridge penalty) + alpha * (lasso/MCP/SCAD/group lasso penalty).

gamma

tuning parameter for the SCAD and MCP penalties. Must be >= 1.

tau

mixing value for sparse.grp.lasso. The penalty applied is (1 - tau) * (group lasso penalty) + tau * (lasso penalty).

groups

A vector describing the grouping of the coefficients. See the example below. All unpenalized variables should be put in group 0.
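
For instance, with ten variables where the first two are unpenalized and the remaining eight form four groups of two, a grouping vector might look like this (illustrative only):

groups <- c(0, 0, rep(1:4, each = 2))  # group 0 = unpenalized; pairs form groups 1-4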

scale.factor

A vector of length nvars == ncol(xtx) == length(xty) used for scaling the columns of x. The standard deviation of each column of x is a common choice for scale.factor. Coefficients will be returned on the original scale. Default is no scaling.
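
Assuming the original design matrix x is still available, its column standard deviations are one reasonable choice:

sf  <- apply(x, 2, sd)   # one scale factor per column of x
fit <- oem.xtx(xtx = xtx, xty = xty, penalty = "lasso", scale.factor = sf)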

penalty.factor

Separate penalty factors can be applied to each coefficient. This is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some variables, which implies no shrinkage, and that variable is always included in the model. Default is 1 for all variables.

group.weights

penalty factors applied to each group for the group lasso. Similar to penalty.factor, this is a number that multiplies lambda to allow differential shrinkage. Can be 0 for some groups, which implies no shrinkage, and that group is always included in the model. Default is sqrt(group size) for all groups.
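
Building on the grouping used in the example below (20 groups of 5 variables, so ncol(xtx) is assumed to be 100), a sketch of both weighting schemes:

pf <- c(0, 0, rep(1, ncol(xtx) - 2))   # first two coefficients are never shrunk
gw <- c(0, rep(1, 19))                 # first group is never shrunk
fit <- oem.xtx(xtx = xtx, xty = xty,
               penalty = c("lasso", "grp.lasso"),
               groups = rep(1:20, each = 5),
               penalty.factor = pf,
               group.weights = gw)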

maxit

integer. Maximum number of OEM iterations

tol

convergence tolerance for OEM iterations

irls.maxit

integer. Maximum number of IRLS iterations

irls.tol

convergence tolerance for IRLS iterations. Only used if family != "gaussian"

Value

An object with S3 class "oem"

Examples

set.seed(123)

n.obs  <- 1e4
n.vars <- 100

true.beta <- c(runif(15, -0.25, 0.25), rep(0, n.vars - 15))

x <- matrix(rnorm(n.obs * n.vars), n.obs, n.vars)
y <- rnorm(n.obs, sd = 3) + x %*% true.beta

fit <- oem(x = x, y = y,
           penalty = c("lasso", "elastic.net", "ols", "mcp", "scad",
                       "mcp.net", "scad.net", "grp.lasso", "grp.lasso.net",
                       "grp.mcp", "grp.scad", "sparse.grp.lasso"),
           standardize = FALSE, intercept = FALSE,
           groups = rep(1:20, each = 5))

xtx <- crossprod(x) / n.obs
xty <- crossprod(x, y) / n.obs

fit.xtx <- oem.xtx(xtx = xtx, xty = xty,
                   penalty = c("lasso", "elastic.net", "ols", "mcp", "scad",
                               "mcp.net", "scad.net", "grp.lasso", "grp.lasso.net",
                               "grp.mcp", "grp.scad", "sparse.grp.lasso"),
                   groups = rep(1:20, each = 5))

max(abs(fit$beta[[1]][-1,] - fit.xtx$beta[[1]]))
#> [1] 8.788848e-16
max(abs(fit$beta[[2]][-1,] - fit.xtx$beta[[2]]))
#> [1] 8.788848e-16
layout(matrix(1:2, ncol = 2))
plot(fit.xtx)
plot(fit.xtx, which.model = 2)
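
The beta component of the fitted object holds one coefficient matrix per penalty (variables in rows, lambda values in columns), as used in the comparison above. A quick way to look at part of the path for the first penalty (the column index below is arbitrary):

beta.lasso <- fit.xtx$beta[[1]]   # nvars x nlambda matrix for the first penalty
beta.lasso[1:5, 50]               # first five coefficients at the 50th lambda value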