`validate.subgroup.Rd`

Validates subgroup treatment effects for a fitted subgroup identification model of the class described in Chen et al. (2017)

```r
validate.subgroup(model, B = 50L,
                  method = c("training_test_replication",
                             "boot_bias_correction"),
                  train.fraction = 0.5,
                  benefit.score.quantiles = c(0.1666667, 0.3333333, 0.5,
                                              0.6666667, 0.8333333),
                  parallel = FALSE)
```

| Argument | Description |
|---|---|
| `model` | fitted model object returned by `fit.subgroup()` |
| `B` | integer; number of bootstrap replications or refitting replications |
| `method` | validation method |
| `train.fraction` | fraction (between 0 and 1) of samples to be used for training in training/test replication. Only used for `method = "training_test_replication"` |
| `benefit.score.quantiles` | a vector of quantiles (between 0 and 1) of the benefit score values for which to return bootstrapped information about the subgroups. For example, if one of the quantile values is 0.5, the median value of the benefit scores will be used as a cutoff to determine subgroups, and summary statistics will be returned about these subgroups |
| `parallel` | should the loop over replications be parallelized? If `TRUE`, a parallel backend must be registered beforehand |
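Running with `parallel = TRUE` presupposes a registered parallel backend. A minimal sketch, assuming the replication loop is driven by `foreach` and that the `doParallel` package is available (`subgrp.model` here stands in for a model fitted with `fit.subgroup()`):

```r
library(personalized)
library(doParallel)

# register a parallel backend before calling validate.subgroup()
cl <- makeCluster(2)
registerDoParallel(cl)

## val <- validate.subgroup(subgrp.model, B = 100L,
##                          method = "training_test_replication",
##                          parallel = TRUE)

# release the workers when done
stopCluster(cl)
```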

An object of class `"subgroup_validated"` with the following components:

- `avg.results`: Estimates of average conditional treatment effects when subgroups are determined based on the provided cutoff value for the benefit score. For example, if `cutoff = 0` and there is a treatment and control only, then the treatment is recommended if the benefit score is greater than 0.
- Standard errors of the estimates from `avg.results`
- `boot.results`: Contains the individual results for each replication. `avg.results` is comprised of averages of the values from `boot.results`
- `avg.quantile.results`: Estimates of average conditional treatment effects when subgroups are determined based on different quantile cutoff values for the benefit score. For example, if `benefit.score.quantiles = 0.75` and there is a treatment and control only, then the treatment is recommended if the benefit score is greater than the 0.75 quantile of all benefit scores. If multiple quantile values are provided, e.g. `benefit.score.quantiles = c(0.15, 0.5, 0.85)`, then results will be provided for all quantile levels.
- Standard errors corresponding to `avg.quantile.results`
- `boot.results.quantiles`: Contains the individual results for each replication. `avg.quantile.results` is comprised of averages of the values from `boot.results.quantiles`
- Family of the outcome, e.g. `"gaussian"` for continuous outcomes
- Method used for the subgroup identification model (weighting or A-learning)
- The number of treatment levels
- All treatment levels other than the reference level
- The reference level for the treatment; this should usually be the control group/level
- Whether larger outcomes are preferred for this model
- Benefit score cutoff value used for determining subgroups
- Method used for validation
- Number of replications used in the validation process
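The averaged components and their per-replication counterparts can be inspected directly from the returned object. A short sketch using the component names referenced above (`valmod` is a hypothetical object returned by `validate.subgroup()`):

```r
# average validation results across all replications
valmod$avg.results

# raw results for each individual replication; the averages
# above are computed from these
valmod$boot.results

# results when subgroups are instead formed by quantile cutoffs of
# the benefit score (e.g. the 0.5 entry uses the median as cutoff)
valmod$avg.quantile.results
```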

Estimates of various quantities conditional on subgroups and treatment statuses are provided and displayed via the `print.subgroup_validated` function:

1. "Conditional expected outcomes": The first results shown when printing a `subgroup_validated` object are estimates of the expected outcomes conditional on the estimated subgroups (i.e. which subgroup is 'recommended' by the model) and conditional on treatment/intervention status. If there are two total treatment options, this results in a 2x2 table of expected conditional outcomes.
2. "Treatment effects conditional on subgroups": The second results shown are estimates of the expected outcomes conditional on the estimated subgroups. If the treatment takes levels \(j \in \{1, \dots, K\}\), a total of \(K\) conditional treatment effects will be shown. For example, if the outcome is continuous, the \(j\)th conditional treatment effect is defined as \(E(Y|Trt = j, Subgroup = j) - E(Y|Trt = j, Subgroup \neq j)\), where \(Subgroup = j\) if treatment \(j\) is recommended, i.e. treatment \(j\) results in the largest/best expected potential outcomes given the fitted model.
3. "Overall treatment effect conditional on subgroups": The third quantity displayed shows the overall improvement in outcomes resulting from all treatment recommendations. This is essentially an average over all of the conditional treatment effects, weighted by the proportion of the population recommended each respective treatment level.
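As an illustration of the third quantity, the overall effect can be sketched as a weighted average of the conditional treatment effects, weighted by the proportion of the population recommended each treatment level. The numbers below are purely hypothetical:

```r
# hypothetical conditional treatment effects for K = 2 treatment levels
cond.effects <- c("0" = 20, "1" = 18)

# hypothetical proportion of the population recommended each level
prop.recommended <- c("0" = 0.4, "1" = 0.6)

# overall effect: conditional effects averaged with the
# recommendation proportions as weights
overall.effect <- sum(cond.effects * prop.recommended)
overall.effect
```

Note that this is only a conceptual sketch; the exact weighting used internally by `validate.subgroup()` may differ (e.g. it may incorporate the propensity weights).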

Chen, S., Tian, L., Cai, T. and Yu, M. (2017), A general statistical framework for subgroup identification and comparative treatment scoring. Biometrics. doi:10.1111/biom.12676

Harrell, F. E., Lee, K. L., and Mark, D. B. (1996). Tutorial in biostatistics multivariable prognostic models: issues in developing models, evaluating assumptions and adequacy, and measuring and reducing errors. Statistics in medicine, 15, 361-387. doi:10.1002/(SICI)1097-0258(19960229)15:4<361::AID-SIM168>3.0.CO;2-4

`fit.subgroup` for the function which fits subgroup identification models, `plot.subgroup_validated` for plotting validation results, and `print.subgroup_validated` for printing options for objects returned by `validate.subgroup()`.

```r
library(personalized)

set.seed(123)
n.obs  <- 500
n.vars <- 20
x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars)

# simulate non-randomized treatment
xbetat   <- 0.5 + 0.5 * x[,11] - 0.5 * x[,13]
trt.prob <- exp(xbetat) / (1 + exp(xbetat))
trt01    <- rbinom(n.obs, 1, prob = trt.prob)
trt      <- 2 * trt01 - 1

# simulate response
delta <- 2 * (0.5 + x[,2] - x[,3] - x[,11] + x[,1] * x[,12])
xbeta <- x[,1] + x[,11] - 2 * x[,12]^2 + x[,13]
xbeta <- xbeta + delta * trt

# continuous outcomes
y <- drop(xbeta) + rnorm(n.obs, sd = 2)

# create function for fitting propensity score model
prop.func <- function(x, trt)
{
    # fit propensity score model
    propens.model <- cv.glmnet(y = trt, x = x, family = "binomial")
    pi.x <- predict(propens.model, s = "lambda.min",
                    newx = x, type = "response")[,1]
    pi.x
}

subgrp.model <- fit.subgroup(x = x, y = y,
                             trt = trt01,
                             propensity.func = prop.func,
                             loss   = "sq_loss_lasso",
                             nfolds = 5)  # option for cv.glmnet

subgrp.model$subgroup.trt.effects
#> $subgroup.effects
#> Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom]
#>                                   19.75833
#> Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom]
#>                                   18.26168
#>
#> $avg.outcomes
#>            Recommended 0 Recommended 1
#> Received 0     -3.821617     -31.30368
#> Received 1    -23.579944     -13.04201
#>
#> $sample.sizes
#>            Recommended 0 Recommended 1
#> Received 0            50           170
#> Received 1           159           121
#>
#> $overall.subgroup.effect
#> [1] 19.16871

x.test <- matrix(rnorm(10 * n.obs * n.vars, sd = 3), 10 * n.obs, n.vars)

# simulate non-randomized treatment
xbetat.test   <- 0.5 + 0.5 * x.test[,11] - 0.5 * x.test[,13]
trt.prob.test <- exp(xbetat.test) / (1 + exp(xbetat.test))
trt01.test    <- rbinom(10 * n.obs, 1, prob = trt.prob.test)
trt.test      <- 2 * trt01.test - 1

# simulate response
delta.test <- 2 * (0.5 + x.test[,2] - x.test[,3] - x.test[,11] +
                   x.test[,1] * x.test[,12])
xbeta.test <- x.test[,1] + x.test[,11] - 2 * x.test[,12]^2 + x.test[,13]
xbeta.test <- xbeta.test + delta.test * trt.test
y.test <- drop(xbeta.test) + rnorm(10 * n.obs, sd = 2)

valmod <- validate.subgroup(subgrp.model, B = 3,
                            method = "training_test",
                            train.fraction = 0.75)
valmod
#> family:  gaussian
#> loss:    sq_loss_lasso
#> method:  weighting
#>
#> validation method:  training_test_replication
#> cutpoint:           0
#> replications:       3
#>
#> benefit score: f(x),
#> Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
#>
#> Average Test Set Outcomes:
#>                                Recommended 0
#> Received 0  -11.044 (SE = 4.6483, n = 12.3333)
#> Received 1 -18.7795 (SE = 2.7159, n = 36.6667)
#>                                Recommended 1
#> Received 0 -19.6311 (SE = 6.9205, n = 42.3333)
#> Received 1 -15.8673 (SE = 4.6548, n = 33.6667)
#>
#> Treatment effects conditional on subgroups:
#> Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom]
#>               7.7355 (SE = 1.9336, n = 49)
#> Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom]
#>              3.7638 (SE = 10.6273, n = 76)
#>
#> Est of
#> E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
#> 4.6136 (SE = 6.9873)

print(valmod, which.quant = c(4, 5))
#> family:  gaussian
#> loss:    sq_loss_lasso
#> method:  weighting
#>
#> validation method:  training_test_replication
#> cutpoint:           Quant_67
#> replications:       3
#>
#> benefit score: f(x),
#> Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
#>
#> Average Test Set Outcomes:
#>                           Recommended 0                          Recommended 1
#> Received 0 -11.1693 (SE = 4.7473, n = 28) -28.0489 (SE = 2.0818, n = 26.6667)
#> Received 1 -16.8603 (SE = 2.6588, n = 56) -18.6776 (SE = 8.966, n = 14.3333)
#>
#> Treatment effects conditional on subgroups:
#> Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom]
#>                5.691 (SE = 2.6139, n = 84)
#> Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom]
#>               9.3713 (SE = 9.1035, n = 41)
#>
#> Est of E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
#> 8.5319 (SE = 0.8638)
#>
#> <===============================================>
#>
#> family:  gaussian
#> loss:    sq_loss_lasso
#> method:  weighting
#>
#> validation method:  training_test_replication
#> cutpoint:           Quant_83
#> replications:       3
#>
#> benefit score: f(x),
#> Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
#>
#> Average Test Set Outcomes:
#>                           Recommended 0                          Recommended 1
#> Received 0  -12.1189 (SE = 3.93, n = 38) -36.5137 (SE = 7.4511, n = 16.6667)
#> Received 1 -18.2239 (SE = 2.7948, n = 65) -12.6309 (SE = 17.4583, n = 5.3333)
#>
#> Treatment effects conditional on subgroups:
#> Est of E[Y|T=0,T=Recom]-E[Y|T=/=0,T=Recom]
#>               6.1051 (SE = 2.6341, n = 103)
#> Est of E[Y|T=1,T=Recom]-E[Y|T=/=1,T=Recom]
#>              23.8828 (SE = 23.9782, n = 22)
#>
#> Est of E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
#> 10.1785 (SE = 0.4182)
```