---
title: "An Introduction to hdnom"
author: "Nan Xiao <<https://nanx.me>> <br> Miaozhu Li <<http://miaozhu.li>>"
date: "`r Sys.Date()`"
output:
  rmarkdown::html_document:
    toc: true
    toc_float: true
    toc_depth: 4
    number_sections: true
    theme: "flatly"
    highlight: "textmate"
vignette: >
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteIndexEntry{An Introduction to hdnom}
---
# Introduction
Modeling high-dimensional clinical data with survival outcomes is a
challenging task.
Owing to their simplicity and efficiency, penalized Cox models are
particularly useful for accomplishing such tasks.
`hdnom` streamlines the workflow of high-dimensional Cox model building,
nomogram plotting, model validation, calibration, and comparison.
# Build Survival Models
To build a penalized Cox model with good predictive performance,
some parameter tuning is usually needed.
For example, the elastic-net model requires tuning the $\ell_1$-$\ell_2$
penalty trade-off parameter $\alpha$ and the regularization parameter
$\lambda$.
To free users from this tedious and error-prone parameter tuning process,
`hdnom` provides several functions for automatic parameter tuning and
model selection, including the following model types:
+------------------+---------------------------+-------------------------------+
| Function name | Model type | Auto-tuned hyperparameters |
+==================+===========================+===============================+
| `fit_lasso()` | Lasso | $\lambda$ |
+------------------+---------------------------+-------------------------------+
| `fit_alasso()` | Adaptive lasso | $\lambda$ |
+------------------+---------------------------+-------------------------------+
| `fit_enet()` | Elastic-net | $\lambda$, $\alpha$ |
+------------------+---------------------------+-------------------------------+
| `fit_aenet()` | Adaptive elastic-net | $\lambda$, $\alpha$ |
+------------------+---------------------------+-------------------------------+
| `fit_mcp()` | MCP | $\gamma$, $\lambda$ |
+------------------+---------------------------+-------------------------------+
| `fit_mnet()` | Mnet (MCP + $\ell_2$) | $\gamma$, $\lambda$, $\alpha$ |
+------------------+---------------------------+-------------------------------+
| `fit_scad()` | SCAD | $\gamma$, $\lambda$ |
+------------------+---------------------------+-------------------------------+
| `fit_snet()` | Snet (SCAD + $\ell_2$) | $\gamma$, $\lambda$, $\alpha$ |
+------------------+---------------------------+-------------------------------+
| `fit_flasso()` | Fused lasso | $\lambda_1$, $\lambda_2$ |
+------------------+---------------------------+-------------------------------+
Next, we will use the imputed SMART study data
to demonstrate a complete process of model building, nomogram plotting,
model validation, calibration, and comparison with `hdnom`.
Load the packages and the `smart` dataset:
```{r}
library("hdnom")
```
```{r}
data("smart")
x <- as.matrix(smart[, -c(1, 2)])
time <- smart$TEVENT
event <- smart$EVENT
y <- survival::Surv(time, event)
```
The dataset contains 3873 observations with the corresponding survival outcomes
(`time`, `event`), and 27 clinical variables (`x`) available as predictors.
See `?smart` for a detailed explanation of the variables.
Fit a penalized Cox model by adaptive elastic-net regularization with `fit_aenet()`, enabling parallel parameter tuning:
```{r, eval = FALSE}
suppressMessages(library("doParallel"))
registerDoParallel(detectCores())
fit <- fit_aenet(x, y, nfolds = 10, rule = "lambda.1se", seed = c(5, 7), parallel = TRUE)
names(fit)
```
```
## [1] "model" "alpha" "lambda" "model_init" "alpha_init"
## [6] "lambda_init" "pen_factor" "type" "seed" "call"
```
```{r, echo = FALSE}
fit <- readRDS("fit.rds")
```
Adaptive elastic-net includes two estimation steps. The random seed used
for parameter tuning, the selected best $\alpha$, the selected best $\lambda$,
the model fitted for each estimation step, and the penalty factor for the model
coefficients in the second estimation step are all stored in the model object
`fit`.
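The other model types listed in the table above share a similar interface.
As a minimal sketch (the exact argument defaults may differ; see `?fit_lasso`),
a lasso model could be fitted like this:
```{r, eval = FALSE}
# a sketch: fit a lasso Cox model; lambda is tuned by cross-validation
fit_lasso_example <- fit_lasso(
  x, y,
  nfolds = 10, rule = "lambda.1se", seed = 11
)
```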
# Nomogram Visualization
Before plotting the nomogram, we extract the necessary information from the
fitted model, namely the model object and the selected hyperparameters:
```{r}
model <- fit$model
alpha <- fit$alpha
lambda <- fit$lambda
adapen <- fit$pen_factor
```
Let's generate a nomogram object with `as_nomogram()` and plot it:
```{r, fig.width = 8, fig.height = 8, out.width = 600, out.height = 600}
nom <- as_nomogram(
fit, x, time, event,
pred.at = 365 * 2,
funlabel = "2-Year Overall Survival Probability"
)
plot(nom)
```
According to the nomogram, the adaptive elastic-net model selected 6 variables
from the original set of 27 variables, effectively reducing the model complexity.
Information about the nomogram itself, such as the point-linear predictor
unit mapping and total points-survival probability mapping, can be viewed
by printing the `nom` object directly.
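For example:
```{r, eval = FALSE}
# inspect the point mappings underlying the nomogram
print(nom)
```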
# Model Validation
It is a common practice to utilize resampling-based methods to validate the
predictive performance of a penalized Cox model.
Bootstrap, $k$-fold cross-validation, and repeated $k$-fold cross-validation
are the most commonly used methods for this purpose.
`hdnom` supports both internal model validation and external model validation.
Internal validation takes the dataset used to build the model and
evaluates the predictive performance on the data internally
with the above resampling-based methods, while external validation
evaluates the model's predictive performance on a dataset that is
independent of the dataset used for model building.
## Internal Validation
`validate()` allows us to assess the model performance internally by
time-dependent AUC (Area Under the ROC Curve) with the above three
resampling methods.
Here, we validate the performance of the adaptive elastic-net model
with bootstrap resampling, at every half year from the first year
to the fifth year:
```{r}
val_int <- validate(
x, time, event,
model.type = "aenet",
alpha = alpha, lambda = lambda, pen.factor = adapen,
method = "bootstrap", boot.times = 10,
tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
seed = 42, trace = FALSE
)
print(val_int)
summary(val_int)
```
The mean, median, 25%, and 75% quantiles of time-dependent AUC at each
time point across all bootstrap predictions are listed above.
The median and the mean can be considered as bias-corrected estimates
of the model performance.
It is also possible to plot the model validation result:
```{r, fig.width = 8, fig.height = 8, out.width = 500, out.height = 500}
plot(val_int)
```
The solid line represents the mean of the AUC, and the dashed line represents
the median of the AUC. The darker interval in the plot shows
the 25% and 75% quantiles of the AUC, while the lighter interval shows
the minimum and maximum of the AUC.
It seems that the bootstrap-based validation result is stable:
the median and the mean value at each evaluation time point are close;
the 25% and 75% quantiles are also close to the median at each time point.
Bootstrap-based validation often gives relatively stable results.
Many of the established nomograms in clinical oncology research are
validated by bootstrap methods. $K$-fold cross-validation provides a
stricter evaluation scheme than the bootstrap. Repeated cross-validation gives
results similar to $k$-fold cross-validation, and is usually more robust.
These two methods are used more often in the machine learning community.
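For instance, the same validation could be run with 5-fold cross-validation
instead of the bootstrap. This is a sketch, assuming `method = "cv"` with
`nfolds` as documented in `?hdnom::validate`:
```{r, eval = FALSE}
# validate with 5-fold cross-validation instead of the bootstrap
val_cv <- validate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "cv", nfolds = 5,
  tauc.type = "UNO", tauc.time = seq(1, 5, 0.5) * 365,
  seed = 42, trace = FALSE
)
summary(val_cv)
```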
Check `?hdnom::validate` for more examples about internal model validation.
## External Validation
Now we have an internally validated model. To perform external validation,
we usually need an independent dataset (preferably collected in other studies)
that has the same variables as the dataset used to build the model.
For penalized Cox models, the external dataset should have at least
the same variables that have been selected in the model.
For demonstration purposes, here we draw 1000 samples from the `smart` data
and _assume_ that they form an external validation dataset, then
use `validate_external()` to perform external validation:
```{r, fig.width = 8, fig.height = 8, out.width = 500, out.height = 500}
x_new <- as.matrix(smart[, -c(1, 2)])[1001:2000, ]
time_new <- smart$TEVENT[1001:2000]
event_new <- smart$EVENT[1001:2000]
val_ext <- validate_external(
fit, x, time, event,
x_new, time_new, event_new,
tauc.type = "UNO",
tauc.time = seq(0.25, 2, 0.25) * 365
)
print(val_ext)
summary(val_ext)
plot(val_ext)
```
The time-dependent AUC on the external dataset is shown above.
# Model Calibration
Measuring how far the model predictions are from actual survival outcomes
is known as _calibration_. Calibration can be assessed by plotting the
predicted probabilities from the model versus actual survival probabilities.
Similar to model validation, both internal model calibration and
external model calibration are supported in `hdnom`.
## Internal Calibration
`calibrate()` provides non-resampling and resampling
methods for internal model calibration, including direct fitting,
bootstrap resampling, $k$-fold cross-validation, and repeated cross-validation.
For example, to calibrate the model internally with the bootstrap method:
```{r}
cal_int <- calibrate(
x, time, event,
model.type = "aenet",
alpha = alpha, lambda = lambda, pen.factor = adapen,
method = "bootstrap", boot.times = 10,
pred.at = 365 * 5, ngroup = 3,
seed = 42, trace = FALSE
)
print(cal_int)
summary(cal_int)
```
We split the samples into three risk groups. In practice, the number of
risk groups is decided by the users according to their needs.
The model calibration results (the median of the predicted survival probability;
the median of the observed survival probability estimated by the Kaplan-Meier
method with 95% CI) are summarized above.
Plot the calibration result:
```{r, fig.width = 8, fig.height = 8, out.width = 500, out.height = 500}
plot(cal_int, xlim = c(0.5, 1), ylim = c(0.5, 1))
```
In practice, you may want to perform calibration for multiple time points
separately, and put the plots together in one figure.
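As a sketch, and assuming the plot method returns a ggplot object (so the
plots can be combined with a package such as gridExtra), calibrating at two
time points might look like this:
```{r, eval = FALSE}
# a sketch: calibrate at 3 and 5 years, then arrange the two
# calibration plots side by side (assumes the plots are ggplot objects)
cal_3y <- calibrate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  pred.at = 365 * 3, ngroup = 3, seed = 42, trace = FALSE
)
cal_5y <- calibrate(
  x, time, event,
  model.type = "aenet",
  alpha = alpha, lambda = lambda, pen.factor = adapen,
  method = "bootstrap", boot.times = 10,
  pred.at = 365 * 5, ngroup = 3, seed = 42, trace = FALSE
)
gridExtra::grid.arrange(
  plot(cal_3y, xlim = c(0.5, 1), ylim = c(0.5, 1)),
  plot(cal_5y, xlim = c(0.5, 1), ylim = c(0.5, 1)),
  ncol = 2
)
```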
See `?hdnom::calibrate` for more examples about internal model calibration.
## External Calibration
To perform external calibration with an external dataset, use `calibrate_external()`:
```{r, fig.width = 8, fig.height = 8, out.width = 500, out.height = 500}
cal_ext <- calibrate_external(
fit, x, time, event,
x_new, time_new, event_new,
pred.at = 365 * 5, ngroup = 3
)
print(cal_ext)
summary(cal_ext)
plot(cal_ext, xlim = c(0.5, 1), ylim = c(0.5, 1))
```
The external calibration results have the same interpretation as the
internal calibration results, except that external calibration
is performed on the external dataset.
## Kaplan-Meier Analysis for Risk Groups
Internal calibration and external calibration both classify the testing set
into different risk groups. For internal calibration, the testing set consists
of all the samples in the dataset used to build the model; for external
calibration, the testing set consists of the samples from the external dataset.
We can further analyze the differences in survival time for different risk
groups with Kaplan-Meier survival curves and a number at risk table.
For example, here we plot the Kaplan-Meier survival curves and evaluate
the number at risk from one year to six years for the three risk groups,
with the function `kmplot()`:
```{r, fig.width = 9, fig.height = 6, out.width = 600, out.height = 400}
kmplot(
cal_int,
group.name = c("High risk", "Medium risk", "Low risk"),
time.at = 1:6 * 365
)
kmplot(
cal_ext,
group.name = c("High risk", "Medium risk", "Low risk"),
time.at = 1:6 * 365
)
```
The $p$-value of the log-rank test is also shown in the plot.
## Log-Rank Test for Risk Groups
To compare the differences between the survival curves, the log-rank test
is often applied. `logrank_test()` performs such tests on the internal
and external calibration results:
```{r}
cal_int_logrank <- logrank_test(cal_int)
cal_int_logrank
cal_int_logrank$pval
cal_ext_logrank <- logrank_test(cal_ext)
cal_ext_logrank
cal_ext_logrank$pval
```
The exact $p$-values for log-rank tests are stored as `cal_int_logrank$pval`
and `cal_ext_logrank$pval`. Here $p < 0.001$ indicates significant differences
between the survival curves for different risk groups.
# Model Comparison
Given all the available model types, it is a natural question to ask: which type
of model performs the best for my data? Such questions about model type
selection can be answered by built-in model comparison functions in `hdnom`.
## Model Comparison by Validation
We can compare the model performance using time-dependent AUC by
the same (internal) model validation approach as before.
For example, here we compare lasso and adaptive lasso by
5-fold cross-validation:
```{r, fig.width = 8, fig.height = 6.4, out.width = 600, out.height = 480}
cmp_val <- compare_by_validate(
x, time, event,
model.type = c("lasso", "alasso"),
method = "cv", nfolds = 5, tauc.type = "UNO",
tauc.time = seq(0.25, 2, 0.25) * 365,
seed = 42, trace = FALSE
)
print(cmp_val)
summary(cmp_val)
plot(cmp_val)
plot(cmp_val, interval = TRUE)
```
The solid line, dashed line, and intervals have the same interpretation
as above. For this comparison, there seems to be no substantial difference
(AUC difference $< 5\%$) between lasso and adaptive lasso in predictive
performance: lasso performs slightly better for the first three time points,
while adaptive lasso performs slightly better for the last few time points.
The model comparison functions in `hdnom` have a minimal input
design so you do not have to set the parameters for each model type manually.
The functions will try to determine the best parameter settings
automatically for each model type to achieve the best performance.
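For example, more model types could be included in a single comparison.
This is a sketch, assuming the model type names follow the `fit_*()`
suffixes in the table above (comparing more types increases run time):
```{r, eval = FALSE}
# compare four model types in one call; hyperparameters for each
# model type are tuned automatically
cmp_more <- compare_by_validate(
  x, time, event,
  model.type = c("lasso", "alasso", "enet", "mcp"),
  method = "cv", nfolds = 5, tauc.type = "UNO",
  tauc.time = seq(0.25, 2, 0.25) * 365,
  seed = 42, trace = FALSE
)
plot(cmp_more)
```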
## Model Comparison by Calibration
We can also compare the models by their (internal) calibration
performance. To continue the example, we split the samples into five risk
groups and compare lasso to adaptive lasso via calibration:
```{r, fig.width = 8, fig.height = 6.4, out.width = 600, out.height = 480}
cmp_cal <- compare_by_calibrate(
x, time, event,
model.type = c("lasso", "alasso"),
method = "cv", nfolds = 5,
pred.at = 365 * 9, ngroup = 5,
seed = 42, trace = FALSE
)
print(cmp_cal)
summary(cmp_cal)
plot(cmp_cal, xlim = c(0.3, 1), ylim = c(0.3, 1))
```
The summary output and the plot show the calibration results for each model
type we want to compare. Lasso and adaptive lasso have comparable performance
in this case, since their predicted overall survival probabilities are
both close to the observed survival probabilities to a similar degree.
Adaptive lasso seems to be slightly more stable than lasso in calibration.
# Prediction on New Data
To predict the overall survival probability at certain time points for
new samples with the established models, simply use `predict()` on
the model objects and the new data.
As an example, we will use the samples numbered from 101 to 105 in
the `smart` dataset as the new samples, and predict their overall
survival probability from one year to ten years:
```{r}
predict(fit, x, y, newx = x[101:105, ], pred.at = 1:10 * 365)
```
# Customize Color Palette
The `hdnom` package has four built-in color palettes available for
all of the above plots, inspired by the colors commonly used in scientific journals.
Users can use the `col.pal` argument to select the color palette.
Possible values for this argument are listed in the table below:
+------------+-------------------------------------------------+
| Value | Color palette inspiration |
+============+=================================================+
| `"JCO"` | _Journal of Clinical Oncology_ |
+------------+-------------------------------------------------+
| `"Lancet"` | Lancet journals, such as _Lancet Oncology_ |
+------------+-------------------------------------------------+
| `"NPG"` | NPG journals, such as _Nature Reviews Cancer_ |
+------------+-------------------------------------------------+
| `"AAAS"` | AAAS Journals, such as _Science_ |
+------------+-------------------------------------------------+
By default, `hdnom` will use the JCO color palette (`col.pal = "JCO"`).
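For example, to redraw the earlier validation plot with the Lancet-inspired palette:
```{r, eval = FALSE}
# use the Lancet-inspired color palette instead of the default JCO palette
plot(val_int, col.pal = "Lancet")
```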
# Shiny App
A Shiny app based on `hdnom` is available at [http://hdnom.io](http://hdnom.io).