---
title: "Efficacy and futility boundary update"
author: "Yujie Zhao and Keaven M. Anderson"
output:
  rmarkdown::html_document:
    toc: true
    toc_float: true
    toc_depth: 2
    number_sections: true
    highlight: "textmate"
    css: "custom.css"
    code_folding: hide
bibliography: "gsDesign2.bib"
vignette: >
  %\VignetteIndexEntry{Efficacy and futility boundary update}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```{r, message=FALSE, warning=FALSE}
library(gsDesign2)
library(gt)
```
# Design assumptions
We assume two analyses: an interim analysis (IA) and a final analysis (FA).
The IA is planned 20 months after opening enrollment, followed by the FA at
month 36.
The planned enrollment period spans 14 months, with the first 2 months having
an enrollment rate of 1/3 the final rate, the next 2 months with a rate of 2/3
of the final rate, and the final rate for the remaining 10 months.
To obtain the targeted 90\% power, these rates will be multiplied by a constant.
The control arm is assumed to follow an exponential distribution with a median
of 9 months; the dropout rate is 0.0001 per month regardless of treatment group.
Finally, the experimental treatment group is piecewise exponential with a
3-month delayed treatment effect; that is, HR = 1 for the first 3 months and
HR = 0.6 thereafter.
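In notation (restating the assumptions above; the symbols are ours), the control-arm hazard and the hazard ratio over time $t$ in months are

$$
\lambda_C(t) = \frac{\log 2}{9}, \qquad
\mathrm{HR}(t) =
\begin{cases}
1, & t \le 3, \\
0.6, & t > 3,
\end{cases}
$$

where $\lambda_C(t) = \log 2 / 9$ is the constant hazard corresponding to a 9-month median.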
```{r}
alpha <- 0.025
beta <- 0.1

# Enrollment
enroll_rate <- define_enroll_rate(
  duration = c(2, 2, 10),
  rate = (1:3) / 3
)

# Failure and dropout
fail_rate <- define_fail_rate(
  duration = c(3, Inf),
  fail_rate = log(2) / 9,
  hr = c(1, 0.6),
  dropout_rate = .0001
)

# IA and FA analysis time
analysis_time <- c(20, 36)

# Randomization ratio
ratio <- 1
```
We use the null hypothesis information for boundary crossing probability
calculations under both the null and alternative hypotheses.
This also implies that the null hypothesis information determines the
information fractions used in the spending functions to derive the design.
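With this choice, the information fraction at analysis $k$ is computed from the null hypothesis statistical information (notation is ours; here $K = 2$ for the IA and FA):

$$
t_k = \frac{\mathcal{I}_k^{(0)}}{\mathcal{I}_K^{(0)}}, \quad k = 1, \ldots, K,
$$

where $\mathcal{I}_k^{(0)}$ denotes the statistical information at analysis $k$ under the null hypothesis.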
```{r}
info_scale <- "h0_info"
```
# One-sided design {.tabset}
For the design, we have efficacy bounds at both the IA and FA.
We use the @lan1983discrete spending function with a total alpha of `r alpha`,
which approximates an O'Brien-Fleming bound.
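The Lan-DeMets approximation to the O'Brien-Fleming bound, implemented as `gsDesign::sfLDOF()`, spends cumulative Type I error at information fraction $t$ as

$$
f(t; \alpha) = 2 - 2\,\Phi\!\left(\frac{\Phi^{-1}(1 - \alpha/2)}{\sqrt{t}}\right),
$$

where $\Phi$ is the standard normal distribution function. At $t = 1$ this reduces to $f(1; \alpha) = \alpha$, so the full Type I error is spent by the final analysis.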
```{r}
upper <- gs_spending_bound
upar <- list(sf = gsDesign::sfLDOF, total_spend = alpha, param = NULL)

x <- gs_design_ahr(
  enroll_rate = enroll_rate,
  fail_rate = fail_rate,
  alpha = alpha,
  beta = beta,
  info_frac = NULL,
  info_scale = "h0_info",
  analysis_time = analysis_time,
  ratio = ratio,
  upper = gs_spending_bound,
  upar = upar,
  test_upper = TRUE,
  lower = gs_b,
  lpar = rep(-Inf, 2),
  test_lower = FALSE
) |> to_integer()
```
The planned design targets:

- Planned events: `r round(x$analysis$event, 0)`
- Planned information fraction for interim and final analysis: `r round(x$analysis$info_frac, 4)`
- Planned alpha spending: `r round(gsDesign::sfLDOF(0.025, x$analysis$info_frac)$spend, 4)`
- Planned efficacy bounds: `r round(x$bound$z[x$bound$bound == "upper"], 4)`

We note that rounding up the final targeted events increases power slightly
over the targeted 90\%.
```{r}
x |>
  summary() |>
  as_gt() |>
  tab_header(title = "Planned design")
```
## At the design stage but with different alpha
At the design stage, we may be asked to report the design under multiple values of $\alpha$, for example, to reflect changes in the multiplicity adjustment. In the planned design, $\alpha$ is 0.025. Assume the updated $\alpha$ is 0.05. The updated design is:
```{r}
gs_update_ahr(
  x = x,
  alpha = 0.05,
  ia_alpha_spending = "at_design_stage",
  fa_alpha_spending = "at_design_stage"
) |>
  summary(
    col_vars = c(
      "analysis", "bound", "z", "~hr at bound",
      "nominal p", "Alternate hypothesis", "Null hypothesis"
    ),
    col_decimals = c(NA, NA, 4, 4, 4, 4, 4)
  ) |>
  as_gt(
    title = "Updated design",
    subtitle = "With updated alpha of 0.05"
  )
```
The updated boundaries above use the planned treatment effect and the planned statistical information under the null hypothesis, since the original design was derived with `info_scale = "h0_info"`.
## At the analysis stage with observed events differing from planned events
We provide a simulation below in which 188 and 295 events are observed at the IA and FA, respectively.
We will assume the differences from the planned events (193, 297) are due to logistical considerations.
We also assume the protocol specifies that the full $\alpha$ will be spent at
the final analysis even in a case like this where there is a shortfall of events
versus the design plan.
The observed data for this example is generated by `simtrial::sim_pw_surv()`.
```{r}
set.seed(123)

observed_data <- simtrial::sim_pw_surv(
  n = x$analysis$n[x$analysis$analysis == 2],
  stratum = data.frame(stratum = "All", p = 1),
  block = c(rep("control", 2), rep("experimental", 2)),
  enroll_rate = x$enroll_rate,
  fail_rate = (fail_rate |> simtrial::to_sim_pw_surv())$fail_rate,
  dropout_rate = (fail_rate |> simtrial::to_sim_pw_surv())$dropout_rate
)

observed_data_ia <- observed_data |> simtrial::cut_data_by_date(analysis_time[1])
observed_data_fa <- observed_data |> simtrial::cut_data_by_date(analysis_time[2])
```
The updated design is:
```{r}
gs_update_ahr(
  x = x,
  ia_alpha_spending = "actual_info_frac",
  fa_alpha_spending = "full_alpha",
  observed_data = list(observed_data_ia, observed_data_fa)
) |>
  summary(
    col_vars = c(
      "analysis", "bound", "z", "~hr at bound",
      "nominal p", "Alternate hypothesis", "Null hypothesis"
    ),
    col_decimals = c(NA, NA, 4, 4, 4, 4, 4)
  ) |>
  as_gt(
    title = "Updated design",
    subtitle = paste0(
      "With observed ", sum(observed_data_ia$event),
      " events at IA and ", sum(observed_data_fa$event),
      " events at FA"
    )
  )
```
# Two-sided asymmetric design, beta-spending with non-binding lower bound {.tabset}
In this section, we investigate a two-sided asymmetric design with
non-binding beta-spending futility bounds. Beta-spending refers to
error spending for the lower-bound crossing probabilities under the
alternative hypothesis. Non-binding means the Type I error computation
assumes the trial continues if the lower bound is crossed, while the
Type II error computation does not.
In the original design, we employ the Lan-DeMets spending function
approximating O'Brien-Fleming bounds [@lan1983discrete] for both the efficacy
and futility bounds.
The total spending for efficacy is `r alpha` and for futility is `r beta`.
In addition, we assume the futility bound is applied only at the IA.
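The beta-spending takes the same Lan-DeMets O'Brien-Fleming form as the efficacy spending shown earlier, with the total Type II error $\beta$ in place of $\alpha$ (notation is ours):

$$
g(t; \beta) = 2 - 2\,\Phi\!\left(\frac{\Phi^{-1}(1 - \beta/2)}{\sqrt{t}}\right),
$$

so with $\beta = 0.1$ and futility testing only at the IA, the Type II error spent at the IA is $g(t_1; 0.1)$, where $t_1$ is the IA information fraction.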
```{r}
upper <- gs_spending_bound
upar <- list(sf = gsDesign::sfLDOF, total_spend = alpha, param = NULL)
lower <- gs_spending_bound
lpar <- list(sf = gsDesign::sfLDOF, total_spend = beta, param = NULL)

x <- gs_design_ahr(
  enroll_rate = enroll_rate,
  fail_rate = fail_rate,
  alpha = alpha,
  beta = beta,
  info_frac = NULL,
  info_scale = "h0_info",
  analysis_time = c(20, 36),
  ratio = ratio,
  upper = gs_spending_bound,
  upar = upar,
  test_upper = TRUE,
  lower = lower,
  lpar = lpar,
  test_lower = c(TRUE, FALSE),
  binding = FALSE
) |> to_integer()
```
In the planned design, we have:

- Planned events: `r round(x$analysis$event, 0)`
- Planned information fraction (timing): `r round(x$analysis$info_frac, 4)`
- Planned alpha spending: `r round(gsDesign::sfLDOF(0.025, x$analysis$info_frac)$spend, 4)`
- Planned efficacy bounds: `r round(x$bound$z[x$bound$bound == "upper"], 4)`
- Planned futility bounds: `r round(x$bound$z[x$bound$bound == "lower"], 4)`

Since we added futility bounds, the sample size and number of events are
larger than in the one-sided example.
```{r}
x |>
  summary() |>
  as_gt() |>
  tab_header(title = "Planned design")
```
## At the design stage but with different alpha
At the design stage, we may be asked to report the design under multiple values of $\alpha$, for example, to reflect changes in the multiplicity adjustment. In the planned design, $\alpha$ is 0.025. Assume the updated $\alpha$ is 0.05. The updated design is:
```{r}
gs_update_ahr(
  x = x,
  alpha = 0.05,
  ia_alpha_spending = "at_design_stage",
  fa_alpha_spending = "at_design_stage"
) |>
  summary(
    col_vars = c(
      "analysis", "bound", "z", "~hr at bound",
      "nominal p", "Alternate hypothesis", "Null hypothesis"
    ),
    col_decimals = c(NA, NA, 4, 4, 4, 4, 4)
  ) |>
  as_gt(
    title = "Updated design",
    subtitle = "With updated alpha of 0.05"
  )
```
## At the analysis stage with observed events differing from planned events
For simplicity of presentation, we assume the observed events are the same as those in the one-sided design.
The updated design is:
```{r}
gs_update_ahr(
  x = x,
  ia_alpha_spending = "actual_info_frac",
  fa_alpha_spending = "full_alpha",
  observed_data = list(observed_data_ia, observed_data_fa)
) |>
  summary(
    col_vars = c(
      "analysis", "bound", "z", "~hr at bound",
      "nominal p", "Alternate hypothesis", "Null hypothesis"
    ),
    col_decimals = c(NA, NA, 4, 4, 4, 4, 4)
  ) |>
  as_gt(
    title = "Updated design",
    subtitle = paste0(
      "With observed ", sum(observed_data_ia$event),
      " events at IA and ", sum(observed_data_fa$event),
      " events at FA"
    )
  )
```
# References