---
title: "Computing Bounds Under Non-Constant Treatment Effect"
author: "Keaven Anderson"
output: rmarkdown::html_vignette
bibliography: gsDesign.bib
vignette: >
  %\VignetteIndexEntry{Computing Bounds Under Non-Constant Treatment Effect}
  %\VignetteEngine{knitr::rmarkdown}
  %\VignetteEncoding{UTF-8}
---
```{r setup, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>"
)
```
## Overview
We consider one- and two-sided hypothesis testing using a group sequential design with possibly non-constant treatment effect.
This can be useful for situations such as an assumed non-proportional hazards model or the use of weighted logrank tests for a time-to-event endpoint.
Asymptotic distributional assumptions for this document have been laid out in the vignette *Non-Proportional Effect Size in Group Sequential Design.*
In general, we assume $K\ge 1$ analyses with statistical information $\mathcal{I}_k$ and information fraction $t_k=\mathcal{I}_k/\mathcal{I}_K$ at analysis $k$, $1\le k\le K$.
We denote the null hypothesis $H_{0}$: $\theta(t)=0$ and an alternate hypothesis $H_1$: $\theta(t)=\theta_1(t)$ for $t> 0$ where $t$ represents the information fraction for a study.
While a study is planned to stop at information fraction $t=1$, we define $\theta(t)$ for $t>0$ since a trial can overrun its planned statistical information at the final analysis.
As before, we use shorthand notation in which $\theta$ represents $\theta(\cdot)$, $\theta=0$ represents
$\theta(t)\equiv 0$ for all $t$, and $\theta_1$ represents $\theta_1(t_k)$, the effect size at analysis $k$, $1\le k\le K$.
For our purposes, $H_0$ will represent no treatment difference, but it could represent a non-inferiority hypothesis.
Recall that we assume $K$ analyses and bounds $-\infty \le a_k< b_k\le \infty$ for $1\le k < K$ and $-\infty \le a_K\le b_K<\infty$.
We denote the probability of crossing the upper boundary at analysis $k$ without previously crossing a bound by
$$\alpha_{k}(\theta)=P_{\theta}(\{Z_{k}\geq b_{k}\}\cap_{j=1}^{k-1}\{a_{j}\le Z_{j}< b_{j}\}),$$
$k=1,2,\ldots,K.$
The total probability of crossing an upper bound prior to crossing a lower bound is denoted by
$$\alpha(\theta)\equiv\sum_{k=1}^K\alpha_k(\theta).$$
We denote the probability of crossing a lower bound at analysis $k$ without previously crossing any bound by
$$\beta_{k}(\theta)=P_{\theta}(\{Z_{k}< a_{k}\}\cap_{j=1}^{k-1}\{a_{j}\le Z_{j}< b_{j}\}).$$
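
To make these definitions concrete, the sketch below evaluates $\alpha_k(\theta)$ and $\beta_k(\theta)$ by multivariate normal integration using the canonical correlation structure $\text{Corr}(Z_j,Z_k)=\sqrt{t_j/t_k}$, $j\le k$, assumed in the distributional vignette noted above. The number of analyses, bounds, effect sizes, and information values are hypothetical, and the mvtnorm package is used here purely for illustration.

```{r, eval = FALSE}
library(mvtnorm)

# Illustrative setup: K = 3 analyses with canonical correlation sqrt(t_j / t_k)
t <- c(1 / 3, 2 / 3, 1) # information fractions t_k
corr <- outer(t, t, function(x, y) sqrt(pmin(x, y) / pmax(x, y)))

# alpha_k(theta): cross the upper bound at analysis k, no earlier crossing;
# theta is the vector theta(t_k), info_K the final total information, so
# E[Z_k] = theta(t_k) * sqrt(I_k) with I_k = t_k * I_K
alpha_k <- function(k, a, b, theta = rep(0, length(t)), info_K = 1) {
  pmvnorm(
    lower = c(a[seq_len(k - 1)], b[k]),
    upper = c(b[seq_len(k - 1)], Inf),
    mean = theta[1:k] * sqrt(t[1:k] * info_K),
    sigma = corr[1:k, 1:k, drop = FALSE]
  )
}

# beta_k(theta): cross the lower bound at analysis k, no earlier crossing
beta_k <- function(k, a, b, theta = rep(0, length(t)), info_K = 1) {
  pmvnorm(
    lower = c(a[seq_len(k - 1)], -Inf),
    upper = c(b[seq_len(k - 1)], a[k]),
    mean = theta[1:k] * sqrt(t[1:k] * info_K),
    sigma = corr[1:k, 1:k, drop = FALSE]
  )
}

# alpha(0) for illustrative bounds: total upper boundary crossing probability
a <- c(-1, 0, 1.5)
b <- c(3, 2.5, 2)
sum(sapply(1:3, alpha_k, a = a, b = b))
```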
Efficacy bounds $b_k$, $1\le k\le K$, for a group sequential design will be derived to control Type I error at some level $\alpha=\alpha(0)$.
Lower bounds $a_k$, $1\le k\le K$, may be used to control boundary crossing probabilities under the null hypothesis (2-sided testing), the alternate hypothesis, or some other hypothesis (futility testing).
Thus, we may consider up to 3 values of $\theta(t)$:
- under the null hypothesis $\theta_0(t)=0$ for computing efficacy bounds,
- under a value $\theta_1(t)$ for computing lower bounds, and
- under a value $\theta_a(t)$ for computing sample size or power.
We refer to the information under these 3 assumptions as $\mathcal{I}^{(0)}(t)$, $\mathcal{I}^{(1)}(t)$, and $\mathcal{I}^{(a)}(t)$, respectively. Often we will assume
$\mathcal{I}(t)=\mathcal{I}^{(0)}(t)=\mathcal{I}^{(1)}(t)=\mathcal{I}^{(a)}(t).$
We note that information may differ under different values of $\theta(t)$.
For fixed designs, @LachinBook computes sample size based on different variances under the null and alternate hypotheses.
## Two-sided testing and design
We denote an alternative $H_{1}$: $\theta(t)=\theta_1(t)$; we will always assume $H_1$ for power calculations and sometimes will use $H_1$ for controlling lower boundary $a_k$ crossing probabilities.
A value of $\theta(t)>0$ will reflect a positive benefit.
We will not restrict the alternate hypothesis to $\theta_1(t)>0$ for all $t$.
The value of $\theta(t)$ will be referred to as the (standardized) treatment effect at information fraction $t$.
We assume there is interest in stopping early if there is good evidence to reject one hypothesis in favor of the other.
If $a_k= -\infty$ at analysis $k$ for some $1\le k\le K$ then the alternate hypothesis cannot be rejected at analysis $k$; i.e., there is no futility bound at analysis $k$.
For $k=1,2,\ldots,K$, the trial is stopped at analysis $k$ to reject $H_0$ if $a_j\le Z_j< b_j$, $j=1,2,\ldots,k-1$, and $Z_k\geq b_k$.
If the trial continues to stage $k$ without crossing a bound and $Z_k< a_k$, then $H_1$ is rejected in favor of $H_0$, $k=1,2,\ldots,K$.
Note that if $a_K< b_K$ there is the possibility of completing the trial without rejecting $H_0$ or $H_1$.
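
A minimal sketch of this decision rule follows; the function name `gs_decision`, the bounds, and the interim Z-values are all hypothetical.

```{r, eval = FALSE}
# Apply the group sequential decision rule to observed Z-values
gs_decision <- function(z, a, b) {
  K <- length(z)
  for (k in seq_len(K)) {
    if (z[k] >= b[k]) return(list(analysis = k, decision = "reject H0"))
    if (z[k] < a[k]) return(list(analysis = k, decision = "reject H1"))
  }
  # Reaching here means a_K <= Z_K < b_K, which is possible only when
  # a_K < b_K; neither hypothesis is rejected
  list(analysis = K, decision = "reject neither")
}

gs_decision(z = c(1.2, 2.8), a = c(0, 1.3), b = c(3, 2.2))
```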
### Haybittle-Peto and spending bounds
The recursive algorithm of the previous section allows computation of both spending bounds and Haybittle-Peto bounds.
For a Haybittle-Peto efficacy bound, one would normally set $b_k=\Phi^{-1}(1-\epsilon)$ for $k=1,2,\ldots,K-1$ and some small $\epsilon>0$ such as $\epsilon= 0.001$, which yields $b_k=3.09$.
While the original proposal was to use $b_K=\Phi^{-1}(1-\alpha)$ at the final analysis, to fully control one-sided Type I error at level $\alpha$ we suggest computing the final bound $b_K$ using the above algorithm so that $\alpha(0)=\alpha$.
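
A sketch of this computation, reusing `t`, `corr`, and `alpha_k()` from the chunk above and assuming non-binding lower bounds ($a_k=-\infty$) with one-sided $\alpha=0.025$ (both assumptions made only for illustration):

```{r, eval = FALSE}
# Haybittle-Peto interim bounds with epsilon = 0.001; solve alpha(0) = alpha
# for the final bound b_K by root finding
alpha <- 0.025
hp_alpha <- function(bK) {
  b <- c(rep(qnorm(1 - 0.001), 2), bK) # b_1 = b_2 = 3.09
  sum(sapply(1:3, alpha_k, a = rep(-Inf, 3), b = b))
}
uniroot(function(bK) hp_alpha(bK) - alpha, c(1, 4))$root
```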
Bounds computed with spending $\alpha_k(0)$ at analysis $k$ can be computed by using equation (9) for $b_1$.
Then for $k=2,\ldots,K$ the algorithm of the previous section is used.
As noted by @JTBook, if $b_1,\ldots,b_K$ are determined under the null hypothesis, they depend only on $t_k$ and $\alpha_k(0)$, with no dependence on $\mathcal{I}_k$, $k=1,\ldots,K$.
When computing bounds based on $\beta_k(\theta)$, $k=1,\ldots,K$, where some $\theta(t_k)\neq 0$ we have an additional dependency with $a_k$ depending not only on $t_k$ and $b_k$, $k=1,\ldots,K$, but also on the final total information $\mathcal{I}_K$.
Thus, a spending bound under something other than the null hypothesis needs to be recomputed each time $\mathcal{I}_K$ changes, whereas it only needs to be computed once when $\theta(t_k)=0$, $k=1,\ldots,K$.
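
To illustrate the spending computation under the null hypothesis, the chunk below uses the Lan-DeMets O'Brien-Fleming-like spending function (one common choice, assumed here rather than prescribed by the text). Under $H_0$, $b_1$ reduces to $\Phi^{-1}(1-\alpha_1(0))$, and each later $b_k$ is found by root finding, reusing `alpha_k()` from the first chunk.

```{r, eval = FALSE}
# Cumulative spending at each t_k, then incremental alpha_k(0)
alpha <- 0.025
spend <- 2 * (1 - pnorm(qnorm(1 - alpha / 2) / sqrt(t)))
alpha_spend <- diff(c(0, spend))

b <- qnorm(1 - alpha_spend[1]) # b_1 solves alpha_1(0) directly under H0
for (k in 2:3) {
  # Find b_k so the incremental upper crossing probability is alpha_k(0)
  f <- function(bk) {
    alpha_k(k, a = rep(-Inf, 3), b = c(b, bk)) - alpha_spend[k]
  }
  b <- c(b, uniroot(f, c(0.5, 6))$root)
}
b # under H0 these depend only on t_k and alpha_k(0), not on I_k
```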
### Bounds based on boundary families
Assume constants $b_1^*,\ldots,b_K^*$ and a total targeted one-sided Type I error $\alpha$.
We wish to find $C_u$ as a function of $t_1,\ldots,t_K$ such that if $b_k=C_ub_k^*$ then $\alpha(0)=\alpha.$
Thus, the problem is to solve for $C_u$. If $a_k$, $k=1,2,\ldots,K$, are fixed, then this is a simple root-finding problem.
Since one normally uses non-binding efficacy bounds, it will typically be the case that $a_k=-\infty$, $k=1,\ldots,K$, for this problem.
Now we assume constants $a_k^*$ and wish to find $C_l$ such that if $a_k=C_la_k^*+\theta(t_k)\sqrt{\mathcal{I}_k}$ for $k=1,\ldots,K$ then
$\beta(\theta)=\beta$. If we use the constant upper bounds from the previous paragraph, finding $C_l$ is a simple root-finding problem.
For 2-sided symmetric bounds with $a_k=-b_k$, $k=1,\ldots,K$, we only need to solve for $C_u$ and again use simple root finding.
At this point, we do not solve for this type of bound for asymmetric upper and lower bounds.
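
As an example of the upper-bound computation, take the Wang-Tsiatis constants $b_k^*=t_k^{\Delta-1/2}$; the family and the value of $\Delta$ are chosen here only for illustration, and `alpha_k()` is reused from the first chunk. Finding $C_l$ would proceed analogously with `beta_k()`.

```{r, eval = FALSE}
# Solve alpha(0) = alpha for the scale factor Cu, with non-binding a_k = -Inf
alpha <- 0.025
Delta <- 0.25 # Wang-Tsiatis shape parameter (illustrative)
b_star <- t^(Delta - 0.5) # boundary family constants b_k^*
alpha0 <- function(Cu) {
  sum(sapply(1:3, alpha_k, a = rep(-Inf, 3), b = Cu * b_star))
}
Cu <- uniroot(function(x) alpha0(x) - alpha, c(1, 4))$root
Cu * b_star # efficacy bounds b_k = Cu * b_k^*
```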
## Sample size
For sample size, we assume $t_k$ and $\theta(t_k)$, $k=1,\ldots,K$, are fixed.
We assume $\beta(\theta)$ is decreasing as $\mathcal{I}_K$ is increasing.
This will automatically be the case when $\theta(t_k)>0$, $k=1,\ldots,K$ and for many other cases.
Thus, the required information is obtained by a search for the $\mathcal{I}_K$ at which $\alpha(\theta)$ yields the targeted power.
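
A sketch of this search, reusing `alpha_k()` and the spending bounds `b` computed above; the non-constant effect $\theta_a(t_k)$ and the 90% power target are hypothetical.

```{r, eval = FALSE}
theta_a <- c(0.2, 0.3, 0.3) # hypothetical theta(t_k), non-constant in t
power_at <- function(info_K) {
  sum(sapply(1:3, alpha_k,
    a = rep(-Inf, 3), b = b, theta = theta_a, info_K = info_K
  ))
}
# Search for the final information I_K at which alpha(theta) hits the target
uniroot(function(iK) power_at(iK) - 0.9, c(1, 500))$root
```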
## References