Merge branch 'dev'
agranholm committed Aug 21, 2023
2 parents 2f8ba58 + 85e1905 commit 9fa43ee
Showing 21 changed files with 60 additions and 35 deletions.
4 changes: 2 additions & 2 deletions DESCRIPTION
@@ -1,7 +1,7 @@
 Package: adaptr
 Title: Adaptive Trial Simulator
-Version: 1.3.1
-Date: 2023-05-02
+Version: 1.3.2
+Date: 2023-08-21
 Authors@R:
     c(person("Anders", "Granholm",
              email = "andersgran@gmail.com",
22 changes: 22 additions & 0 deletions NEWS.md
@@ -1,3 +1,25 @@
+# adaptr 1.3.2
+
+This is a patch release with bug fixes and documentation updates.
+
+* Fixed a bug in `check_performance()` that caused the proportion of
+  conclusive trial simulations (`prob_conclusive`) to be calculated
+  incorrectly when restricted to simulations ending in superiority or with a
+  selected arm according to the selection strategy used in `restrict`. This
+  bug also affected the `summary()` method for multiple simulations (as this
+  relies on `check_performance()`).
+
+* Fixed a bug in `plot_convergence()` that caused arm selection probabilities
+  to be incorrectly calculated and plotted (this bug did not affect any of the
+  other functions for calculating and summarising simulation results).
+
+* Corrections to `plot_convergence()` and `summary()` method documentation for
+  arm selection probability extraction.
+
+* Fixed inconsistency between argument names and documentation in the internal
+  `%f|%` function (renamed arguments for consistency with the internal `%||%`
+  function).
+
 # adaptr 1.3.1
 
 This is a patch release triggered by a CRAN request to fix a failing test that
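To illustrate the `check_performance()` fix described in the NEWS entry above, here is a minimal sketch. It assumes `res` is an existing `trial_results` object returned by `run_trials()`, and that the returned performance data frame has a `metric` column as suggested by the snapshot updates further down; both are assumptions, not shown in this commit.

``` r
# Minimal sketch (assumes `res` is a trial_results object from run_trials();
# object names here are hypothetical).
library(adaptr)

perf_all <- check_performance(res)                         # unrestricted metrics
perf_sup <- check_performance(res, restrict = "superior")  # superiority-only subset

# Within the restricted subset every simulation is conclusive by definition,
# so prob_conclusive should now be 1 (previously it could be underestimated):
perf_sup[perf_sup$metric == "prob_conclusive", ]
```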
4 changes: 2 additions & 2 deletions R/check_performance.R
@@ -246,7 +246,7 @@ check_performance <- function(object, select_strategy = "control if available",
 summarise_num(extr_res$final_n[restrict_idx]),
 summarise_num(extr_res$sum_ys[restrict_idx]),
 summarise_num(extr_res$ratio_ys[restrict_idx]),
-mean(extr_res$final_status != "max"),
+mean(extr_res$final_status[restrict_idx] != "max"),
 mean(extr_res$final_status[restrict_idx] == "superiority"),
 mean(extr_res$final_status[restrict_idx] == "equivalence"),
 mean(extr_res$final_status[restrict_idx] == "futility"),
@@ -300,7 +300,7 @@ check_performance <- function(object, select_strategy = "control if available",
 summarise_num(extr_boot$final_n[restrict_idx]),
 summarise_num(extr_boot$sum_ys[restrict_idx]),
 summarise_num(extr_boot$ratio_ys[restrict_idx]),
-mean(extr_boot$final_status != "max"),
+mean(extr_boot$final_status[restrict_idx] != "max"),
 mean(extr_boot$final_status[restrict_idx] == "superiority"),
 mean(extr_boot$final_status[restrict_idx] == "equivalence"),
 mean(extr_boot$final_status[restrict_idx] == "futility"),
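The one-line change in each hunk applies `restrict_idx` to `final_status`, so `prob_conclusive` is computed within the restricted subset rather than across all simulations. A toy base-R illustration with made-up data (not actual adaptr output):

``` r
# Hypothetical final statuses for five simulated trials:
final_status <- c("superiority", "max", "superiority", "futility", "superiority")
restrict_idx <- final_status == "superiority"  # e.g. restrict = "superior"

mean(final_status != "max")                # 0.8 - old behaviour, ignored the restriction
mean(final_status[restrict_idx] != "max")  # 1.0 - fixed, evaluated within the subset
```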
9 changes: 5 additions & 4 deletions R/plot_convergence.R
@@ -17,9 +17,10 @@
 #'   `ratio_ys_mean`, `ratio_ys_sd`, `ratio_ys_median`, `ratio_ys_p25`,
 #'   `ratio_ys_p75`, `ratio_ys_p0`, `ratio_ys_p100`, `prob_conclusive`,
 #'   `prob_superior`, `prob_equivalence`, `prob_futility`, `prob_max`,
-#'   `prob_select_*` (with `*` being an `arm` name), `rmse`, `rmse_te`, and
-#'   `idp`. All may be specified as above, case sensitive, but with either
-#'   spaces or underlines. Defaults to `"size mean"`.
+#'   `prob_select_*` (with `*` being either "`arm_<name>`" for all `arm` names or
+#'   "`none`"), `rmse`, `rmse_te`, and `idp`. All may be specified as above,
+#'   case sensitive, but with either spaces or underlines. Defaults to
+#'   `"size mean"`.
 #' @param resolution single positive integer, the number of points calculated
 #'   and plotted, defaults to `100` and must be `>= 10`. Higher numbers lead to
 #'   smoother plots, but increases computation time. If the value specified is
@@ -154,7 +155,7 @@ plot_convergence <- function(object, metrics = "size mean", resolution = 100,
 # Get current function
 if (substr(cur_metric, 1, 16) == "prob_select_arm_") {
   cur_arm <- substr(cur_metric, 17, nchar(cur_metric))
-  cur_fun <- function(i) sum(extr_res$selected_arm[start_id:i] == cur_arm, na.rm = TRUE) / n_restrict * 100
+  cur_fun <- function(i) sum(extr_res$selected_arm[start_id:i] == cur_arm, na.rm = TRUE) / length(start_id:i) * 100
 } else {
   cur_fun <- switch(cur_metric,
     size_mean = function(i) mean(extr_res$final_n[start_id:i]),
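With the corrected denominator, the cumulative arm selection probability at simulation `i` uses the number of simulations considered so far (`length(start_id:i)`) rather than a fixed `n_restrict`. A rough sketch of the corrected calculation with made-up data (not actual adaptr output):

``` r
# Made-up selection results (NA = no arm selected); data are hypothetical.
selected_arm <- c("A", "A", "B", NA, "A", "B", "A", "A")
start_id <- 1
cur_arm <- "A"

# Cumulative percentage of simulations selecting cur_arm, as now computed:
cur_fun <- function(i) {
  sum(selected_arm[start_id:i] == cur_arm, na.rm = TRUE) / length(start_id:i) * 100
}
sapply(c(2, 4, 8), cur_fun)  # 100, 50, 62.5 - the denominator grows with i
```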
11 changes: 6 additions & 5 deletions R/summary.R
@@ -22,11 +22,12 @@
 #'     `ratio_ys_sd`, `ratio_ys_median`, `ratio_ys_p25`, `ratio_ys_p75`,
 #'     `ratio_ys_p0`, `ratio_ys_p100`, `prob_conclusive`, `prob_superior`,
 #'     `prob_equivalence`, `prob_futility`, `prob_max`, `prob_select_*` (with
-#'     `*` being all `arm` names), `rmse`, `rmse_te`, and `idp`: performance
-#'     metrics as described in [check_performance()]. Note that all `sum_ys_`
-#'     and `ratio_ys_` measures uses outcome data from all randomised patients,
-#'     regardless of whether they had outcome data available at the last analysis
-#'     or not, as described in [extract_results()].
+#'     `*` being either "`arm_<name>`" for all `arm` names or "`none`"), `rmse`,
+#'     `rmse_te`, and `idp`: performance metrics as described in
+#'     [check_performance()]. Note that all `sum_ys_` and `ratio_ys_` measures
+#'     use outcome data from all randomised patients, regardless of whether they
+#'     had outcome data available at the last analysis or not, as described in
+#'     [extract_results()].
 #'   \item `select_strategy`, `select_last_arm`, `select_preferences`,
 #'     `te_comp`, `raw_ests`, `final_ests`, `restrict`: as specified above.
 #'   \item `control`: the control arm specified by [setup_trial()],
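As clarified in the updated documentation, selection metrics are named `prob_select_arm_<name>` for each arm plus `prob_select_none`. A hedged sketch of how they might be extracted and plotted follows; it assumes `res` is a `trial_results` object with arms named "A" and "B", that `select_strategy = "best"` is an accepted value, and that the performance data frame has a `metric` column (assumptions, not shown in this commit).

``` r
# Sketch only: assumes `res` is a trial_results object with arms "A" and "B".
perf <- check_performance(res, select_strategy = "best")
perf[grepl("^prob_select", perf$metric), ]
#> rows such as prob_select_arm_A, prob_select_arm_B, and prob_select_none

# The same metric names can be requested as convergence metrics:
plot_convergence(res, metrics = c("prob_select_arm_A", "prob_select_none"))
```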
6 changes: 3 additions & 3 deletions R/utils.R
@@ -212,9 +212,9 @@ verify_int <- function(x, min_value = -Inf, max_value = Inf, open = "no") {
 #'
 #' @name replace_nonfinite
 #'
-`%f|%` <- function(x, y) {
-  x[!is.finite(x)] <- y
-  x
+`%f|%` <- function(a, b) {
+  a[!is.finite(a)] <- b
+  a
 }


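For context, `%f|%` (documented under `replace_nonfinite`) replaces non-finite values in its left-hand side with the right-hand side value; only the argument names changed in this commit. Since the helper is internal and not exported, it is restated here purely for illustration:

``` r
# Restated from the diff above (internal adaptr helper, not exported):
`%f|%` <- function(a, b) {
  a[!is.finite(a)] <- b
  a
}

c(1, NA, NaN, Inf, 5) %f|% 0
#> [1] 1 0 0 0 5
```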
4 changes: 2 additions & 2 deletions README.md
@@ -66,7 +66,7 @@ general `setup_trial()` function, or one of the special case functions,
 
 ``` r
 library(adaptr)
-#> Loading 'adaptr' package v1.3.1.
+#> Loading 'adaptr' package v1.3.2.
 #> For instructions, type 'help("adaptr")'
 #> or see https://inceptdk.github.io/adaptr/.
 
@@ -210,7 +210,7 @@ print(res_sum, digits = 1)
 #> * Ideal design percentage: 100.0%
 #>
 #> Simulation details:
-#> * Simulation time: 0.622 secs
+#> * Simulation time: 0.475 secs
 #> * Base random seed: 67890
 #> * Credible interval width: 95%
 #> * Number of posterior draws: 5000
3 changes: 1 addition & 2 deletions cran-comments.md
@@ -1,7 +1,6 @@
 ## Release summary
 
-This is a patch release triggered by a CRAN request to fix a failing test on R
-patched that also includes minor documentation updates.
+This is a patch release with bug fixes and documentation updates.
 
 ## R CMD check results
 
8 binary files changed (not shown).
7 changes: 4 additions & 3 deletions man/plot_convergence.Rd

(generated file; diff not rendered by default)

2 changes: 1 addition & 1 deletion man/replace_nonfinite.Rd

(generated file; diff not rendered by default)

11 changes: 6 additions & 5 deletions man/summary.Rd

(generated file; diff not rendered by default)

8 changes: 4 additions & 4 deletions tests/testthat/_snaps/check_performance.md
@@ -195,7 +195,7 @@
 20 ratio_ys_p75 0.228
 21 ratio_ys_p0 0.216
 22 ratio_ys_p100 0.270
-23 prob_conclusive 0.800
+23 prob_conclusive 1.000
 24 prob_superior 1.000
 25 prob_equivalence 0.000
 26 prob_futility 0.000
@@ -236,7 +236,7 @@
 20 ratio_ys_p75 0.228
 21 ratio_ys_p0 0.216
 22 ratio_ys_p100 0.270
-23 prob_conclusive 0.800
+23 prob_conclusive 1.000
 24 prob_superior 1.000
 25 prob_equivalence 0.000
 26 prob_futility 0.000
@@ -280,7 +280,7 @@
 20 ratio_ys_p75 0.228 0.008 0.004 0.222 0.253
 21 ratio_ys_p0 0.216 NA NA NA NA
 22 ratio_ys_p100 0.270 NA NA NA NA
-23 prob_conclusive 0.800 0.086 0.074 0.650 0.950
+23 prob_conclusive 1.000 0.000 0.000 1.000 1.000
 24 prob_superior 1.000 0.000 0.000 1.000 1.000
 25 prob_equivalence 0.000 0.000 0.000 0.000 0.000
 26 prob_futility 0.000 0.000 0.000 0.000 0.000
@@ -324,7 +324,7 @@
 20 ratio_ys_p75 0.228 0.008 0.004 0.222 0.253
 21 ratio_ys_p0 0.216 NA NA NA NA
 22 ratio_ys_p100 0.270 NA NA NA NA
-23 prob_conclusive 0.800 0.086 0.074 0.650 0.950
+23 prob_conclusive 1.000 0.000 0.000 1.000 1.000
 24 prob_superior 1.000 0.000 0.000 1.000 1.000
 25 prob_equivalence 0.000 0.000 0.000 0.000 0.000
 26 prob_futility 0.000 0.000 0.000 0.000 0.000
4 changes: 2 additions & 2 deletions tests/testthat/_snaps/summary-print.md
@@ -146,7 +146,7 @@
 * Ideal design percentage: 100.0%
 
 Simulation details:
-* Simulation time: 0.829 secs
+* Simulation time: 0.73 secs
 * Base random seed: 12345
 * Credible interval width: 95%
 * Number of posterior draws: 5000
@@ -180,7 +180,7 @@
 * Ideal design percentage: 100.0%
 
 Simulation details:
-* Simulation time: 0.829 secs
+* Simulation time: 0.73 secs
 * Base random seed: 12345
 * Credible interval width: 95%
 * Number of posterior draws: 5000
