
Uncertainty ranges for low, md, and high values are not correctly created (at least in my example) #35

Open
billdenney opened this issue Jul 18, 2023 · 14 comments · Fixed by #38


@billdenney

In the example below, the individual simulated values look reasonable, but the summarized simulation quantiles (the lo, md, and hi columns) are far from the observed values. My guess is that there is an error creating the bins in the simulated data.

suppressPackageStartupMessages(library(tidyvpc, quietly = TRUE))
library(nlmixr2, quietly = TRUE)

one_compartment <- function() {
  ini({
    tka <- log(1.57); label("Ka")
    tcl <- log(2.72); label("Cl")
    tv <- log(31.5); label("V")
    eta_ka ~ 0.6
    eta_cl ~ 0.3
    eta_v ~ 0.1
    add_sd <- 0.7
  })
  # and a model block with the error specification and model specification
  model({
    ka <- exp(tka + eta_ka)
    cl <- exp(tcl + eta_cl)
    v <- exp(tv + eta_v)
    d/dt(depot) <- -ka * depot
    d/dt(center) <- ka * depot - cl / v * center
    cp <- center / v
    cp ~ add(add_sd)
  })
}

data_model <- theo_sd
data_model$WTSTRATA <- ifelse(data_model$WT < median(data_model$WT), "Low weight", "High weight")

fit <- nlmixr2(one_compartment, data_model, est="saem", saemControl(print=0))
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of saem model...
#> ✔ done
#> → finding duplicate expressions in saem model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> → optimizing duplicate expressions in saem model...
#> [====|====|====|====|====|====|====|====|====|====] 0:00:00
#> ✔ done
#> rxode2 2.0.13.9000 using 8 threads (see ?getRxThreads)
#>   no cache: create with `rxCreateCache()`
#> Calculating covariance matrix
#> → loading into symengine environment...
#> → pruning branches (`if`/`else`) of saem model...
#> ✔ done
#> → finding duplicate expressions in saem predOnly model 0...
#> → finding duplicate expressions in saem predOnly model 1...
#> → optimizing duplicate expressions in saem predOnly model 1...
#> → finding duplicate expressions in saem predOnly model 2...
#> ✔ done
#> → Calculating residuals/tables
#> ✔ done
#> → compress origData in nlmixr2 object, save 7288
#> → compress phiM in nlmixr2 object, save 64048
#> → compress parHist in nlmixr2 object, save 9760
#> → compress saem0 in nlmixr2 object, save 30704

fit_vpcsim <- vpcSim(object = fit, keep = "WTSTRATA")

vpc <-
  observed(data_model, x=TIME, y=DV) %>%
  simulated(fit_vpcsim, x=time, y=sim) %>%
  stratify(~ WTSTRATA) %>%
  binning(bin = "jenks") %>%
  vpcstats()

vpc$stats
#>        WTSTRATA         bin   xbin qname       y         lo        md        hi
#>  1: High weight    [0,0.63)  0.000 q0.05  0.0000 -0.5566503 0.2496746  1.331612
#>  2: High weight    [0,0.63)  0.000  q0.5  0.7400  3.8921545 5.3157246  6.796549
#>  3: High weight    [0,0.63)  0.000 q0.95  7.2290  7.4868713 9.0411919 10.989930
#>  4: High weight [0.63,2.13)  1.135 q0.05  6.3265 -0.6620055 0.4778264  2.732001
#>  5: High weight [0.63,2.13)  1.135  q0.5  8.0000  3.9236263 5.5316531  6.754872
#>  6: High weight [0.63,2.13)  1.135 q0.95  9.9540  7.0465457 8.7930168 11.017270
#>  7: High weight [2.13,3.82)  3.530 q0.05  5.5690 -0.6199534 1.0852235  4.470570
#>  8: High weight [2.13,3.82)  3.530  q0.5  6.8500  2.6005538 5.2621121  7.691324
#>  9: High weight [2.13,3.82)  3.530 q0.95  8.1280  5.8337147 8.2528641 10.542600
#> 10: High weight  [3.82,5.1)  5.020 q0.05  5.1590 -0.3739470 0.9844942  2.876889
#> 11: High weight  [3.82,5.1)  5.020  q0.5  6.0800  3.6105409 5.2499374  6.915303
#> 12: High weight  [3.82,5.1)  5.020 q0.95  8.0700  6.4690981 8.1877277 10.308895
#> 13: High weight  [5.1,7.17)  7.030 q0.05  4.2330 -0.6383082 0.9801091  3.565333
#> 14: High weight  [5.1,7.17)  7.030  q0.5  5.4000  2.6738969 5.2907440  7.170825
#> 15: High weight  [5.1,7.17)  7.030 q0.95  8.0930  5.7688873 7.8856561 10.509108
#> 16: High weight [7.17,9.22)  9.000 q0.05  4.1490 -0.6912736 0.9872804  4.184987
#> 17: High weight [7.17,9.22)  9.000  q0.5  4.5700  3.3223763 5.2311310  7.037905
#> 18: High weight [7.17,9.22)  9.000 q0.95  6.4220  5.9220748 8.0771765 10.445583
#> 19: High weight [9.22,12.2) 12.000 q0.05  2.8460 -0.5199907 0.9724414  3.633877
#> 20: High weight [9.22,12.2) 12.000  q0.5  3.1600  3.3775260 5.2110285  7.159039
#> 21: High weight [9.22,12.2) 12.000 q0.95  5.4150  6.1720697 8.1718801 10.498052
#> 22: High weight [12.2,24.6] 24.235 q0.05  0.9070 -0.5749832 0.9416357  3.906557
#> 23: High weight [12.2,24.6] 24.235  q0.5  1.1350  2.5640919 5.1491458  6.968943
#> 24: High weight [12.2,24.6] 24.235 q0.95  3.5530  6.5890284 8.4005849 10.502630
#> 25:  Low weight    [0,1.02)  0.250 q0.05  0.0000 -0.7821223 0.2710994  1.493815
#> 26:  Low weight    [0,1.02)  0.250  q0.5  1.2500  3.9637494 5.3436445  6.676958
#> 27:  Low weight    [0,1.02)  0.250 q0.95  7.9820  7.2208578 8.8325933 10.707706
#> 28:  Low weight [1.02,2.05)  1.990 q0.05  5.3675 -0.4897079 1.1370327  4.473624
#> 29:  Low weight [1.02,2.05)  1.990  q0.5  6.6950  3.2127944 5.3481969  7.463388
#> 30:  Low weight [1.02,2.05)  1.990 q0.95  9.6225  5.7939387 8.2028676 10.570098
#> 31:  Low weight [2.05,5.07)  3.575 q0.05  5.5125 -0.6938220 0.6628048  3.911220
#> 32:  Low weight [2.05,5.07)  3.575  q0.5  7.6950  3.1949944 5.2094577  7.385297
#> 33:  Low weight [2.05,5.07)  3.575 q0.95 10.0030  6.0166708 8.2223557 10.670607
#> 34:  Low weight [5.07,7.08)  7.020 q0.05  4.6100 -0.6466089 1.7813293  4.953796
#> 35:  Low weight [5.07,7.08)  7.020  q0.5  6.5900  1.1325489 5.3456813  7.855727
#> 36:  Low weight [5.07,7.08)  7.020 q0.95  8.2740  5.5283419 7.8334008 10.168535
#> 37:  Low weight [7.08,9.38)  9.030 q0.05  3.7740 -0.4946345 1.3240811  4.811784
#> 38:  Low weight [7.08,9.38)  9.030  q0.5  5.9000  2.6530119 5.1852825  7.431180
#> 39:  Low weight [7.08,9.38)  9.030 q0.95  7.6380  5.7928305 8.0354830 10.563015
#> 40:  Low weight [9.38,12.1) 12.050 q0.05  3.6980 -0.4840723 1.3381509  4.455000
#> 41:  Low weight [9.38,12.1) 12.050  q0.5  4.5700  2.4324854 5.3852527  7.784143
#> 42:  Low weight [9.38,12.1) 12.050 q0.95  6.8480  5.5211620 8.1486792 11.071007
#> 43:  Low weight [12.1,24.4] 24.115 q0.05  0.9325 -0.7706144 1.4873224  4.483279
#> 44:  Low weight [12.1,24.4] 24.115  q0.5  1.3700  2.1159037 5.1524565  7.671587
#> 45:  Low weight [12.1,24.4] 24.115 q0.95  2.6225  5.8568803 8.1987823 10.677722
#>        WTSTRATA         bin   xbin qname       y         lo        md        hi
plot(vpc)

Created on 2023-07-17 with reprex v2.0.2

@billdenney (Author)

A similar issue is happening with prediction corrections.

@billdenney (Author)

The issue appears to be related to these lines, where the data are assumed to be sorted identically between the observed and simulated data:

tidyvpc/R/vpcstats.R

Lines 96 to 100 in 1c1be0d

x <- obs$x
nrep <- length(ysim)/nrow(obs)
repl <- rep(1:nrep, each=nrow(obs))
sim <- data.table(x, y=ysim, repl)

I had not filtered for MDV == 0 in my data. (That's on me.) Is there a way that we can help users by assisting with this filtering or otherwise helping make sure that the observed and predicted data line up?
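
For reference, a minimal user-side fix is to subset the observed data to observation records before building the VPC (a sketch only, assuming the usual MDV convention; if the data use an EVID flag instead, filter on EVID == 0):

# Keep only observation records so that the observed rows line up one-to-one
# with each simulation replicate.
data_obs <- data_model[data_model$MDV == 0, ]

vpc <- observed(data_obs, x = TIME, y = DV) %>%
  simulated(fit_vpcsim, x = time, y = sim) %>%
  stratify(~ WTSTRATA) %>%
  binning(bin = "jenks") %>%
  vpcstats()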

@billdenney (Author)

I had a thought to help make sure that the data line up. We could:

  • store the x variable name from the observed data in the object,
  • allow the user to add the x variable name to the simulated data object (or require them to be the same), and
  • ensure that the x values are identical across replicates with something like stopifnot(all(data[[o$xobs_name]] == obs$x[repl])); see the sketch below.
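
A minimal sketch of the third point, using hypothetical names (xobs_name is not an existing tidyvpc field; ysim, obs, data, and o mirror the internals quoted above):

# Hypothetical alignment check inside simulated(): every replicate must repeat
# the observed x values in the same order. o$xobs_name and the surrounding
# names are illustrative, not existing tidyvpc internals.
nrep <- length(ysim) / nrow(obs)
stopifnot(nrep == round(nrep))
repl <- rep(seq_len(nrep), each = nrow(obs))
stopifnot(all(data[[o$xobs_name]] == obs$x[repl]))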

@certara-jcraig (Collaborator)

Thanks for the PRs, @billdenney. I'll be reviewing/testing today and will likely be merging all. Regarding the issue with the ordering of data: yes, we do have requirements for observed/simulated data pre-processing, and we put the onus on the user to ensure the data are arranged correctly and filtered for MDV == 0: https://certara.github.io/tidyvpc/#data-preprocessing

I think your solution to ensure that the data line up is viable, though.

@billdenney (Author)

@certara-jcraig Thanks! Hopefully I didn't overload you here. When I get excited about a new tool, I jump in with both feet! 😄

@certara-jcraig (Collaborator)

@billdenney haha, no problem; we are happy to have your contributions here on the road to making tidyvpc the 'go-to' package for VPCs in R! Thanks again for your help.

@smouksassi

Please note that we can do an "external VPC" with tidyvpc, i.e., use a previous simulation from a model and contrast it with newly observed data; that is why we allowed simdata to have a row count that is not a replicate multiple of obsdata.
Also, when you filter BLQ on obs and sim, there is no guarantee that sim will end up being a multiple of obs.

@certara-jcraig (Collaborator)

Thanks for the comment, @smouksassi, good point. @billdenney, it looks like we should in fact replace the stop() with a warning() so as to support the use case Samer mentions.
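
For example, a sketch of the relaxed check (names mirror the internals quoted above; this is not the merged change):

# Sketch: downgrade the replicate-count check from an error to a warning so
# that external-VPC workflows, where the simulated rows need not be an integer
# multiple of the observed rows, can still proceed.
nrep <- length(ysim) / nrow(obs)
if (nrep != round(nrep)) {
  warning("Simulated rows are not an integer multiple of observed rows; ",
          "verify that the observed and simulated data line up.")
}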

@billdenney (Author)

@smouksassi and @certara-jcraig, thanks for clarifying that use case. It's an important one, but it's not clear to me how the current code would align the replicates correctly in that case: the prior code used the x-values from the observed data. It seems like we would need a different method of assigning x-values to the simulated data for that use case to work. If I'm missing something, please let me know.

@smouksassi

I agree we should make a use case and give it more thought. For now, we are binning the x from the observed data and forcing the bins to be the same on the sims (but I will have to check).

@billdenney (Author)

I've been thinking about this a bit more, and I'm not certain how @smouksassi's use case functions. To be sure that we're all clear, my interpretation of the two use cases described here is:

  1. Observed data and simulations come from the same model. Observations are what was used to generate the model.
  2. Observed data was generated after the model; simulations come from a model generated previously.

The first case above works with the current tidyvpc code, and it needs to be maintained with the current functionality.

As I'm thinking more about the second use case, it's not exactly clear how the simulated data would be used for comparison without regenerating the simulations. As a simple example, if a model were built on single-dose data and the new data were multi-dose, then a multi-dose simulation would be required for the VPC to be a reasonable comparison. In that case, the current workflow would still function for comparing an old model to the current data; the VPC simulations would need to be updated with the current data.

Are you thinking of something else, @smouksassi?

@smouksassi

Hi @billdenney, I gave it a second thought, and yes, I agree:
if the user wants to do an external VPC, we still need to fit a max iteration = 0 run to get the PRED and the design of the observed data, so we know how to bin, split, etc.

Now back to my other concern:
#22

In the original pcVPC paper discussion, one of the methods is to omit BLQ records in the same way in the observed and simulated data and then do the pred correction. That was the spirit of why we allowed pred correction with filtering of BLQ, but as per the issue above, the argument was never ported and got lost in translation :D

The workaround I am doing now is:

library(dplyr)
obsdata <- obsdata %>%
  filter(DV > BLQ)
simdata <- simdata %>%
  filter(DV > BLQ)

With the above, there is no guarantee that the sim data is a multiple of the obs data, because the simulation can vary how many BLQ records we get, and then we cannot use tidyvpc anymore.

We can keep it as a warning or reintroduce the ability to filter BLQ later in the pipeline.

@billdenney (Author)

@smouksassi, thanks for thinking it through with me. In that case, should we close this issue and focus the BLQ/ALQ discussion in #22?

@smouksassi

I think we should discuss either not allowing the user to do a pred-corrected VPC without filtering out the BLQ, or allowing it and warning. Either way, we need to allow filtering BLQ (see the discussion in
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085712/).
I think the latest PsN will allow and warn when the user does pred correction with M3, but I need to test.

Ways to handle data censored due to limit of detection in VPCs have previously been suggested ([6](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085712/#CR6),[21](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085712/#CR21)). These approaches are not directly applicable to pcVPCs and pvcVPCs since the observations lose their rank order after prediction and variability correction. Observations censored due to limit of detection will therefore need to be handled in the same way as previously suggested for drop-out censoring of observations ([22](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3085712/#CR22)). This approach is based on omitting censored observations (e.g., BQL) in a similar manner for both the observed and simulated datasets before calculating the VPC statistics. In this case, the interpretation of the pcVPC/pvcVPC for the continuous observations will be heavily dependent on an accurate prediction of the data censoring. Therefore, diagnostics ensuring accurate prediction of censoring frequency (e.g., percentage of observation below LLOQ) versus the independent variable will be important before assessing the pcVPC/pvcVPC for the continuous observations.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2691472/

