
calibration to hospital occupancy and waste in calibration_ww.R not working as expected #94

Open
mayaearn opened this issue Jun 21, 2023 · 19 comments
Labels
calibration Component: Model calibration tools

Comments

@mayaearn
Collaborator

Calibration to hospital occupancy and wastewater data seems to be volatile: slight changes in the breakpoints (small shifts in timing, or adding/removing breakpoints) cause large changes in the calibration, or cause the fit to fail entirely.

@bbolker
Collaborator

bbolker commented Jun 25, 2023

Can you post a link to a reproducible example here please ... ?

@mayaearn
Collaborator Author

I have just pushed misc/experiments/calibration/calibration_ww.R, which, if sourced, should give a very good hospital occupancy fit. If you uncomment lines 213 and 214, you will be calibrating to both hospital occupancy and waste, which gives a very bad fit.

@stevencarlislewalker
Member

Thank you @mayaearn this is very helpful.

@stevencarlislewalker stevencarlislewalker added the calibration Component: Model calibration tools label Jun 26, 2023
@stevencarlislewalker
Member

I do not have time to dig into this at the moment, but when I do have time, this is what I will do:

  1. Confirm that there are no technical issues with the script (which would be completely understandable given the state of the documentation).
  2. Hook up radial basis function time-variation.
  3. Add what you might call component-wise hazard corrections for key state variables.

@mayaearn
Collaborator Author

Sounds great! Thank you! Just as something to look out for: I realized recently that the Champredon paper was not splitting hospital occupancy into the four compartments that macpan uses, so I combined the compartments H, H2, ICUd, and ICUs for the calibration to hospital occupancy. This is a change from what I was doing previously, and I wanted to mention it to avoid any confusion (it is also a possible location of new technical issues).
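The compartment combination described above can be sketched as follows (this is an illustrative snippet, not code from the repo; the column names match the compartment names mentioned, but the simulation-output structure is assumed):

```r
## Hypothetical simulation output with one column per hospital state,
## one row per time step (values made up for illustration):
sim <- data.frame(H = c(5, 8), H2 = c(2, 3), ICUs = c(1, 1), ICUd = c(0, 1))

## Combine the four compartments into a single hospital-occupancy series
## to compare against observed occupancy data:
hosp_occupancy <- unname(rowSums(sim[, c("H", "H2", "ICUs", "ICUd")]))
hosp_occupancy  # total occupancy per time step
```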

@bbolker
Collaborator

bbolker commented Jun 26, 2023

I have a couple of questions.

@mayaearn: the suggested lines to uncomment (213-214) use a Poisson log-likelihood for wastewater (although there are other commented lines lying around that use a log-normal instead): was that intentional? It seems to me that could be problematic / add unrelated difficulties into the mix.

@stevencarlislewalker: it seems a little odd to me that we create complex $nlminb() and $optim() functions to do the optimization. It's not 100% clear to me what the design goal of doing things that way is; among other things, it means we need to create a new method every time someone wants to try out a different optimization function (e.g. DEoptim, nloptr, etc.) ... ?

@mayaearn
Collaborator Author

@bbolker Yes, this was intentional. The log-normal and Poisson seemed to be giving very similar fits for just wastewater, so I assumed that the Poisson log-likelihood, which Steve had improved with various clamping fixes, was the better option. I will look into whether switching back to log-normal improves anything (given that I've changed a few parts of the calibration since making this switch) and get back to you!
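For context, the two error models being compared can be sketched like this (a standalone illustration on simulated data, not the repo's code; `ww_sim` and `ww_obs` are hypothetical names). Note how badly a Poisson likelihood can fit wastewater-style data whose noise grows with the mean:

```r
set.seed(1)
ww_sim <- exp(seq(log(10), log(1000), length.out = 50))   # simulated trajectory
ww_obs <- rlnorm(50, meanlog = log(ww_sim), sdlog = 0.3)  # noisy "observations"

## Poisson negative log-likelihood (needs clamping: lambda > 0, counts rounded)
nll_pois <- -sum(dpois(round(ww_obs), lambda = pmax(ww_sim, 1e-8), log = TRUE))

## Log-normal negative log-likelihood on the natural scale
sdlog_hat <- sd(log(ww_obs) - log(ww_sim))
nll_lnorm <- -sum(dlnorm(ww_obs, meanlog = log(ww_sim), sdlog = sdlog_hat,
                         log = TRUE))

c(poisson = nll_pois, lognormal = nll_lnorm)
```

The Poisson variance equals its mean, so for large concentrations it implies far less noise than wastewater data typically show; the log-normal allows a constant coefficient of variation instead.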

@stevencarlislewalker
Member

@bbolker, one may always use $ad_fun() with whatever optimization function they want.

The idea of the other ones is to let developers of higher-level interfaces swap out optimizers without needing to change interface code. For example, nlminb and optim use different names for (at least some of) the initial-parameter, objective-function, gradient, and Hessian arguments, but $nlminb() and $optim() handle these arguments for you. Other arguments currently have hooks that allow the user to treat them as they would when using the parent optimization function. But at some point I'd like to write other methods for specifying arguments that are common to the optimizers but treated differently by each (e.g. maximum number of iterations).

I have three goals:

  1. Normalize the differences between the optimization functions.
  2. Leave the unique options of each optimizer user-exposed.
  3. Make it easy for anyone who knows how to hook a TMB AD function up to an optimization function to do so.

Did you have a particular design in mind? I'm certainly open to adjusting the approach as long as we hit these three goals.

I have a preference for avoiding an optimization function that takes many interacting arguments, but then again I have a stronger preference for making anyone who is interested in the project happy.
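The normalization idea described above can be sketched with plain base-R optimizers (this is an illustrative pattern, not macpan2's actual API; `fit_with` is a hypothetical name):

```r
## One front end, two back ends: nlminb() and optim() take differently
## named arguments and return differently named results, so the wrapper
## translates both into a common shape.
fit_with <- function(engine = c("nlminb", "optim"), start, objective, ...) {
  engine <- match.arg(engine)
  if (engine == "nlminb") {
    res <- nlminb(start = start, objective = objective, ...)
    list(par = res$par, value = res$objective, convergence = res$convergence)
  } else {
    res <- optim(par = start, fn = objective, ...)
    list(par = res$par, value = res$value, convergence = res$convergence)
  }
}

## Same call shape for both engines:
f <- function(p) sum((p - c(1, 2))^2)
fit_with("nlminb", start = c(0, 0), objective = f)$par
fit_with("optim",  start = c(0, 0), objective = f)$par
```

The cost of this design is exactly the one raised above: each new optimizer (DEoptim, nloptr, ...) needs its own translation branch or method.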

@mayaearn
Collaborator Author

I have just pushed a new version of calibration_ww.R (and opt_parameters.csv) so that log-normal can be used for waste calibration. This does work somewhat better for waste calibration specifically, but the fit to both hospital occupancy and waste (by uncommenting lines 203 to 208) is still very bad.

@bbolker
Collaborator

bbolker commented Jun 26, 2023

FWIW I took a brief crack at using DEoptim (differential evolution) and optim(..., method = "SANN") (simulated annealing) for this, but so far without much luck (DEoptim doesn't like NaN return values, so it gets stuck a lot; SANN was giving me garbage, but I didn't look too carefully at why). Also FWIW, I suspect (but certainly can't prove) that we're in a domain where macpan2 and macpan1.5 are not fundamentally that different, and the problems we're having are more fundamental "calibration of high-dimensional, not-very-numerically-stable objective functions is hard" territory ...

One potential quick-and-dirty fix would be to set up a range of randomly generated starting values (all reasonable, but covering a range) and look at the variation among the results.
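The multistart idea above might look something like this (a self-contained sketch; the bumpy objective is a stand-in for the real calibration objective, and all names are hypothetical):

```r
set.seed(42)

## Stand-in for the calibration objective: smooth but multimodal.
obj <- function(p) (p[1]^2 - p[2])^2 + (p[1] - 1)^2 + 0.1 * sin(5 * p[1])

## Draw a range of reasonable random starting values and fit from each.
starts <- replicate(20, runif(2, min = -2, max = 2), simplify = FALSE)
fits   <- lapply(starts, function(s) nlminb(start = s, objective = obj))
vals   <- vapply(fits, function(f) f$objective, numeric(1))

## Large variation among the converged objective values flags
## sensitivity to starting values (multiple local optima).
range(vals)
best <- fits[[which.min(vals)]]
```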

@mayaearn
Collaborator Author

@bbolker More support for your suspicion: late last week I tried to set up this calibration in MacPan 1.5 to compare issues. Though I wasn't facing exactly the same errors (due to a few things I wasn't replicating perfectly), I was facing similar issues with calibration / volatile breakpoints.

I can likely set up the fix you suggest. When you say starting values, do you mean state values or simulation starting dates (or something else)?

@bbolker
Collaborator

bbolker commented Jun 26, 2023

@stevencarlislewalker , I should probably spawn another issue.

What lme4 does is to specify that optimizers must conform to a particular interface (names of objective function, starting values, etc.; names of return values etc.), and provides wrappers for some of them. It doesn't work perfectly. optimx was also an attempt to solve this problem, although I have also found it clunky.

From the lme4 documentation:

> Built-in optimizers are "Nelder_Mead", "bobyqa" (from the minqa package), "nlminbwrap" (using base R's nlminb) and the default for lmerControl(), "nloptwrap". Any minimizing function that allows box constraints can be used provided that it (1) takes input parameters fn (function to be optimized), par (starting parameter values), lower and upper (parameter bounds) and control (control parameters, passed through from the control argument) and (2) returns a list with (at least) elements par (best-fit parameters), fval (best-fit function value), conv (convergence code, equal to zero for successful convergence) and (optionally) message (informational message, or explanation of convergence failure). Special provisions are made for bobyqa, Nelder_Mead, and optimizers wrapped in the optimx package; to use the optimx optimizers (including L-BFGS-B from base optim and nlminb), pass the method argument to optim in the optCtrl argument (you may need to load the optimx package manually using library(optimx)).
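A minimal optimizer wrapper conforming to the interface quoted above (fn/par/lower/upper/control in, par/fval/conv/message out) might look like this (an illustrative sketch, not lme4's actual nlminbwrap):

```r
## Wrap base R's nlminb() so it satisfies the lme4-style contract:
## inputs fn, par, lower, upper, control; output list with par, fval,
## conv (0 = success), and message.
nlminb_lme4style <- function(fn, par, lower = -Inf, upper = Inf,
                             control = list()) {
  res <- nlminb(start = par, objective = fn,
                lower = lower, upper = upper, control = control)
  list(par = res$par, fval = res$objective,
       conv = res$convergence, message = res$message)
}

out <- nlminb_lme4style(function(p) sum(p^2), par = c(1, 1),
                        lower = -5, upper = 5)
```

The appeal of this contract is that the caller never needs to know which optimizer is underneath, which is essentially the design question being discussed for macpan2's $nlminb()/$optim() methods.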

@bbolker
Collaborator

bbolker commented Jun 26, 2023

@mayaearn : by starting values I meant the initial values of the parameters in the objective function. (As @stevencarlislewalker mentioned, sometimes starting state values are also parameters ...)

@mayaearn
Collaborator Author

@bbolker This took longer than expected, but I have just pushed a document, misc/experiments/calibration/issues/CalibrationIssues.Rmd, that produces a bunch of calibrations with different starting values. It seems that maybe the answer to the unrealistic-fitted-values-during-calibration issue is simply to not fit mu (the proportion of symptomatic infections that are mild). The only reason I was fitting mu is that I had been doing so when calibrating with macpan1.5. I want to add a few more examples to this document but have pushed what I have so far.

@stevencarlislewalker
Member

Thanks @mayaearn. I believe the reason for fitting mu is that, when fitting to both case and hospital data, you need some way to explain the proportion of cases that go to hospital, and mu is a reasonable parameter for doing that. In my non-WW work for PHO I absolutely needed to fit mu, or something like it, to include all of these data streams in the fit.

I can't dig in now but just wanted to share my hot take in case it helps.

@bbolker
Collaborator

bbolker commented Jun 30, 2023

I have pushed a version with some amendments and comments. Some issues:

  • Your code refers to a parameter file (and file path) I don't have. I've modified the code to skip this step if the file is missing; it doesn't really hurt anything, but it would be good to fix.
  • Whenever the convergence failure is due to "max evaluations reached" or "max iterations reached", you can easily (?) address the problem by increasing eval.max and iter.max inside the control = argument to nlminb. This obviously makes the fit take longer, and in the first example it seems to need a very large number of iterations/evaluations (I gave up after a while; I'm not sure where it's trying to go and haven't looked carefully), but it definitely leads to a sensible fit.
  • I have a vague suspicion/hypothesis that disease severity (mu) is unidentifiable here: it should only be identifiable when we are fitting to a data set that includes both cases (which give some measure of overall infections) and hospitalizations. Otherwise (since in our model 'severe' is essentially a synonym for 'hospitalized') we have no way of estimating the ratio of hospitalizations to infections.

Another way to think about this: there is some unknown relationship between infections and wastewater concentrations (determined by viral shedding, dilution in the waste stream, etc etc) and some slightly less obscure relationship between infections and hospitalizations (determined by proportion symptomatic/proportion of symptomatic cases that are severe/hospital capacity etc.). If we only have WW and hospitalization, we can fit at most one of the parameters that govern these relationships.

I haven't looked to see which parameters you are fitting/allowing to vary, but my guess is that you have more than one of the ones described above ...
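The identifiability argument above can be demonstrated with a toy model in which the wastewater signal depends only on the *product* of a severity-like parameter and a shedding rate (all names and numbers here are hypothetical, not from the macpan model):

```r
set.seed(1)
infections <- 100 * exp(0.05 * (1:40))        # known infection trajectory
true_prod  <- 0.6 * 2.5                        # severity * shedding rate
ww <- rlnorm(40, meanlog = log(true_prod * infections), sdlog = 0.2)

## Only exp(p1) * exp(p2) enters the likelihood, so p1 and p2 are
## individually unidentifiable (parameters on the log scale).
nll <- function(p) {
  -sum(dlnorm(ww, meanlog = log(exp(p[1]) * exp(p[2]) * infections),
              sdlog = 0.2, log = TRUE))
}

fit1 <- nlminb(start = log(c(0.6, 2.5)), objective = nll)
fit2 <- nlminb(start = log(c(0.2, 2.0)), objective = nll)  # different start

## Different starts recover different parameter pairs, but (nearly)
## identical products -- the only thing the data can pin down.
c(prod1 = prod(exp(fit1$par)), prod2 = prod(exp(fit2$par)))
```

This is the WW-and-hospitalization version of the point above: at most one of the parameters governing the infection-to-signal relationships can be fit.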

@mayaearn
Collaborator Author

Thank you! I will take the documentation advice that you added in the comments, and I'm currently trying some increases in max iterations.

That makes sense; I am currently fitting both the shedding rate and mu, so going forward, I will not fit mu when fitting to WW and hospitalization.
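The iteration-cap increase suggested above looks like this with nlminb (a generic sketch; nlminb's defaults are eval.max = 200 and iter.max = 150, and the objective here is a placeholder for the real one):

```r
## Placeholder objective standing in for the calibration objective:
f <- function(p) sum((p - 3)^2)

## Raise the caps well past the defaults so "max evaluations reached" /
## "max iterations reached" failures become genuine convergence checks.
fit <- nlminb(start = rep(0, 5), objective = f,
              control = list(eval.max = 2000, iter.max = 1000))
fit$convergence  # 0 indicates successful convergence
```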

@bbolker
Collaborator

bbolker commented Jun 30, 2023

Out of curiosity, was this scenario (fitting to H and WW while estimating both shedding rate and symptom severity prob) something that was working with Macpan 1.5? It would be somewhat surprising ...

@mayaearn
Collaborator Author

mayaearn commented Jul 3, 2023

You are correct; this scenario doesn't seem to work in MacPan 1.5 either. When I was fitting mu last year using MacPan 1.5, I was also calibrating with report data. When I calibrate only with hospitalizations and waste, mu fits okay but the waste parameters fit very badly (which makes sense given your explanation above).
