Does botorch support nested models? #3012
-
Hi everybody, I'm currently trying to implement multi-fidelity BO with 2 discrete fidelity levels. A popular Bayesian model for this is to fit a GP f_L(x) to the low-fidelity data, then assume the high-fidelity data can be modeled as f_H(x) = rho * f_L(x) + delta(x), where rho is a learned constant and delta is a GP modeling the residuals. After the low-fidelity GP is fit, rho and delta are fitted simultaneously via joint MLE. Is such a fitting procedure possible to implement with a BoTorch custom model? All the tutorials seem to use "monolithic" models that are initialized and then fit with a single call to fit_gpytorch_mll, which conflicts with this two-stage fitting. Any info is welcome, thanks!
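For illustration, here is a minimal NumPy sketch of the two-stage procedure described above. Simple polynomial least-squares models stand in for the two GPs (which makes the stage-2 joint fit of rho and delta collapse into a single linear least-squares problem); none of the names here are BoTorch API, and a real implementation would use GP marginal likelihoods instead.

```python
# Two-stage multi-fidelity fit: f_H(x) = rho * f_L(x) + delta(x).
# Polynomial least-squares models stand in for the GPs; rho is a learned scalar.
# This is an illustrative sketch, not BoTorch API.
import numpy as np

def fit_basis(X, y, degree):
    """Least-squares polynomial fit (stand-in for fitting a GP)."""
    Phi = np.vander(X, degree + 1)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return lambda Xq: np.vander(Xq, degree + 1) @ w

rng = np.random.default_rng(0)
X_lo = rng.uniform(0.0, 1.0, 50)   # plentiful low-fidelity data
X_hi = rng.uniform(0.0, 1.0, 10)   # scarce high-fidelity data

f_lo = lambda x: np.sin(6 * x)                    # low-fidelity truth
f_hi = lambda x: 2.0 * np.sin(6 * x) + 0.3 * x    # true rho = 2.0, delta(x) = 0.3*x

# Stage 1: fit the low-fidelity model on low-fidelity data only.
f_L = fit_basis(X_lo, f_lo(X_lo), degree=7)

# Stage 2: jointly fit rho and delta on the high-fidelity data.
# With a linear-in-parameters delta, the joint fit is one least-squares solve:
#   y_H = rho * f_L(x) + Phi_delta(x) @ w_delta
Phi_delta = np.vander(X_hi, 2)                    # linear delta: [x, 1]
A = np.column_stack([f_L(X_hi), Phi_delta])
theta, *_ = np.linalg.lstsq(A, f_hi(X_hi), rcond=None)
rho, w_delta = theta[0], theta[1:]
print(rho)  # close to the true value 2.0
```

The key structural point, which carries over to the GP version, is that the low-fidelity model's parameters are frozen after stage 1, and stage 2 only optimizes rho together with the residual model.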
Replies: 2 comments
-
Hi @Frxljord. We have a model that does something very similar to what you're describing. The method is described in this paper: https://dl.acm.org/doi/abs/10.1145/3690624.3709419. The model is trained by first fitting the source models (the low-fidelity model in your case) and then jointly training the weights together with the residual GP for target inference. I've exported the implementation in #3013
-
Hi, thank you! Your method does indeed reduce to the f_H(x) = rho * f_L(x) + delta(x) formulation when only one pre-trained GP is used. Thanks!