This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Modifying Knowledge Gradient for time-dependent kernels #578
Comments
This is an interesting problem. It's quite related to the multi-fidelity setting, where we take a measurement at some fidelity and then project to a "target fidelity". This is done in I imagine you can do something similar where you essentially return the posterior of the fantasy model evaluated at time I'm pretty swamped right now, but i can take a look later this week - hope the pointers above are helpful in the meantime. |
Thanks for the pointers -- I will give it a shot. Whenever you get the time, an illustration with some toy code would be great, since I am very new to the Pytorch paradigm. |
Hi @r-ashwin. You can achieve this by passing the See the following simple example where I pretend that
|
@saitcakmak Thanks for the tip -- I was not aware of the |
Let me see if I understand this correctly. You want KG to be evaluated using fantasies conditioned on some In that case, a simple wrapper around qKG (below) may work. You could probably achieve this by passing an
Ps. This wrapper approach is not fully compatible with the heuristic for generating the inner solutions. The heuristic would maximize |
Yes, my fantasy model is conditioned at |
Alternatively to the |
@saitcakmak The issue with the approach suggested above is that it wouldn't fantasize from
Hope this helps. |
@Balandat, it would work properly if used with I like your approach better since it is less hacky. One issue I see with it is that it loses the smart heuristic used to generate the |
@saitcakmak So it looks like I cannot use Update:I get the same error when using the multifidelity qKG. Happy to share that code as well if necessary, but did not want to clutter the space. It looks like the common problem for both cases is that the
|
It looks like I also noticed that you're using |
|
Actually, subclassing If I may suggest, forward compatibility between any |
@saitcakmak Just to make sure I understood correctly, in your |
I think you're running into a bug that I introduced here: botorch/botorch/optim/initializers.py Line 328 in 7e2a404
That line would raise an @Balandat, is it the case that
The
You cannot modify
To make sure inner optimization is done at T, you could pass In your case, it is much cleaner to use
You could then wrap Note: You cannot use |
Yes, this is a goal, but unfortunately that's not easy to do since the basic GPyTorch models don't carry around some of the metadata that we need in botorch. For instance, GPyTorch models don't always use an explicit outcome dimension, which makes their interpretation ambiguous without additional information. We can and should work on minimizing the discrepancies here, and there are probably some aspects that we can fix. But I fear that at least right now supporting fully generic GPyTorch model plug-in seems quite challenging. |
@saitcakmak Got it - thanks for the clarification. As of now both approaches suggested ( |
I tried the following but get an error. Per the API reference,
|
Hmm looks like this is some interaction between KG and FixedFeatureAcquisitionFunction. I don't see anything obviously wrong with your code, let me take a closer look tomorrow. |
I am guessing the bounds you use below are 2-dim, and so
|
Alternatively, this works (notice that I have used
|
@saitcakmak tried that and this is what I got. Looks like most of my issues come down to tensor dimension mismatch. Is there a place I can find all the definitions of tensor shape concretely defined? Or is defining all my tensors e.g., train_x and test_x, with shape
|
Oh, I see what is going on with this one.
This here queries
Yes, this should be equivalent. There are differences in how |
Your explanation makes sense and it does remove the error when |
I have a follow-up question, if you have any thoughts on this. Placing it here since it is related to the original question. Can I fantasize at multiple Is this related to the |
I can interpret this in two ways: i) you want to jointly fantasize at
Based on this documentation, assuming
|
@saitcakmak Awesome! Thanks! |
@saitcakmak When I use |
The expected behavior is that all calls to the
botorch/botorch/acquisition/knowledge_gradient.py Lines 405 to 407 in 7e2a404
Using fixed_features , when called from within gen_candidates_scipy , you will have X_eval[..., 1] = t , where X_eval is n x 1 x 2 . expand(X_eval) will then be n x 4 x 2 (following my_expand definition above) with expand(X_eval)[..., 0, 1] = t , expand(X_eval)[..., 1, 1] = t1 , expand(X_eval)[..., 1, 1] = t2 and expand(X_eval)[..., 1, 1] = t3 . Any fantasy model generated here will be jointly over these four solutions.
|
@saitcakmak , two remarks:
For V&V purposes, how can I ensure the output of |
Setting a larger optimization budget for
BoTorch doesn't implement this. You could easily write your own KG implementation. The class NestedKG(MCAcquisitionFunction):
def __init__(...):
# define your init here, you can mostly copy qKG
def forward(X: Tensor) -> Tensor:
qKnowledgeGradient.evaluate(self, X) # passing self here is crucial If the recommendations here do not solve the issue and you think there is a different bug in play, I'd be happy to look into it deeper if you share a reproducible example. |
Okay I will prepare a reproducible example and drop it here. One thing that is worth clarifying before that is that in the In this regard, passing an
Thanks for all your responses so far -- they were very useful! |
If you use You can install #594 via |
I see - thanks! Let me try both and see how it goes. Update: I was able to check that your implementation in #594 does indeed what I want. However, I am not sure I am able to see that the |
@Balandat @saitcakmak |
@r-ashwin I think the discrepancy you observe between I ran additional testing with
That is correct. You'd have to modify it and re-evaluate the inner solutions with botorch/botorch/acquisition/knowledge_gradient.py Lines 244 to 252 in d3d4497
Would be replaced by something like
I haven't tested this, but it should give the general idea. I've used similar implementations in the past, but they tend to be significantly slower than the one-shot approach. |
That's interesting because I am indeed subclassing PS: thanks also for the gradient tip. From what I have tested so far, the |
This issue was moved to a discussion.
You can continue the conversation there. Go to discussion →
Issue description
I want to modify KG for time-dependent problems as follows. Given
x in X
(some compact space) and0 <= t <= T
, I have a GP model with priorGP(mu, k_xt)
, wherek_xt = k_x * k_t
withk_x
capturing covariance in 'x' space andk_t
in 't' space. At timet
I have dataD_t = {(x_i, t_i), y_i }, i=1,...,n
andt>t_n
. I want to define KG as followsa_KG(x, t) = E_x'[max_x' mu(x', T) | {(x, t), y_i}]
where y_i is sampled from
GP(mu(x, t), k_xt) | D_t)
. In other words, my 'fantasy model' is at current timet
however, my 'inner optimization' problem maximizes the posterior atT
predicted via the fantasy model. Also my acquisition functiona_KG
is defined att
Question: How should I modify the
qKnowledgeGradient
class to achieve this, so I can take advantage of the efficient one-shot implementation of qKG? I have provided code for the GP I am using if you want to work with that.Any help is greatly appreciated! Please let me know if you need more information. Thanks!
(apologies for trying to write equations in Markdown)
System Info
Please provide information about your setup, including
The text was updated successfully, but these errors were encountered: