[performance issue] model.fantasize() is significantly slower on GPU #492
Comments
Thanks for raising this, this is an upstream issue that we are aware of: cornellius-gp/gpytorch#1157. Really, it is a PyTorch issue with …
Thanks for the quick response, Max!
… fixes the issue. It reduced the runtime from ~10000 ms to ~33 ms.
@Balandat …
Yeah that makes sense to me. One thing I do want to do once #1102 goes in is to just check whether …
Issue description
Generating fantasy models using `model.fantasize()` takes significantly longer on GPU compared to CPU. The example below is extracted from the evaluation of `raw_samples` while optimizing `qKnowledgeGradient`. Running the code below, I get ~60 ms using CPU and ~10000 ms using GPU. I traced the issue down to `gpytorch.models.exact_prediction_strategies.py`, line 220: `Q, R = torch.qr(new_root)`. That line appears to be the bottleneck; however, I do not know what is happening beyond that point.
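For reference, the suspected line can be timed in isolation with a small sketch like the one below. The batch shape, matrix size, and iteration count are illustrative assumptions, not values taken from the original trace:

```python
import time

import torch


def time_qr(device, shape=(64, 200, 10), iters=10):
    """Time torch.qr on a batch of tall, skinny matrices (sizes are illustrative)."""
    x = torch.randn(*shape, device=device)
    torch.qr(x)  # warm-up so one-time CUDA initialization is not measured
    if device.type == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    for _ in range(iters):
        Q, R = torch.qr(x)
    if device.type == "cuda":
        torch.cuda.synchronize()
    return (time.time() - start) / iters * 1e3  # ms per call


print(f"cpu : {time_qr(torch.device('cpu')):.2f} ms")
if torch.cuda.is_available():
    print(f"cuda: {time_qr(torch.device('cuda')):.2f} ms")
```

On affected PyTorch versions, batched QR on small, skinny matrices can be dramatically slower on GPU than on CPU, which is consistent with the gap reported above.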
Code example

Run the code below with `device = torch.device('cuda')` and `device = torch.device('cpu')`.
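The original snippet did not survive extraction, so the following is only a minimal stand-in that exercises the same call path under stated assumptions: a `SingleTaskGP` on random double-precision data, then `model.fantasize()` over a batch of candidate points, loosely mimicking `raw_samples` evaluation in `qKnowledgeGradient`. All sizes and sampler settings are assumed, and the sampler constructor follows the BoTorch API from around the time of this issue (newer releases take `sample_shape` instead of `num_samples`):

```python
import time

import torch
from botorch.models import SingleTaskGP
from botorch.sampling import SobolQMCNormalSampler

device = torch.device("cuda")  # switch to torch.device("cpu") to compare

# Hypothetical training data; sizes are illustrative, not the reporter's values.
train_X = torch.rand(20, 6, dtype=torch.double, device=device)
train_Y = torch.randn(20, 1, dtype=torch.double, device=device)
model = SingleTaskGP(train_X, train_Y)

# Batch of candidate points, roughly mimicking raw_samples evaluation in qKG.
X_fantasy = torch.rand(100, 1, 6, dtype=torch.double, device=device)
sampler = SobolQMCNormalSampler(num_samples=64)  # newer BoTorch: sample_shape=torch.Size([64])

if device.type == "cuda":
    torch.cuda.synchronize()
start = time.time()
fantasy_model = model.fantasize(X_fantasy, sampler)
if device.type == "cuda":
    torch.cuda.synchronize()
print(f"model.fantasize took {(time.time() - start) * 1e3:.1f} ms on {device}")
```

Switching `device` between `'cuda'` and `'cpu'` should reproduce the kind of gap described above on affected PyTorch versions.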
System Info
Please provide information about your setup, including