🐛 Bug
When I try to calculate the gradient of the loss w.r.t. the input, I get a different result on every run when I pass a single point to the GP.
To reproduce
The output shows a different gradient on each of the five iterations, even though the input never changes.
Expected Behavior
I expect the same result in each iteration. The bug appears only when I evaluate the gradient at a single input point. If I use a batch size greater than 1:
The output is correct: the same gradient is printed in every iteration.
System information