Hi, I am new to GPyTorch. I am running the LMC example from the Jupyter notebook provided in the docs link.
In the "Train the Model" section, I tried to turn off gradient tracking for the outputscale parameter in the kernel and the noise parameter in the likelihood using `model.covar_module.raw_outputscale.requires_grad = False` and `likelihood.raw_noise.requires_grad = False`.
It turned out that for the LMC model, I couldn't turn off gradient tracking for the outputscale as expected. I printed the `requires_grad` attribute before and after the line `output = model(train_x)`, and it seems that gradient tracking is turned back on internally by a line in `lmc_variational_strategy`.
The Jupyter cell I used is the following:
```python
# this is for running the notebook in our testing framework
import os
smoke_test = ('CI' in os.environ)
num_epochs = 1 if smoke_test else 500

model.covar_module.raw_outputscale.requires_grad = False
likelihood.raw_noise.requires_grad = False

model.train()
likelihood.train()

optimizer = torch.optim.Adam([
    {'params': model.parameters()},
    {'params': likelihood.parameters()},
], lr=0.1)

# Our loss object. We're using the VariationalELBO, which essentially just computes the ELBO
mll = gpytorch.mlls.VariationalELBO(likelihood, model, num_data=train_y.size(0))

# We use more CG iterations here because the preconditioner introduced in the NeurIPS paper
# seems to be less effective for VI.
epochs_iter = tqdm.tqdm_notebook(range(num_epochs), desc="Epoch")
for i in epochs_iter:
    # Within each iteration, we will go over each minibatch of data
    optimizer.zero_grad()
    print(model.covar_module.raw_outputscale.requires_grad, "before")  # print
    print(likelihood.raw_noise.requires_grad, "noise_before")
    output = model(train_x)
    print(model.covar_module.raw_outputscale.requires_grad, "after")  # print
    print(likelihood.raw_noise.requires_grad, "noise_after")
    loss = -mll(output, train_y)
    epochs_iter.set_postfix(loss=loss.item())
    loss.backward()
    optimizer.step()
```
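For comparison, here is the behavior I expected: in plain PyTorch, a parameter frozen with `requires_grad_(False)` stays frozen through forward/backward and receives no gradient. This is a minimal sketch independent of the GPyTorch model above (the `Linear` module and names are illustrative, not from the example notebook):

```python
import torch

# Freeze one parameter of a small module and verify it stays frozen
# through a forward/backward pass.
lin = torch.nn.Linear(2, 1)
lin.bias.requires_grad_(False)  # freeze the bias

# Pass only parameters that still require gradients to the optimizer,
# so the frozen parameter cannot be updated even if it had a gradient.
trainable = [p for p in lin.parameters() if p.requires_grad]
opt = torch.optim.Adam(trainable, lr=0.1)

x = torch.randn(8, 2)
loss = lin(x).pow(2).mean()
loss.backward()
opt.step()

print(lin.bias.requires_grad)   # stays False after forward/backward
print(lin.bias.grad)            # None: no gradient accumulated for the frozen parameter
```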
Thanks!