Using the example code for GP regression (updated to use the master branch of gpytorch) and manually setting `max_ls=1` and `lr=0.01` to force a line search failure:
```python
import math
import sys

import torch
import gpytorch
from matplotlib import pyplot as plt

sys.path.append('../../../PyTorch-LBFGS/functions/')
from LBFGS import FullBatchLBFGS

# Training data is 100 points in [0, 1] inclusive, regularly spaced
train_x = torch.linspace(0, 1, 100)
# True function is sin(2*pi*x) with Gaussian noise
train_y = torch.sin(train_x * (2 * math.pi)) + torch.randn(train_x.size()) * 0.2

# We will use the simplest form of GP model, exact inference
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)

# initialize likelihood and model
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = ExactGPModel(train_x, train_y, likelihood)

# Find optimal model hyperparameters
model.train()
likelihood.train()

# Use full-batch L-BFGS optimizer
optimizer = FullBatchLBFGS(model.parameters(), lr=0.01)

# "Loss" for GPs - the marginal log likelihood
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

# define closure
def closure():
    optimizer.zero_grad()
    output = model(train_x)
    loss = -mll(output, train_y)
    return loss

loss = closure()
loss.backward()

training_iter = 10
for i in range(training_iter):
    # perform step and update curvature
    options = {'closure': closure, 'current_loss': loss, 'max_ls': 1}
    loss, _, lr, _, F_eval, G_eval, _, _ = optimizer.step(options)

    print('Iter %d/%d - Loss: %.16f - LR: %.3f - Func Evals: %0.0f - Grad Evals: %0.0f - Raw-Lengthscale: %.16f - Raw_Noise: %.16f' % (
        i + 1, training_iter, loss.item(), lr, F_eval, G_eval,
        model.covar_module.base_kernel.raw_lengthscale.item(),
        model.likelihood.raw_noise.item()
    ))
```
Is the inplace option set to True? (It is on by default.)
If the `inplace` option is set to True, the algorithm tracks only the search direction and the current and previous step lengths, so it attempts to recover the original iterate by updating the parameters by `-alpha_k * p_k`. This may introduce numerical error, which leads to a slight perturbation of the original iterate. (This was designed with neural networks in mind, where storing and constantly reloading the original set of parameters may not be ideal.)
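The numerical error from recovering the iterate by `-alpha_k * p_k` can be seen with a toy floating-point example (plain Python, no torch; the values are purely illustrative, not taken from the optimizer):

```python
# Toy demonstration of why stepping in place by +alpha*p and then
# subtracting alpha*p back off may not recover the original value
# exactly in floating point.
x = 1e-16            # stand-in for an original parameter value
alpha_p = 1.0        # stand-in for a comparatively large alpha_k * p_k

stepped = x + alpha_p          # take the trial step in place
recovered = stepped - alpha_p  # attempt to undo it by -alpha_k * p_k

print(recovered == x)  # False: the tiny component of x was rounded away
print(recovered)       # 0.0 -- a slightly perturbed iterate
```

The smaller the parameter is relative to the step, the more of it is lost to rounding, which is why a failed line search can still leave the parameters slightly changed.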
If you set `inplace` to False, the algorithm stores the original parameters (the current iterate) and reloads them at every line search iteration.
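The store-and-reload strategy described above can be sketched generically (plain Python lists standing in for the parameter tensors; this illustrates the idea, not the library's internals):

```python
import copy

# Generic sketch of "store and reload": keep a copy of the current
# iterate and restore it whenever a line-search trial step is rejected.
params = [1.0, 2.0, 3.0]       # stand-in for the model parameters
saved = copy.deepcopy(params)  # store the original iterate

# trial step that the line search rejects
for i in range(len(params)):
    params[i] += 0.5

# reload the stored parameters: exact recovery, no rounding error
params = copy.deepcopy(saved)
print(params == saved)  # True -- bitwise-identical to the original iterate
```

This costs extra memory proportional to the parameter count, which is the trade-off the in-place variant was designed to avoid.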
Using the example code for GP regression (updated to use the master branch of gpytorch) and manually setting `max_ls=1` and `lr=0.01` to force a line search failure, I get the following output:

Clearly the line search is failing and the lr is set to 0. But why are the parameters still changing?