U(1,1) is zero, singular U with GP Kernel Prior #1863
I'm trying to sample from a GP regressor with a prior distribution placed over the lengthscale of the RBF kernel, just as is done in this example (see code sample below): http://docs.pyro.ai/en/stable/contrib.gp.html#pyro.contrib.gp.models.model.GPModel
However, I'm repeatedly running into the following error during sampling:
Depending on what parameters I choose for the prior distribution, this can happen part-way through sampling or it can happen during the very first sampling attempt. It seems to happen pretty consistently, however, across a variety of different prior distribution specifications.
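For context (not part of the original report): the "singular U" error comes from the Cholesky factorization of the kernel covariance matrix, which fails whenever that matrix is singular or nearly so. A minimal standalone reproduction, independent of Pyro, using a rank-deficient matrix:

```python
import torch

# A rank-1 (singular) positive semidefinite matrix. A kernel matrix can
# degenerate like this when the sampled lengthscale makes all inputs
# look effectively identical to one another.
x = torch.tensor([[1.0], [2.0]])
K = x @ x.t()  # [[1, 2], [2, 4]], rank 1 -> singular

try:
    torch.linalg.cholesky(K)
except RuntimeError as err:  # factorization fails on a singular matrix
    print("Cholesky failed:", err)
```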
I've been looking around the documentation, but don't see any obvious workaround such as a way to add a small constant to the potential energy or to allow the sampler to handle exceptions gracefully. Is there something I'm missing?
```python
import torch
import pyro.contrib.gp as gp
import pyro.distributions as dist
from pyro.infer import mcmc
from sklearn.datasets import make_regression

X1, y1 = make_regression(n_samples=10, n_features=3, n_informative=3, noise=.2)
...  # standardize data, wrap in tensors

kernel = gp.kernels.RBF(input_dim=X1.shape[1])
kernel.set_prior("variance", dist.Uniform(torch.tensor(0.5), torch.tensor(1.5)))
kernel.set_prior("lengthscale", dist.Gamma(torch.tensor(7.5), torch.tensor(1.)))
gpr = gp.models.GPRegression(X1, y1, kernel, noise=torch.tensor(.01))

hmc_kernel = mcmc.HMC(gpr.model)
mcmc_run = mcmc.MCMC(hmc_kernel, num_samples=200)

posterior_ls_trace = []  # store lengthscale trace
ls_name = "GPR/RBF/lengthscale"
for trace, _ in mcmc_run._traces():
    # exception raised here while sampling, before entering the loop
    posterior_ls_trace.append(trace.nodes[ls_name]["value"].item())
```
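In the spirit of the "small constant" idea mentioned in the question, the standard workaround for this failure mode is the jitter trick: nudge the diagonal of the covariance before factorizing. The helper below is only an illustrative sketch, not Pyro API:

```python
import torch

def jittered_cholesky(K, jitter=1e-6):
    # Hypothetical helper: add a small constant to the diagonal so a
    # nearly singular covariance becomes strictly positive definite
    # before the Cholesky factorization is attempted.
    eye = torch.eye(K.shape[-1], dtype=K.dtype)
    return torch.linalg.cholesky(K + jitter * eye)
```

With this, the singular matrix `[[1, 2], [2, 4]]` factorizes cleanly, since `K + jitter * I` has a strictly positive determinant.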
@joeddav The error happens during the warmup phase (before entering the loop). It turns out that we have to force ...
Let me know if it solves your issue. :)
Another solution is to ask the PyTorch folks not to trigger a ValueError when Cholesky is applied to a singular matrix, but to return ...