
[Bug] ProductStructureKernel not working on inputs $(x_1, x_2)$ of different shapes #2433

Open
mngom2 opened this issue Nov 2, 2023 · 0 comments
mngom2 commented Nov 2, 2023

🐛 Bug

Hi!
I am evaluating a ProductStructureKernel module on inputs of different shapes and get the following error.

To reproduce

import torch
import gpytorch


use_cuda = torch.cuda.is_available()
device = torch.device('cuda' if use_cuda else 'cpu')
class ExactGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super(ExactGPModel, self).__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.ConstantMean()
        self.base_covar_module = gpytorch.kernels.RBFKernel()
        self.covar_module = gpytorch.kernels.ProductStructureKernel(
            gpytorch.kernels.ScaleKernel(
                gpytorch.kernels.GridInterpolationKernel(self.base_covar_module, grid_size=100, num_dims=1)
            ), num_dims=32
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultivariateNormal(mean_x, covar_x)


def train(train_x, train_y, num_training_iter):


    likelihood = gpytorch.likelihoods.GaussianLikelihood()
    model = ExactGPModel(train_x, train_y, likelihood)
    model.double()
    likelihood.double()
    model = model.to(device)
    likelihood = likelihood.to(device)

    model.train()
    likelihood.train()

    optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
    mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)

    for i in range(num_training_iter):
        optimizer.zero_grad()
        with gpytorch.settings.use_toeplitz(False), gpytorch.settings.max_root_decomposition_size(30):
            output = model(train_x)
            loss = mll(output, train_y)
            loss = -1. * loss
            loss.backward()
        optimizer.step()
    return model, likelihood


x1 = torch.rand(100,32).to(device)
train_y = torch.sum(torch.sin(x1).to(device), 1)
print(train_y.shape)
num_training_iter = 100
model, likelihood = train(x1, train_y, num_training_iter)
model.eval()
likelihood.eval()
K = model.covar_module
C11 =  K.forward(x1, x1) # x1.shape = (100, 32)
x2 = torch.rand(10, 32).to(device)  # moved to the same device as x1
C12 = K.forward(x1, x2) # x2.shape = (10, 32) -- this is where the error occurs
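As a sanity check, the product-structure covariance itself is well defined for rectangular inputs. Here is a minimal dense sketch (plain torch, not the gpytorch API; the fixed lengthscale `l` is a hypothetical stand-in for the trained hyperparameters) that computes the per-dimension RBF product and returns a 100 × 10 cross-covariance block without issue:

```python
import torch

# Dense product-structure RBF covariance:
#   k(x1, x2) = prod_d exp(-(x1_d - x2_d)^2 / (2 * l^2))
# The product over dimensions collapses into a single sum in the
# exponent, so rectangular (n x m) cross-blocks pose no problem.
def product_rbf_cross(x1, x2, l=1.0):
    diff = x1.unsqueeze(-2) - x2.unsqueeze(-3)           # (n, m, d)
    return torch.exp(-0.5 * (diff / l).pow(2).sum(-1))   # (n, m)

x1 = torch.rand(100, 32, dtype=torch.float64)
x2 = torch.rand(10, 32, dtype=torch.float64)
C12 = product_rbf_cross(x1, x2)
print(C12.shape)  # torch.Size([100, 10])
```

So the expected output is simply the dense (100, 10) block; only the lazy evaluation path trips over the shape.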

I get the following error:

**Stack trace/error message**

Traceback (most recent call last):
  File "test_structure_kernel.py", line 61, in <module>
    C12 = K.forward(x1, x2) # x2.shape = (10,32)
  File "~/gpytorch/kernels/product_structure_kernel.py", line 66, in forward
    res = res.prod(-2 if diag else -3)
  File "~/linear_operator/operators/_linear_operator.py", line 1881, in prod
    return self._prod_batch(dim)
  File "~/linear_operator/operators/_linear_operator.py", line 601, in _prod_batch
    roots = self.root_decomposition().root.to_dense()
  File "~/linear_operator/utils/memoize.py", line 59, in g
    return _add_to_cache(self, cache_name, method(self, *args, **kwargs), *args, kwargs_pkl=kwargs_pkl)
  File "~/linear_operator/operators/_linear_operator.py", line 1997, in root_decomposition
    raise RuntimeError(
RuntimeError: root_decomposition only operates on (batches of) square (symmetric) LinearOperators. Got a LazyEvaluatedKernelTensor of size torch.Size([32, 100, 10]).
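For context, the failing reduction (`res.prod(-2 if diag else -3)` in `product_structure_kernel.py`) is mathematically fine for rectangular blocks; per the traceback, the `RuntimeError` arises because `linear_operator`'s lazy `_prod_batch` routes through `root_decomposition`, which requires square operators. A dense sketch of the same reduction, with sizes mirroring the error message, succeeds:

```python
import torch

# Dense analogue of the reduction that fails lazily: per-dimension
# covariance blocks stacked along a batch dimension of size d = 32
# (torch.Size([32, 100, 10]) in the error message), then multiplied
# elementwise across that batch dimension.
d, n, m = 32, 100, 10
per_dim = torch.rand(d, n, m)
cross = per_dim.prod(dim=0)  # elementwise product over the 32 dims
print(cross.shape)  # torch.Size([100, 10])
```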

Expected Behavior

Output a nonsquare cross-covariance matrix of shape (100, 10), matching the shapes of the two inputs.

System information

  • GPyTorch Version 1.9.0
  • PyTorch Version 1.12.0a0+git664058f
mngom2 added the bug label on Nov 2, 2023
mngom2 changed the title from [Bug] to [Bug] ProductStructureKernel not working on inputs $(x_1, x_2)$ of different shapes on Nov 7, 2023