Query regarding custom non-linear kernel implementation. [Multioutput Variational Approximation] #1843

Closed
Ganesh1009 opened this issue Nov 30, 2021 · 2 comments

@Ganesh1009

Hi,
I am very new to the GPyTorch library and have been looking at the examples on custom kernel creation. My overall goal is to create a non-stationary kernel (with multiple lengthscales and independent multioutput features).

I understand that K' = K(x-x') * K(x+x') is a non-stationary kernel (I am using an RBF kernel for K). I created K(x+x') by simply negating x' (torch.neg(x')) and passing the result to the covar_dist() function provided by GPyTorch.
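A plain-NumPy sketch of this construction (not from the original post; toy 1-D inputs and a lengthscale of 1 are assumptions). Note that when the two factors share a single lengthscale, the product collapses algebraically to a rank-one kernel, which is symmetric and PSD but nearly singular:

```python
import numpy as np

# Sketch of K'(x, x') = K(x - x') * K(x + x') with an RBF base kernel
# exp(-0.5 * d^2 / ell^2).  Toy 1-D inputs; ell = 1 is an assumption.
ell = 1.0
x = np.linspace(-2.0, 2.0, 25).reshape(-1, 1)

d2_minus = (x - x.T) ** 2   # squared distance for x - x'
d2_plus = (x + x.T) ** 2    # squared "distance" for x + x' (x' negated)

K_prod = np.exp(-0.5 * d2_minus / ell**2) * np.exp(-0.5 * d2_plus / ell**2)

# With a SHARED lengthscale the product collapses, since
# (x - x')^2 + (x + x')^2 = 2 x^2 + 2 x'^2, giving
# K'(x, x') = exp(-(x^2 + x'^2) / ell^2) -- a rank-one (separable) kernel.
# It is symmetric and PSD, but nearly singular, which is one way to end up
# with "matrix not positive definite" jitter errors downstream.
f = np.exp(-x[:, 0] ** 2 / ell**2)
assert np.allclose(K_prod, np.outer(f, f))
```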

I want to implement something as below:

self.covariance_module = gpytorch.kernels.ScaleKernel(
    base_kernel=gpytorch.kernels.RBFKernel(
        batch_shape=torch.Size([config['Dy']]),
        ard_num_dims=config['Dy'],
    ),
    batch_shape=torch.Size([config['Dy']]),
)

# The definition below fails with:
# "Matrix not positive definite after repeatedly adding jitter up to 1.0e-06."
self.covariance_module_ = gpytorch.kernels.ScaleKernel(
    base_kernel=TestKernel(
        batch_shape=torch.Size([config['Dy']]),
        ard_num_dims=config['Dy'],
    ),
    batch_shape=torch.Size([config['Dy']]),
)

# The definition below works fine:
# self.covariance_module_ = TestKernel()

and in the forward() method I multiply the two kernels:

covariance_x = self.covariance_module(x) * self.covariance_module_(x)

My K(x+x') kernel function is as below:

# postprocess_rbf lives in gpytorch.kernels.rbf_kernel in the version used here.
from gpytorch.kernels.rbf_kernel import postprocess_rbf

class TestKernel(gpytorch.kernels.Kernel):

    has_lengthscale = True

    def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):
        x1_ = x1.div(self.lengthscale)
        x2_ = x2.div(self.lengthscale)

        # K(x + x'): negate x2 and reuse the squared-distance helper,
        # post-processed into exp(-0.5 * d^2) by postprocess_rbf.
        return self.covar_dist(
            x1_,
            torch.neg(x2_),
            square_dist=True,
            diag=diag,
            postprocess=True,
            dist_postprocess_func=postprocess_rbf,
            **params,
        )
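One point worth noting (a NumPy side check, not from the thread): the K(x + x') factor computed by TestKernel is not a valid kernel on its own, since exp(-0.5 * (x + x')^2) can produce Gram matrices with negative eigenvalues; it is only the product with the stationary K(x - x') factor that can restore positive semi-definiteness. A two-point counterexample:

```python
import numpy as np

# Gram matrix of k(x, x') = exp(-0.5 * (x + x')^2) at the points {a, -a}
# (illustrative values, not from the thread).
a = 1.0
pts = np.array([a, -a])
K_plus = np.exp(-0.5 * (pts[:, None] + pts[None, :]) ** 2)

# K_plus = [[exp(-2 a^2), 1], [1, exp(-2 a^2)]]; its determinant
# exp(-4 a^2) - 1 is negative, so one eigenvalue is negative:
# K(x + x') by itself is not positive semi-definite.
assert np.linalg.det(K_plus) < 0
assert np.linalg.eigvalsh(K_plus).min() < 0
```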

Thank you very much in advance for your reply.

@Ganesh1009 changed the title from "Query regarding custom kernel implementation." to "Query regarding custom non-linear kernel implementation. [Multioutput Variational Approximation]" on Dec 10, 2021
@gpleiss
Member

gpleiss commented Dec 21, 2021

@Ganesh1009 sorry for the slow reply!

You should be able to accomplish this with the following:

class TestKernel(gpytorch.kernels.RBFKernel):
    # ...

    def forward(self, x1, x2, diag=False, last_dim_is_batch=False, **params):
        # K(x - x') * K(x + x'), each factor an ordinary RBF evaluation.
        return (
            super().forward(x1, x2, diag=diag, last_dim_is_batch=last_dim_is_batch)
            * super().forward(x1, -x2, diag=diag, last_dim_is_batch=last_dim_is_batch)
        )

@Ganesh1009
Author

@gpleiss,
thank you very much for the reply.
