
Kernel class for arc kernel #1027

Merged
merged 34 commits into from May 20, 2020

Conversation

@BCJuan (Contributor) commented Jan 17, 2020:

Hi,

I have finally cleaned up my implementation, so I am now able to make the PR.

I have reproduced the [Exact GPs](https://gpytorch.readthedocs.io/en/latest/examples/01_Exact_GPs/Simple_GP_Regression.html) example with the kernel and it seems to work fine. I can upload the notebooks, or something similar, as you wish.

I am only worried about the kernel size definition. Right now it is simply a vector of length equal to the number of dimensions, but maybe it would have to be something like

        self.register_parameter(
            name="raw_angle",
            parameter=torch.nn.Parameter(torch.zeros(*self.batch_shape, 1, self.ard_num_dims))
            )
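For what it is worth, the reason a trailing shape of `(*batch_shape, 1, ard_num_dims)` works is standard trailing-dimension broadcasting. A plain-Python sketch (`broadcastable` is a hypothetical helper, not a GPyTorch function):

```python
def broadcastable(param_shape, input_shape):
    # NumPy/PyTorch broadcasting rule: align trailing dimensions; each
    # pair must match or one of the two must be 1.
    for p, i in zip(reversed(param_shape), reversed(input_shape)):
        if p != 1 and i != 1 and p != i:
            return False
    return True

# A raw_angle of shape (*batch_shape, 1, ard_num_dims) broadcasts against
# inputs of shape (*batch_shape, n, d) whenever ard_num_dims == d:
assert broadcastable((1, 4), (10, 4))        # non-batch: n=10 points, d=4
assert broadcastable((2, 1, 4), (2, 10, 4))  # batch_shape == (2,)
assert not broadcastable((1, 3), (10, 4))    # ARD size mismatch
```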

I have tried to use it in the [BoTorch with Ax](https://botorch.org/tutorials/custom_botorch_model_in_ax) tutorial, but there are numerical stability problems.

I will also post this info in issue #1023.

I hope I have done this process in an appropriate manner.

Thanks!

@jacobrgardner (Member) left a comment:
Looks good to me other than a few pieces of feedback (see above).

gpytorch/kernels/arc_kernel.py

    def forward(self, x1, x2, diag=False, **params):
        x1_, x2_ = self.embedding(x1), self.embedding(x2)
        return self.base_kernel(x1_, x2_)
Member commented:

You might need to pass diag onwards to the base kernel. Something like:

Suggested change:

    - return self.base_kernel(x1_, x2_)
    + return self.base_kernel(x1_, x2_, diag=diag)
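To illustrate why forwarding `diag` matters, here is a toy stand-in in plain Python (hypothetical names, not the actual GPyTorch API): with `diag=True` the base kernel returns an n-vector rather than an n×n matrix, so a wrapper that drops the flag silently changes the output shape.

```python
import math

def toy_base_kernel(x1, x2, diag=False):
    # Toy RBF-like stand-in for a base kernel (1-D inputs, unit lengthscale).
    def k(a, b):
        return math.exp(-0.5 * (a - b) ** 2)
    if diag:
        # diag=True: only the pointwise covariances k(x1[i], x2[i]) -- an n-vector.
        return [k(a, b) for a, b in zip(x1, x2)]
    # diag=False: the full n-by-n Gram matrix.
    return [[k(a, b) for b in x2] for a in x1]

def arc_forward(x1, x2, embedding, diag=False):
    # Mirrors the suggested change: the diag flag is forwarded.
    x1_ = [embedding(a) for a in x1]
    x2_ = [embedding(b) for b in x2]
    return toy_base_kernel(x1_, x2_, diag=diag)

# With an identity embedding and equal inputs, the diagonal is all ones;
# dropping diag here would return a 2x2 matrix instead of a 2-vector.
diag_out = arc_forward([0.0, 1.0], [0.0, 1.0], embedding=lambda v: v, diag=True)
```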

You also may need to verify batch support. If you'd like, you can extend our base kernel test case here:

    class BaseKernelTestCase(object):

If you fill in the two abstract methods, for example like here:

    class TestRBFKernel(unittest.TestCase, BaseKernelTestCase):
        def create_kernel_no_ard(self, **kwargs):
            return RBFKernel(**kwargs)

        def create_kernel_ard(self, num_dims, **kwargs):
            return RBFKernel(ard_num_dims=num_dims, **kwargs)

then the base kernel test case exercises essentially all the settings your kernel might get called with across all sorts of GPyTorch models. If you pass all of those tests, you will know the kernel is in pretty good shape.

@BCJuan (Contributor Author) commented:

I am going to try to build the test cases. As soon as I can, I will upload them. Thank you very much :)

@BCJuan (Contributor Author) commented:

Hi, the kernel now passes the tests. There were problems with adding self.ard_num_dims as the last dimension of the registered tensors for radius and angle. I have added a conditional to solve this, but in theory the arc kernel should always be used with ard_num_dims; at least that is what makes sense to me. By the way, I have also included the test, but then the checks fail because they cannot find the arc kernel. Should I add the test as well, or only the kernel?
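The conditional described above might look roughly like this (a plain-Python sketch with hypothetical names, not the PR's actual code): when `ard_num_dims` is not set, fall back to a single shared parameter so the trailing dimension is still well defined.

```python
def angle_param_shape(batch_shape, ard_num_dims):
    # With ARD, one raw angle per input dimension; without it (None),
    # fall back to a single shared raw angle.
    last_dim = 1 if ard_num_dims is None else ard_num_dims
    return (*batch_shape, 1, last_dim)

assert angle_param_shape((), None) == (1, 1)
assert angle_param_shape((2,), 6) == (2, 1, 6)
```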

Member commented:

Hi @BCJuan,

You need to add something like from .arc_kernel import ArcKernel to gpytorch/kernels/__init__.py. This will enable from gpytorch.kernels import ArcKernel to work, and will also make your test pass.

@BCJuan (Contributor Author) commented:

Ah, I am still wondering how I could not see that. My apologies, and thank you for the clarification. Fixed and added.

@BCJuan (Contributor Author) commented Feb 10, 2020:

I am thinking that, to further check the functionality of the arc kernel, I could do something like what is found in test_rbf_kernel.py in functions such as test_ard or test_ard_batch, i.e. checking an actual computation against the kernel implementation. As in test_rbf_kernel, they would go in the test for the arc kernel. What do you think?

@BCJuan (Contributor Author) left a comment:

The kernel passes the standard tests. What do I have to do for the pull request to be accepted?

Thank you for your patience and I do apologize for all the mess with the commits and all.

@jacobrgardner (Member) left a comment:

@BCJuan This looks good to me now!

@Balandat (Collaborator) commented:

Should we remove this kind of metadata for consistency w/ the rest of the codebase?

    # -*- coding: utf-8 -*-
    """
    Created on Tue Jan  7 11:13:37 2020
    @author: blue
    """

@BCJuan BCJuan force-pushed the arc_kernel branch 3 times, most recently from 7eec2b1 to b4bbe26 Compare February 24, 2020 14:46
@BCJuan (Contributor Author) commented Feb 24, 2020:

Hi,

I apologize if this is a noob question, but I do not know how to proceed. I am trying to merge master into arc_kernel and delete the metadata that @Balandat pointed out, but the build fails on Travis in the pytorch=MASTER section. The error is related to test_multitask_gaussian_likelihood.py and related files. Is there something I am missing or doing wrong? Thanks, and I apologize for the mess of resets.

@Balandat (Collaborator) commented:

@BCJuan This is likely unrelated to your changes and due to a bug in pytorch (can't see the exact failure, but we ran into this too): pytorch/pytorch#33651. This should be fixed relatively soon.

@jacobrgardner (Member) commented:

Those tests should pass now as of #1058

@BCJuan (Contributor Author) commented May 14, 2020:

I have added the proper delta functions that select the dimensions and solved the interval problem for the radius parameter. The kernel should now be fully usable. The default delta selects all dimensions, even those that should not appear in the space configuration.

For example, if in a neural network we have four layers but we are modelling the number of neurons in a total of six layers, the parameters for the number of neurons in the fifth and sixth layers should not be modelled. With the default delta function they are; if the user supplies a proper delta, only the relevant dimensions are. The delta is a mask for the input.

From the original paper:

[figure: arc kernel illustration]
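The role of the delta mask can be sketched in plain Python (hypothetical names; the actual kernel operates on tensors): inactive dimensions of a configuration are masked out before the embedding is computed.

```python
def apply_delta(x, delta):
    # delta is a 0/1 mask over input dimensions: entries that do not
    # exist in the current configuration are zeroed out before the
    # embedding is computed.
    return [xi * di for xi, di in zip(x, delta)]

# Neurons modelled for six layers, but the network only has four, so the
# fifth and sixth entries are masked out:
x = [32, 64, 128, 256, 512, 1024]
delta = [1, 1, 1, 1, 0, 0]
masked = apply_delta(x, delta)  # -> [32, 64, 128, 256, 0, 0]

# The default delta is all ones, i.e. every dimension is kept:
default = apply_delta(x, [1] * len(x))
```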

@gpleiss (Member) commented May 19, 2020:

@BCJuan - I just made sure this will work with our docs. Will merge later today :)

@gpleiss gpleiss merged commit 590cd3b into cornellius-gp:master May 20, 2020
@gpleiss (Member) commented May 20, 2020:

Thanks so much @BCJuan for the PR! Glad we finally got it in :D

@BCJuan (Contributor Author) commented May 20, 2020:

@gpleiss My pleasure. By the way, if there is anything you want to implement but do not have time for, or cannot do for any other reason, I would gladly collaborate. I have some free time that I can devote to these tasks. Of course, depending on the task I may or may not be able to do it, or it may take me more time than expected, but in that case I would tell you.


4 participants