
Initial commit of pivoted Cholesky algorithm from GPyTorch #63

Merged
merged 24 commits into pymc-devs:main on Oct 24, 2022

Conversation

kunalghosh
Contributor

@kunalghosh kunalghosh commented Aug 2, 2022

What is this PR about?

Fast exact Gaussian processes (Gardner et al., 2018) rely on a modified batch conjugate gradient algorithm, which needs a function that returns the preconditioning matrix for a given kernel. This PR implements the function that computes the preconditioning matrix.

This produces the same results as gpytorch.pivoted_cholesky().

Test kernel matrices can be generated with the following code, which produces random symmetric positive-definite matrices:

import torch 
import gpytorch
import numpy as np

# Test PSD Matrix

N = 10
rank = 5
np.random.seed(1234) # nans with seed 1234
K = np.random.randn(N, N)
K = K @ K.T + N * np.eye(N)
K_torch = torch.from_numpy(K)

Now passing the same kernel matrix to the two functions yields the same output:

import pymc_experimental as pymx

L_gpt = gpytorch.pivoted_cholesky(K_torch, rank=rank, error_tol=1e-3)
L_np, _  = pymx.utils.pivoted_cholesky(K, max_iter=rank, error_tol=1e-3)
assert np.allclose(L_gpt, L_np.T), "BUG: The two Cholesky decompositions are not the same!"
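For context, the algorithm being ported can be sketched in plain NumPy. This is a minimal illustration, not the PR's implementation: it greedily picks the largest residual diagonal entry as the next pivot, and uses a simplified stopping rule (a single-pivot tolerance rather than GPyTorch's check on the residual trace).

```python
import numpy as np


def pivoted_cholesky_sketch(K, max_iter, error_tol=1e-6):
    """Greedy low-rank approximation K ~= R @ R.T via diagonal pivoting."""
    K = np.asarray(K, dtype=float)
    n = K.shape[0]
    d = np.diag(K).copy()        # residual diagonal (Schur complement diagonal)
    perm = np.arange(n)          # current permutation of indices
    L = np.zeros((max_iter, n))  # row m holds the m-th rank-one factor
    for m in range(max_iter):
        # Pivot: index with the largest remaining residual diagonal entry.
        i = m + np.argmax(d[perm[m:]])
        perm[[m, i]] = perm[[i, m]]
        pivot = d[perm[m]]
        if pivot < error_tol:    # remaining error is small enough; stop early
            return L[:m].T, perm
        L[m, perm[m]] = np.sqrt(pivot)
        # Off-pivot entries of the current Schur complement row.
        row = K[perm[m], perm[m + 1:]]
        if m > 0:
            row = row - L[:m, perm[m]] @ L[:m, perm[m + 1:]]
        L[m, perm[m + 1:]] = row / L[m, perm[m]]
        # Update the residual diagonal.
        d[perm[m + 1:]] -= L[m, perm[m + 1:]] ** 2
    return L.T, perm
```

Run to full rank on a positive-definite matrix, this reconstructs K exactly (up to floating-point error); truncated at a smaller `max_iter`, it returns a low-rank factor suitable for preconditioning.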

@kunalghosh kunalghosh marked this pull request as draft August 2, 2022 15:29
@kunalghosh
Contributor Author

@bwengals I have copied this PR over from pymc.

Comment on lines 2 to 4
import torch

from gpytorch.utils.permutation import apply_permutation
Contributor

I think it'd be best to make torch and gpytorch optional imports.

try:
    import torch
    import gpytorch
except ImportError as e:
    raise ImportError("message")

That way they don't need to be dependencies for someone who wants to use pymc_experimental for something else.

Member

Agreed

Member

I've looked into the apply_permutation function; I think it is better to avoid the dependency and implement the same function here.
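In NumPy the helper reduces to fancy indexing. A minimal sketch of a drop-in `apply_permutation`, assuming it only needs to reorder rows and columns (the symmetric-pivoting case used here):

```python
import numpy as np


def apply_permutation(mat, left_perm, right_perm):
    # Row i of the result is row left_perm[i] of mat; column j is
    # column right_perm[j]. np.ix_ builds the open mesh for both at once.
    return np.asarray(mat)[np.ix_(left_perm, right_perm)]
```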

Contributor Author

@ferrine Yes, I plan to use apply_permutation from gpytorch.utils.permutation temporarily and implement the function here once I have GP inference working on the GPU.

I have now added the try...except checks. Is it OK if I check this in? I have created an issue for the same: #85

Contributor

100% agree @ferrine. Longer term that'll have to be implemented here, but this PR is more at the proof-of-concept level, so there's lots of time to go back over those details later. With the try...except it shouldn't get in anyone's way.

@bwengals
Contributor

bwengals commented Aug 6, 2022

Same comments as the other PR. Ready to be taken off of draft?

@@ -0,0 +1,24 @@
# try:
Member

Why did you comment out the test?

Contributor Author

This was triggering the import of GPyTorch and Torch, which would cause the tests to fail. I thought it would be best to comment out the tests until I remove my code's dependency on GPyTorch and Torch.
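An alternative to commenting the tests out is letting pytest skip them when the optional dependencies are missing. A sketch (test name hypothetical):

```python
import pytest


def test_pivoted_cholesky_matches_gpytorch():
    # importorskip marks the test as skipped (rather than errored)
    # when torch or gpytorch is not installed.
    torch = pytest.importorskip("torch")
    gpytorch = pytest.importorskip("gpytorch")
    # ... the comparison against gpytorch.pivoted_cholesky would go here ...
```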

import torch
from gpytorch.utils.permutation import apply_permutation
except ImportError as e:
# print(
Member

Instead of a print, use raise ImportError(...).

Contributor

Definitely. Also consider raise ImportError(...) from e: https://stackoverflow.com/questions/24752395/python-raise-from-usage
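For example, a small helper (name hypothetical) that re-raises with the original ImportError preserved as `__cause__`:

```python
def require_optional(name):
    # Re-raise with a clearer message; `from e` keeps the original
    # exception attached as __cause__ instead of discarding its traceback.
    try:
        return __import__(name)
    except ImportError as e:
        raise ImportError(f"{name} is required for pivoted_cholesky") from e
```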

pp = lambda x: np.array2string(x, precision=4, floatmode="fixed")


def pivoted_cholesky(mat: np.matrix, error_tol=1e-6, max_iter=np.Infinity):
Member

me being annoying: np.inf

Contributor

@bwengals bwengals left a comment

Looks great @kunalghosh! Feel free to hit the green button whenever you're ready


@bwengals
Contributor

bwengals commented Oct 1, 2022

Sorry, I didn't notice the conflict on pymc_experimental/utils/__init__.py. You'll have to resolve that before merging.

@bwengals bwengals merged commit 307913d into pymc-devs:main Oct 24, 2022
@bwengals
Contributor

Nice job @kunalghosh, sorry for the delay on my side. This is good progress.
