
Add base_conditional_with_lm function #1528

Merged
merged 7 commits into GPflow:develop on Aug 3, 2020

Conversation


@nfergu nfergu commented Jul 9, 2020

PR content:

Title

Allow Lm to be passed into utility functions for computation of conditionals.

Description

This PR adds a new utility function, base_conditional_with_lm, in conditionals/util.py, which accepts Lm (the Cholesky factor of Kmm) instead of Kmm. This improves performance by allowing a pre-computed Lm to be passed in, for example at prediction time, instead of recomputing it on every call.

This is (hopefully) an improved version of #1514; it uses two separate functions instead of trying to handle both Kmm and Lm in a single function.
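The two-function split can be sketched roughly as follows. This is a minimal NumPy illustration of the idea (whitened case with q_sqrt=None), not the actual GPflow implementation; the function names mirror the PR, but the bodies are simplified:

```python
import numpy as np

def base_conditional_with_lm(Kmn, Lm, Knn_diag, f):
    """Sketch: conditional mean/variance given a pre-computed Cholesky
    factor Lm of Kmm (whitened case, q_sqrt=None)."""
    A = np.linalg.solve(Lm, Kmn)              # Lm^{-1} Kmn, shape [M, N]
    fmean = A.T @ f                           # [N, R]
    fvar = Knn_diag - np.sum(A ** 2, axis=0)  # [N]
    return fmean, fvar

def base_conditional(Kmn, Kmm, Knn_diag, f):
    """Existing entry point: factorize Kmm, then delegate."""
    Lm = np.linalg.cholesky(Kmm)
    return base_conditional_with_lm(Kmn, Lm, Knn_diag, f)
```

Callers that already hold Lm (e.g. cached at training time) call base_conditional_with_lm directly and skip the O(M^3) Cholesky factorization on every prediction.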

Minimal working example

Here is a rough example of how I would use the new base_conditional_with_lm function in my model. This improves prediction time by about 10x.

# Imports shown for completeness; exact module paths may differ slightly
# between GPflow versions.
import tensorflow as tf

from gpflow import covariances
from gpflow.conditionals.util import base_conditional_with_lm, expand_independent_outputs
from gpflow.config import default_jitter
from gpflow.models import SVGP
from gpflow.models.model import InputData, MeanAndVariance


class MySVGP(SVGP):

    def __init__(self, kernel, likelihood, inducing_variable, *, mean_function=None,
                 num_latent_gps: int = 1, q_diag: bool = False, q_mu=None, q_sqrt=None,
                 whiten: bool = True, num_data=None):
        super().__init__(kernel, likelihood, inducing_variable, mean_function=mean_function,
                         num_latent_gps=num_latent_gps, q_diag=q_diag, q_mu=q_mu, q_sqrt=q_sqrt,
                         whiten=whiten, num_data=num_data)
        self._Kmm_for_prediction = None  # [M, M]
        self._Lm_for_prediction = None  # [M, M]

    def train(self):
        # Do training here...
        # Cache Kmm and its Cholesky factor so prediction does not recompute them.
        self._Kmm_for_prediction = covariances.Kuu(self.inducing_variable,
                                                   self.kernel, jitter=default_jitter())
        self._Lm_for_prediction = tf.linalg.cholesky(self._Kmm_for_prediction)

    def predict_f(self, Xnew: InputData, full_cov=False, full_output_cov=False) -> MeanAndVariance:
        if self._Kmm_for_prediction is None or self._Lm_for_prediction is None:
            raise ValueError("Kmm or Lm is not set. Call the train method to train the model first.")
        mu, var = self.compute_conditional(Xnew, full_cov, full_output_cov)
        return mu + self.mean_function(Xnew), var

    def compute_conditional(self, Xnew, full_cov, full_output_cov):
        Kmn = covariances.Kuf(self.inducing_variable, self.kernel, Xnew)  # [M, N]
        Knn = self.kernel(Xnew, full_cov=full_cov)
        fmean, fvar = base_conditional_with_lm(
            Kmn, self._Lm_for_prediction, Knn, self.q_mu, full_cov=full_cov,
            q_sqrt=self.q_sqrt, white=self.whiten
        )  # [N, R],  [R, N, N] or [N, R]
        return fmean, expand_independent_outputs(fvar, full_cov, full_output_cov)

@nfergu nfergu changed the title Neil/conditional util new Add base_conditional_with_lm function Jul 9, 2020

nfergu commented Jul 9, 2020

@st-- Based on your feedback on #1514, I have created a new PR to try to address this.

Instead of the Kmm_is_cholesky flag you suggested, I thought it might be cleaner to have two separate functions, one of which calls the other. What do you think of this approach?

If you think this is a good approach then I can add some tests and hopefully we can get this merged.


codecov bot commented Jul 9, 2020

Codecov Report

Merging #1528 into develop will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff            @@
##           develop    #1528   +/-   ##
========================================
  Coverage    95.30%   95.31%           
========================================
  Files           85       85           
  Lines         3815     3817    +2     
========================================
+ Hits          3636     3638    +2     
  Misses         179      179           
Impacted Files Coverage Δ
gpflow/conditionals/util.py 76.36% <100.00%> (+0.21%) ⬆️


@st-- st-- self-requested a review July 16, 2020 10:17

nfergu commented Jul 22, 2020

@st-- Any thoughts on this approach? We'd like to incorporate this GPflow change into our models fairly soon, if possible.


st-- commented Jul 30, 2020

I'd be happy to merge this but unfortunately codecov-project seems to be having issues... :S


nfergu commented Aug 3, 2020

@st-- Thanks. Do you know if there's any way to get codecov to try again? I could try pushing an insignificant change to this PR to trigger it, but there might be a better way?

@st-- st-- merged commit 20c5ee4 into GPflow:develop Aug 3, 2020
vdutor added a commit that referenced this pull request Aug 27, 2020
* Update pull request template (#1510)

Clarify template to make it easier for contributors to fill in relevant information.

* Temporary workaround for tensorflow_probability dependency issue (#1522)

* pin cloudpickle==1.3.0 as temporary workaround for tensorflow/probability#991 to unblock our build (to be reverted once fixed upstream)

* Update readme with new project using GPflow (#1530)

* fix bug in varying_noise notebook (#1526)

* Fix formatting in docs (intro.md) and restore link removed by #1498 (#1520)

* pin tensorflow<2.3 tensorflow-probability<0.11 (#1537)

* Quadrature Refactoring (#1505)

* WIP: quadrature refactoring

* Removing old ndiagquad code

* deleted test code

* formatting and type-hint

* merge modules

* black formatting

* formatting

* solving failing tests

* fixing failing tests

* fixes

* adapting tests for new syntax, keeping numerical behavior

* black formatting

* remove printf

* changed code for compiled tf compatibility

* black

* restored to original version

* undoing changes

* renaming

* renaming

* renaming

* reshape kwargs

* quadrature along axis=-2, simplified broadcasting

* black

* docs

* docs

* helper function

* docstrings and typing

* added new and old quadrature equivalence tests

* black

* Removing comments

Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* Typo

Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* notation

Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* reshape_Z_dZ return docstring fix

* FIX: quad_old computed with the ndiagquad_old

Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* more readable implementation

Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* tf.ensure_shape added

* removed ndiagquad

* removed ndiagquad

* Revert "removed ndiagquad"

This reverts commit 7bb0e9f.

* FIX: shape checking of dZ

* Revert "removed ndiagquad"

This reverts commit 8e23524.

Co-authored-by: Gustavo Carvalho <gustavo.carvalho@delfosim.com>
Co-authored-by: ST John <st@prowler.io>
Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* Add base_conditional_with_lm function (#1528)

* Added base_conditional_with_lm function, which accepts Lm instead of Kmm

Co-authored-by: Neil Ferguson <neil@prowler.io>
Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>
Co-authored-by: st-- <st--@users.noreply.github.com>

* Fixed separate_independent_conditional to correctly handle q_sqrt=None. (#1533)

* Fixed separate_independent_conditional to correctly handle q_sqrt=None.

Co-authored-by: Aidan Scannell <scannell.aidan@gmail.com>
Co-authored-by: st-- <st--@users.noreply.github.com>

* Bump version numbers to 2.1.0. (#1544)

* Re-introduce pytest-xdist (#1541)

Enables pytest-xdist for locally running tests (`make test`) on multiple cores in parallel.

* check dependency versions are valid on CI (#1536)

* Update to not use custom image (#1545)

* Update to not use custom image

* Add test requirements

* Update parameter to be savable (#1518)

* Fix for quadrature failure mode when autograph was set to False (#1548)

* Fix and test

* Change shape of quadrature tensors for better broadcasting (#1542)

* using the first dimension to hold the quadrature summation

* adapting ndiagquad wrapper

* Changed bf for bX in docstrings

Co-authored-by: Gustavo Carvalho <gustavo.carvalho@delfosim.com>
Co-authored-by: st-- <st--@users.noreply.github.com>
Co-authored-by: Vincent Dutordoir <dutordoirv@gmail.com>

* Update min TFP supported version to 0.10 (#1551)

* Broadcasting constant and zero mean function (#1550)

* Broadcasting constant and zero mean function

* Use rank instead of ndim

Co-authored-by: st-- <st--@users.noreply.github.com>
Co-authored-by: joelberkeley-pio <joel.berkeley@prowler.io>
Co-authored-by: gustavocmv <47801305+gustavocmv@users.noreply.github.com>
Co-authored-by: Gustavo Carvalho <gustavo.carvalho@delfosim.com>
Co-authored-by: ST John <st@prowler.io>
Co-authored-by: Neil Ferguson <nfergu@users.noreply.github.com>
Co-authored-by: Neil Ferguson <neil@prowler.io>
Co-authored-by: Aidan Scannell <as12528@my.bristol.ac.uk>
Co-authored-by: Aidan Scannell <scannell.aidan@gmail.com>
Co-authored-by: Sandeep Tailor <s.tailor@insysion.net>
Co-authored-by: Artem Artemev <art.art.v@gmail.com>
@st-- st-- mentioned this pull request Sep 14, 2020