In the papers to be released on the GCM application of CES, we use an SVD to transform the space in which we learn with the GP. I suggest we implement this here.
Primarily, this allows the learning of noise to be exact and the regularization to be precise: we transform into a space where the observational noise is not only diagonal but can be normalized to be exactly the identity. The built-in `alpha` or `observational_noise` parameters of the GP are then more naturally chosen.
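As a sketch of the whitening idea (names here are illustrative, not the package API): given an observational noise covariance `Gamma`, its SVD gives a map under which the noise covariance becomes exactly the identity.

```python
import numpy as np

rng = np.random.default_rng(0)

# A non-diagonal observational noise covariance Gamma (symmetric positive definite).
A = rng.normal(size=(3, 3))
Gamma = A @ A.T + 3 * np.eye(3)

# SVD of the symmetric covariance: Gamma = V diag(s) V^T.
V, s, _ = np.linalg.svd(Gamma)

# Whitening map: y -> diag(s)^{-1/2} V^T y.
# In the transformed space the noise covariance is the identity,
# so a unit observational-noise setting in the GP is exact.
T = np.diag(s ** -0.5) @ V.T
Gamma_white = T @ Gamma @ T.T

print(np.allclose(Gamma_white, np.eye(3)))  # True
```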
This will involve, firstly, an SVD implementation within the `GPEmulator` class, and secondly a modified `MCMC` class that works in the transformed variables.
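A minimal sketch of how the two pieces could fit together, assuming a hypothetical forward/inverse transform pair (not the package's actual functions): the emulator trains on whitened data, and MCMC output is mapped back to the original space.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observational noise covariance and its SVD (Gamma = V diag(s) V^T).
A = rng.normal(size=(3, 3))
Gamma = A @ A.T + np.eye(3)
V, s, _ = np.linalg.svd(Gamma)

def to_svd_space(y):
    # Whitened coordinates used for GP training and MCMC:
    # the noise covariance becomes the identity here.
    return (np.diag(s ** -0.5) @ V.T) @ y

def from_svd_space(y_white):
    # Inverse map, applied e.g. to samples or predictions
    # to report results in the untransformed space.
    return (V @ np.diag(s ** 0.5)) @ y_white

y = rng.normal(size=3)
print(np.allclose(from_svd_space(to_svd_space(y)), y))  # True
```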
63: WIP: add SVD functionality r=odunbar a=odunbar
Resolves #57. We add the ability for the Gaussian process to learn a non-diagonal covariance matrix for the observational noise by applying an SVD.
Resolves #58. We add functionality to toggle whether we wish to learn the observational noise, and add (mathematically correct) default values when we do not.
We also add two examples:
- [x] Gaussian process plot. We train a GP on training points in 2D space and plot the mean and variance (compared with the underlying model and observational noise) in the untransformed space.
- [x] Noise learning test. We train a GP with known noise, both with `learn_noise = false` and with `learn_noise = true`, and compare the learned `WhiteKernel` parameters to the true parameters.
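A minimal version of the noise-learning check, sketched here with scikit-learn in Python (the data and function are illustrative; `learn_noise = true` corresponds to including a `WhiteKernel` term whose `noise_level` is optimized during fitting):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(42)
true_noise_var = 0.05

# Training data: noisy observations of a smooth 1D function.
X = np.linspace(0, 5, 80)[:, None]
y = np.sin(X).ravel() + rng.normal(scale=np.sqrt(true_noise_var), size=80)

# "learn_noise = true": the WhiteKernel's noise_level is a fitted hyperparameter;
# alpha is left at its tiny jitter default since the noise lives in the kernel.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0),
    alpha=1e-10,
)
gp.fit(X, y)

learned = gp.kernel_.k2.noise_level
print(learned)  # should land close to true_noise_var
```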
Coauthored with @bielim
Co-authored-by: odunbar <odunbar@caltech.edu>
Co-authored-by: Melanie Bieli <melanie.bieli@bluewin.ch>
Co-authored-by: bielim <bielim@users.noreply.github.com>
Co-authored-by: odunbar <47412152+odunbar@users.noreply.github.com>