GP bug for noise_learn = false underprediction of uncertainty #143
Conversation
Codecov Report
@@ Coverage Diff @@
## master #143 +/- ##
==========================================
+ Coverage 88.15% 88.71% +0.55%
==========================================
Files 4 4
Lines 380 381 +1
==========================================
+ Hits 335 338 +3
+ Misses 45 43 -2
Continue to review full report at Codecov.
bors try

try
Build succeeded:

Looks good with SKLJL.
The predicted covariance (when returned with [...]

So in theory the [...] does it return something similar with [...]? I did try to do something like this in the unit tests (see the added file).
Ignore what I said: from the GaussianProcesses.jl source code I found this:

```julia
function predict_y(gp::GPE, x::AbstractMatrix; full_cov::Bool=false)
    μ, σ2 = predict_f(gp, x; full_cov=full_cov)
    if full_cov
        npred = size(x, 2)
        return μ, σ2 + ScalMat(npred, exp(2*gp.logNoise))
    else
        return μ, σ2 .+ exp(2*gp.logNoise)
    end
end
```

Thanks! I will update accordingly.
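For context, the quoted source shows that `predict_y` is just `predict_f` plus the observational noise variance `exp(2 * logNoise)`. A minimal sketch of that relationship (not part of this PR, and assuming the GaussianProcesses.jl version quoted above, where `gp.logNoise` is a plain number; the toy data here is made up):

```julia
using GaussianProcesses

# Toy 1-D training data, stored as a 1 × n matrix as in the quoted signature
x = reshape(collect(0.0:0.5:5.0), 1, :)
y = sin.(vec(x))

# GP with a squared-exponential kernel and fixed logNoise = log(0.1)
gp = GP(x, y, MeanZero(), SE(0.0, 0.0), log(0.1))

xtest = reshape([0.25, 1.75, 3.25], 1, :)
μf, σ2f = predict_f(gp, xtest)   # latent-function prediction
μy, σ2y = predict_y(gp, xtest)   # observation prediction

μf ≈ μy                          # the means coincide
σ2y ≈ σ2f .+ exp(2 * gp.logNoise)  # variances differ by the noise term
```

This is the discrepancy the bug report is about: comparing a latent-variance prediction from one backend against an observation-variance prediction from the other underpredicts the uncertainty by exactly the noise term.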
Ok great! I forgot to say in my previous comment that I was referring to the Lorenz example.
OK
Great, I get virtually the same results for both GPJL and SKLJL now, thanks!
bors r+

Build succeeded:
Purpose

Fix the `noise_learn = false` bug greatly underpredicting uncertainty. In the Lorenz example, the following two (with `noise_learn = false` and `true`) should be similar:

![](https://user-images.githubusercontent.com/47412152/170802229-877ba200-78c1-40bd-b5fd-a07e8629b4eb.png)
![](https://user-images.githubusercontent.com/47412152/170802234-6bb83e79-5738-4b2e-b72e-004e10163b84.png)

co-author with @lm2612

In this PR

- `alg_reg_noise` optional argument to GP to set the regularization when `noise_learn = true` (removing `magic_number`)
- `SKLJL()` option
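The `alg_reg_noise` argument above concerns a small regularization noise ("jitter") added to the kernel matrix diagonal. A generic, hypothetical illustration of why such a term is needed (the name `alg_reg_noise` is borrowed from the PR description; none of this is the repository's code):

```julia
using LinearAlgebra

# A squared-exponential kernel, a common GP covariance choice
k(a, b; ℓ = 1.0) = exp(-(a - b)^2 / (2ℓ^2))

x = [0.0, 1e-9, 1.0]               # two inputs are nearly identical...
K = [k(a, b) for a in x, b in x]   # ...so the kernel matrix is rank deficient

isposdef(K)                        # false: Cholesky factorization would fail

alg_reg_noise = 1e-6               # small regularization variance
K_reg = K + alg_reg_noise * I      # shift the diagonal

isposdef(K_reg)                    # true: factorization is now stable
```

Exposing this as an explicit keyword (rather than a hard-coded `magic_number`) lets the user trade off numerical stability against artificially inflated predictive variance.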