Dear Authors,

I am studying the Gaussian process used in the LCB (Lower Confidence Bound) criterion, taking bo_branin.cpp as an example. I found that the predicted variance is consistently below 1, which can be much smaller than the predicted mean even in the early training steps, for example with a Gaussian process trained on only 2 samples. In my own example the predicted mean is around 170 but the variance is only 0.2 after 2 samples, so the interval 170 ± 0.2 cannot contain the true value.
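For reference, this is my understanding of how LCB combines the two quantities; it is a minimal standalone sketch, not bayesopt's code, and `mean`, `stddev`, and `beta` are just illustrative names:

```cpp
#include <cmath>
#include <cstdio>

// Sketch of the LCB criterion: LCB(x) = mu(x) - beta * sigma(x), minimized over x.
// If sigma(x) is tiny compared to mu(x), the criterion is dominated by the mean
// and the exploration term has almost no effect.
double lower_confidence_bound(double mean, double stddev, double beta) {
    return mean - beta * stddev;
}

int main() {
    // Numbers from my example: mean around 170, variance around 0.2 after 2 samples.
    double mean = 170.0;
    double variance = 0.2;
    double beta = 1.0;  // illustrative exploration weight
    std::printf("LCB = %f\n", lower_confidence_bound(mean, std::sqrt(variance), beta));
    return 0;
}
```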
While reading gaussian_process.cpp, I found that mSigma largely determines the value of the covariance. mSigma is set and read through setHyperParameters and getHyperParameters in kernelRegressor.hpp, and it is updated through kOptimizer in posterior_empirical.cpp. The initial 2-dimensional points, mean, and variance are combined into a 4-dimensional vector that is passed to kOptimizer. However, I suspect the mean and variance are not calculated correctly, because they seem to be treated the same way as the coordinates of the 2-dimensional points in kernelRegressor.hpp.
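My reading (which may be wrong) is that mSigma plays the role of the signal variance that scales the whole predictive variance. Here is a self-contained sketch of that textbook formula with two training points; it is an assumed form for illustration, not code copied from gaussian_process.cpp:

```cpp
#include <cmath>
#include <cstdio>

// Assumed form: with a squared-exponential kernel k and signal variance sigma2,
// the predictive variance at a query x* is
//   var(x*) = sigma2 * ( k(x*,x*) - k_*^T K^{-1} k_* )
// so sigma2 sets the overall scale of the uncertainty. If sigma2 is estimated
// near 1 while the function values are around 170, the reported variance will
// look tiny next to the mean.
double kernel(double a, double b, double lengthscale) {
    double d = a - b;
    return std::exp(-0.5 * d * d / (lengthscale * lengthscale));
}

int main() {
    // Two 1-D training inputs (illustrative values only).
    double x1 = 0.0, x2 = 1.0;
    double sigma2 = 1.0;        // signal variance (the role I assume mSigma has)
    double lengthscale = 1.0;
    double jitter = 1e-8;

    // 2x2 kernel matrix K and its closed-form inverse.
    double k11 = kernel(x1, x1, lengthscale) + jitter;
    double k12 = kernel(x1, x2, lengthscale);
    double k22 = kernel(x2, x2, lengthscale) + jitter;
    double det = k11 * k22 - k12 * k12;
    double i11 = k22 / det, i12 = -k12 / det, i22 = k11 / det;

    // Predictive variance at a query point far from the data.
    double xq = 5.0;
    double kq1 = kernel(xq, x1, lengthscale);
    double kq2 = kernel(xq, x2, lengthscale);
    double quad = kq1 * (i11 * kq1 + i12 * kq2) + kq2 * (i12 * kq1 + i22 * kq2);
    double var = sigma2 * (kernel(xq, xq, lengthscale) - quad);

    std::printf("predictive variance = %f (scaled by sigma2 = %f)\n", var, sigma2);
    return 0;
}
```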
Could you help check the calculation of the variance? I am interested in studying this further.
Thanks,
Xianan
The variance is computed correctly. For certain models (not all of them), the variance is estimated as a hyperparameter of the GP, like the kernel hyperparameters.
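As a rough illustration of what "estimated as a hyperparameter" can mean, here is a generic textbook sketch (not a transcription of posterior_empirical.cpp): for a zero-mean GP whose covariance is sigma2 times a unit-variance correlation matrix, the maximum-likelihood estimate of sigma2 has a closed form, and its size depends entirely on the residuals left over after the mean function:

```cpp
#include <cmath>
#include <cstdio>

// Generic sketch: for y ~ N(0, sigma2 * R), with R the correlation matrix from a
// unit-variance kernel, the maximum-likelihood estimate of the signal variance is
//   sigma2_hat = y^T R^{-1} y / n .
// With raw observations around 170 this estimate is large; if a learned mean
// function absorbs most of y, the residual-based estimate becomes much smaller,
// which is why the reported variance can look small next to the mean.
double kernel(double a, double b, double lengthscale) {
    double d = a - b;
    return std::exp(-0.5 * d * d / (lengthscale * lengthscale));
}

int main() {
    // Two observations with large values (like the ~170 example above).
    double x1 = 0.0, x2 = 1.0;
    double y1 = 168.0, y2 = 172.0;
    double lengthscale = 1.0, jitter = 1e-8;

    // 2x2 correlation matrix R and its closed-form inverse.
    double r11 = kernel(x1, x1, lengthscale) + jitter;
    double r12 = kernel(x1, x2, lengthscale);
    double r22 = kernel(x2, x2, lengthscale) + jitter;
    double det = r11 * r22 - r12 * r12;
    double i11 = r22 / det, i12 = -r12 / det, i22 = r11 / det;

    // sigma2_hat = y^T R^{-1} y / n
    double quad = y1 * (i11 * y1 + i12 * y2) + y2 * (i12 * y1 + i22 * y2);
    double sigma2_hat = quad / 2.0;

    std::printf("ML estimate of signal variance: %f\n", sigma2_hat);
    return 0;
}
```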