Hi,
I have two questions about hyperparameter tuning of KRG:

1. What is the difference between `reduced_likelihood_function` in `krg_based.py` and the basic MLE formula $$\ln L\left( \boldsymbol{\theta} \right)=-\frac{n}{2}\ln \left( \sigma^{2}\right)-\frac{1}{2}\ln \left( \left| \boldsymbol{R} \right|\right)$$
2. When I use COBYLA to minimize `reduced_likelihood_function`, why does the convergence history look so strange? (screenshot of the convergence history attached)
I'm looking forward to receiving your answer. Thank you!
Hi. Regarding 1., `reduced_likelihood_function` basically implements the MLE formula as stated in the reference paper (eq. 8), which should give you some insight into the code.
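To make the connection concrete, here is a sketch (my own reading, not the SMT code itself) of the concentrated log-likelihood for ordinary Kriging with a Gaussian kernel: the trend coefficient beta and the process variance sigma^2 are profiled out analytically before evaluating the formula in your question, which is what "reduced" refers to. All names here are illustrative:

```python
import numpy as np

def reduced_log_likelihood(theta, X, y, nugget=1e-10):
    """Concentrated ("reduced") log-likelihood of ordinary Kriging.

    beta and sigma^2 are profiled out analytically, leaving
        ln L(theta) = -(n/2) ln(sigma^2) - (1/2) ln|R|  (constants dropped),
    i.e. the formula quoted in the question.
    """
    n = X.shape[0]
    # Gaussian correlation: R_ij = exp(-theta * ||x_i - x_j||^2),
    # plus a small nugget on the diagonal for numerical stability.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    R = np.exp(-theta * d2) + nugget * np.eye(n)
    # Cholesky gives ln|R| stably: ln|R| = 2 * sum(ln diag(L))
    L = np.linalg.cholesky(R)
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    # GLS estimate of the constant trend (regression matrix F = ones)
    ones = np.ones((n, 1))
    Ri_y = np.linalg.solve(R, y)
    Ri_1 = np.linalg.solve(R, ones)
    beta = (ones.T @ Ri_y).item() / (ones.T @ Ri_1).item()
    resid = y - beta
    # Profiled process variance sigma^2 = resid^T R^{-1} resid / n
    sigma2 = (resid.T @ np.linalg.solve(R, resid)).item() / n
    return -0.5 * n * np.log(sigma2) - 0.5 * log_det_R
```

Maximizing this quantity over theta (SMT minimizes its negative) is the hyperparameter tuning step.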