
Some problems about MLE optimization #365

Closed
NWPU-XuChenzhou opened this issue Jun 4, 2022 · 3 comments
@NWPU-XuChenzhou

Hi,
I have two questions about hyperparameter tuning of KRG:

  1. What is the difference between `reduced_likelihood_function` in `krg_based.py` and the basic MLE formula:
    $$\ln L\left( \boldsymbol{\theta} \right)=-\frac{n}{2}\ln \left( \sigma^{2}\right)-\frac{1}{2}\ln \left( \left| \boldsymbol{R} \right|\right)$$
  2. When I use COBYLA to minimize `reduced_likelihood_function`, why does the convergence history look so strange?
    [image: COBYLA convergence history]
    I'm looking forward to your answer. Thank you!
@relf
Member

relf commented Jun 9, 2022

Hi. Regarding 1., `reduced_likelihood_function` basically implements the MLE formula as stated in the reference paper (eq. 8), which should give you some insight into the code.
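For reference, here is a minimal sketch of the concentrated log-likelihood from the question, $\ln L(\boldsymbol{\theta}) = -\frac{n}{2}\ln(\sigma^2) - \frac{1}{2}\ln|\boldsymbol{R}|$, assuming a Gaussian (squared-exponential) correlation kernel and a constant trend; this is not the SMT implementation, just an illustration of the formula (all names are hypothetical):

```python
import numpy as np

def concentrated_log_likelihood(theta, X, y):
    """ln L(theta) = -(n/2) ln(sigma^2) - (1/2) ln|R|, with sigma^2 and the
    constant trend beta replaced by their generalized least-squares estimates."""
    n = X.shape[0]
    # Gaussian correlation matrix R(theta); small nugget for conditioning
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2 * theta, axis=2)
    R = np.exp(-d2) + 1e-10 * np.eye(n)
    L = np.linalg.cholesky(R)
    # GLS estimate of the constant trend beta
    ones = np.ones((n, 1))
    beta = (ones.T @ np.linalg.solve(R, y)) / (ones.T @ np.linalg.solve(R, ones))
    resid = y - ones * beta
    # Plug-in estimate of the process variance sigma^2
    sigma2 = float(resid.T @ np.linalg.solve(R, resid)) / n
    # ln|R| computed stably from the Cholesky factor
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * n * np.log(sigma2) - 0.5 * log_det_R
```

Maximizing this over `theta` (or minimizing its negative) is what the hyperparameter tuning step does.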

@Paul-Saves
Contributor

Paul-Saves commented Jun 9, 2022

Hi. Regarding 2., we are using a multistart option, referred to as `n_start` in the documentation: https://smt.readthedocs.io/en/latest/_src_docs/surrogate_models/krg.html.

The default is `n_start = 10`, so you see 10 peaks, one per restart.
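To illustrate why the history shows one peak per restart, here is a generic multistart sketch around SciPy's COBYLA (not SMT's internal code; the helper name and the way the history is recorded are my own):

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(fun, bounds, n_start=10, seed=0):
    """Run COBYLA from n_start random initial points and concatenate the
    per-evaluation histories. Each restart begins at a fresh random point,
    so the concatenated trace jumps back up: one 'peak' per restart."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    history, best = [], None
    for _ in range(n_start):
        x0 = rng.uniform(lo, hi)
        trace = []
        # Wrap the objective so every evaluation is logged
        res = minimize(lambda x: trace.append(fun(x)) or trace[-1],
                       x0, method="COBYLA")
        history.extend(trace)
        if best is None or res.fun < best.fun:
            best = res
    return best, history
```

Plotting `history` for a toy objective reproduces the sawtooth shape in your figure: the value drops within each run, then spikes when the next restart begins.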

@NWPU-XuChenzhou
Author

Okay, I've got it. Thank you very much!

@relf relf added the question label Jun 9, 2022
@relf relf closed this as completed Jun 9, 2022
3 participants