Different results across runs #337
Comments
Analysis: the problem is in the initialization of NumPy random number generators. TODO: 1. the APIs should be updated to the latest (details to be provided); 2. a seed should be passed in for testing purposes to achieve consistency. NB: do NOT close this issue, I am going to work on it soon. |
I have the same problem as well. Even though I used the same seed, the results were different. |
@Seal-o-O that's weird; if the seed is the same, the output should be the same too. We rely on this fact extensively in the functional test suite. How are you fixing the seed? |
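For reference, the property the test suite relies on is simply that a fixed global seed makes legacy `np.random` draws deterministic, provided the seed is set before any random numbers are consumed. A minimal sketch:

```python
import numpy as np

# Setting NumPy's global seed makes all subsequent legacy
# np.random.* calls reproducible. GPy/GPyOpt (at the time of
# this issue) draw from this shared global state, so the seed
# must be set BEFORE the model and optimizer are constructed.
np.random.seed(42)
a = np.random.normal(size=3)

np.random.seed(42)
b = np.random.normal(size=3)

print(np.array_equal(a, b))  # same seed -> identical draws
```

If any library call consumes random state between the seeding and the run (or uses its own generator), the runs will diverge even with the same seed, which is the scenario being debugged in this thread.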
@apaleyes Thanks for taking the time to answer me.
As you can see, the input is 16-dimensional, and I don't know why it always prefers to explore the bounds of the input (e.g. [6.27999, 0, 0, 6.27999, 0, 0, 0, 0, 6.27999, 0, 0, 6.27999, 0, 2.20965747, 0, 6.27999]). I suppose it's because of L-BFGS: the optimizer thinks it has found the optimum (when it actually hasn't), so it converges and keeps searching along the limits of the bounds. |
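The boundary-seeking behaviour described above is easy to reproduce in isolation: whenever the surface being optimized is monotone (or nearly flat) along some direction, L-BFGS-B legitimately converges onto the box bounds. A minimal sketch with SciPy; the linear objective here is a stand-in for a degenerate acquisition surface, not GPyOpt's actual acquisition function:

```python
import numpy as np
from scipy.optimize import minimize

# A monotone objective: its minimum over a box always sits on
# the boundary, so L-BFGS-B will return a corner of the bounds,
# just like the boundary points reported in this issue.
res = minimize(
    lambda x: x.sum(),            # stand-in for a flat/monotone acquisition
    x0=np.full(4, 3.0),           # start in the interior
    method="L-BFGS-B",
    bounds=[(0.0, 6.28)] * 4,
)
print(res.x)  # all coordinates driven to the lower bound, 0.0
```

So repeated boundary proposals are consistent with the surrogate model (and hence the acquisition) having gone nearly flat, rather than with a bug in L-BFGS itself.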
The code looks reasonable. But an important question is, where is |
I imported it at the front by using |
I see. Is there any stochasticity at all in btw, |
No,
|
venv/lib/python3.7/site-packages/GPy/core/__init__.py
`np.random.normal` on line 30 should be updated to `numpy.random.default_rng(seed=SEED).normal`, with SEED passed in (if SEED=None, behaviour stays purely random; otherwise it gives reproducible results for testing purposes). The file also says: "Handle priors, this needs to be cleaned up at some point #===========================================================================" — it is time to clean it up :) |
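The suggested change can be sketched side by side: the legacy call draws from NumPy's shared global state, while the proposed replacement uses an explicit `Generator` whose seed can be injected (names like `SEED` are illustrative, not GPy's actual variables):

```python
import numpy as np

SEED = 123  # pass None for non-reproducible behaviour

# Legacy pattern (as in GPy): shared global state, hard to control
# from the outside because any other code can also advance it.
np.random.seed(SEED)
legacy_sample = np.random.normal(size=2)

# Proposed pattern: an explicit, isolated Generator. With seed=None,
# default_rng seeds itself from OS entropy; with a fixed seed it is
# fully reproducible regardless of what other code does.
rng = np.random.default_rng(seed=SEED)
sample = rng.normal(size=2)

# Two generators built with the same seed yield identical streams.
rng2 = np.random.default_rng(seed=SEED)
print(np.array_equal(sample, rng2.normal(size=2)))
```

The isolation is the key point: an explicit `Generator` owned by the model object cannot be perturbed by unrelated calls to `np.random.*`, which is exactly the failure mode suspected in this issue.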
I tried it and it did not seem to work. RMSE: |
Thanks, @pavel-rev, great find! I've just realized that in our testing suite we mock the model, while this code obviously uses GPy, so that dependency is very likely the reason for the inconsistent behavior. But whether it is this exact line, I am still not sure. I mean, wouldn't fixing the numpy random seed affect all generators, including
Despite what the GitHub org says, GPyOpt is no longer maintained by the University of Sheffield. Some of the people involved are Sheffield alumni, though. As for GPy, I don't know who is taking care of it at the moment; @zhenwendai may have a better idea. |
@apaleyes I had this change outside of VCS, and it has since been lost with a reinstall of GPyOpt while working on other issues. I will try to reproduce it by writing a standalone test and will share it here. |
I have the same issue. I fixed the randomness like this:
Even if the evaluated function contains some randomness, shouldn't the seed fix that too? I also tried restarting my Jupyter kernel in between runs to rule out cached values, but that doesn't seem to be the cause, since I still got different results. Here are three results I got by first restarting my Jupyter kernel and then running the optimization:
|
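A common pattern for pinning every global source of randomness a GPyOpt run might touch looks like this (the helper name and placement are ours, not GPyOpt's API; it is a sketch under the assumption that all randomness flows through the stdlib and NumPy global generators):

```python
import random
import numpy as np

def fix_seeds(seed):
    """Pin the global RNGs that GPyOpt/GPy may draw from.

    Hypothetical helper: call it immediately before constructing
    BayesianOptimization AND again before each run_optimization()
    call, since every run consumes (and thus advances) the global
    random state.
    """
    random.seed(seed)
    np.random.seed(seed)

fix_seeds(0)
first = np.random.rand(3)

fix_seeds(0)
second = np.random.rand(3)
print(np.array_equal(first, second))  # identical after re-seeding
```

If results still differ after this, the remaining randomness likely comes from a generator this helper does not reach, e.g. an internal `RandomState` created inside a dependency, which matches the GPy line discussed above.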
I run the optimization and get different results. Sometimes it hits the optimum (I have an independent "slow" exhaustive search, so I know the true optimum); sometimes it does not. Are there known rules of thumb w.r.t. parameters to get more consistent results?