Tips to improve the results #127
Comments
Thanks very much for your tips. We are checking your code in order to help improve it.
I just ran your code and got the following results: 100%|██████████| 300/300 [01:02<00:00, 4.77it/s, best_result_so_far=0.158]
The results of multi-start optimization and FMAES are 0.1583149714373654 and 0.155263, respectively.
Hmm... Strange. Could you send me your numpy, scipy, pypop, and numba versions? @Chang-SHAO
numpy==1.25.2
python==3.11.4
@Chang-SHAO
And I get the same results if I use the NumPy generator, which might be more stable; I got 0.1534:
@FiksII According to NumPy's official recommendation, it is better to use its Generator: https://numpy.org/doc/stable/reference/random/generator.html
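For concreteness, here is a minimal sketch contrasting the recommended Generator API with the legacy global-state API (the seed value is arbitrary):

```python
import numpy as np

# Legacy global-state API (works, but discouraged for reproducible experiments):
np.random.seed(2023)
legacy_sample = np.random.standard_normal(3)

# Recommended: an explicit, self-contained Generator instance.
rng = np.random.default_rng(2023)
sample = rng.standard_normal(3)  # reproducible given the same seed

print(legacy_sample, sample)
```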
@Evolutionary-Intelligence The problem is that I can't reproduce the FMAES result of 0.155:
For PyPop7, we always use this Generator-based way to control random number generation, if possible.
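In PyPop7 this is exposed through the `seed_rng` entry of the options dict; a minimal sketch with a placeholder objective (the dimension and budget are arbitrary, not from this thread):

```python
import numpy as np
from pypop7.optimizers.es.fmaes import FMAES

def sphere(x):  # placeholder objective, not the actual problem
    return float(np.sum(x**2))

ndim = 10
problem = {'fitness_function': sphere,
           'ndim_problem': ndim,
           'lower_boundary': -5.0*np.ones((ndim,)),
           'upper_boundary': 5.0*np.ones((ndim,))}
options = {'max_function_evaluations': 10_000,
           'seed_rng': 2023,  # single entry point for all random number generation
           'sigma': 0.3}
results = FMAES(problem, options).optimize()
print(results['best_so_far_y'])
```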
For BFGS, I obtained 0.148 on my personal computer:
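A minimal multi-start BFGS sketch in the same spirit, using SciPy (the objective, bounds, and restart count here are placeholders, not the actual problem):

```python
import numpy as np
from scipy.optimize import minimize

def objective(x):  # placeholder multimodal objective
    return float(np.sum((x - 0.5)**2) + np.sum(np.sin(5.0*x)))

rng = np.random.default_rng(2023)
ndim, n_starts = 10, 100
best_y, best_x = np.inf, None
for _ in range(n_starts):
    x0 = rng.uniform(-5.0, 5.0, size=ndim)  # fresh random starting point
    res = minimize(objective, x0, method='BFGS')
    if res.fun < best_y:  # keep the best local optimum found so far
        best_y, best_x = res.fun, res.x
print(best_y)
```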
@Evolutionary-Intelligence I am completely confused.
For FMAES, I got 0.152 on my personal computer using the following settings:
I only added the 'stagnation' setting and increased 'sigma' from 0.3 to 1.3 for better exploration.
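A sketch of what those settings could look like in the options dict; only the presence of 'stagnation' and sigma = 1.3 come from the comment above, the remaining values are placeholders:

```python
options = {'max_function_evaluations': 100_000,  # placeholder budget
           'seed_rng': 2023,                     # placeholder seed
           'sigma': 1.3,        # raised from 0.3 for wider exploration
           'stagnation': 1_000}  # hypothetical value; only the setting's name is given above
# results = FMAES(problem, options).optimize()
```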
I am trying to solve an optimization problem. I selected 12 algorithms [VKDCMA, VDCMA, R1ES, RMES, CCMAES2016, FMAES, HCC, LMCMA, LMCMAES, OPOA2015, SAMAES, XNES] and ran them with 100 different seeds, but I realize that they are not doing well. The most effective one is multi-start local optimization (where I repeatedly choose a random initial point for a local-search optimizer, like BFGS). Maybe I'm doing something wrong?
Below I provide a "sandbox" that only runs FMAES. Sorry for numba (but it's 10 times faster than numpy) and tqdm.
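The original sandbox is not reproduced above; as a minimal stand-in, running FMAES on a numba-jitted placeholder objective might look like this (the objective, dimension, and budget are all assumptions):

```python
import numpy as np
from numba import njit
from pypop7.optimizers.es.fmaes import FMAES

@njit  # jitted placeholder; the issue's actual objective is not shown here
def rastrigin(x):
    return 10.0*x.size + np.sum(x**2 - 10.0*np.cos(2.0*np.pi*x))

ndim = 20
problem = {'fitness_function': rastrigin,
           'ndim_problem': ndim,
           'lower_boundary': -5.12*np.ones((ndim,)),
           'upper_boundary': 5.12*np.ones((ndim,))}
options = {'max_function_evaluations': 1_000_000,  # placeholder budget
           'seed_rng': 2023,
           'sigma': 0.3}
results = FMAES(problem, options).optimize()
print(results['best_so_far_y'], results['best_so_far_x'])
```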
Output:
So, the result of multi-start optimization is 0.161725642456162, while FMAES gives 0.172185 after 100 million attempts. Moreover, the last 94 million attempts did not improve the result. What can I do? Changing 'sigma' and the number of individuals did not allow me to get results better than 0.1617.