L-SHADE implementation #390

Merged: 4 commits into NiaOrg:master on Aug 29, 2022

Conversation

AlesGartner (Contributor)

Summary

Added an implementation of the Success-History based Adaptive Differential Evolution algorithm with Linear population size reduction (L-SHADE) in shade.py. Also added a test file (test_shade.py) and an example (run_shade), and updated algorithms/modified/__init__.py.

The algorithm is explained in this article.

I created an Individual subclass, SolutionSHADE, for the algorithm. SolutionSHADE additionally stores the f and cr values of each individual.

The SuccessHistoryAdaptiveDifferentialEvolution class implements the SHADE 1.1 algorithm, which is the base for the L-SHADE algorithm implemented in the LpsrSuccessHistoryAdaptiveDifferentialEvolution class.

The algorithm uses the current-to-pbest/1 mutation strategy with binomial crossover, which is implemented in the cross_curr2pbest1 function. Trial vectors created by this function are repaired with the parent_medium function.
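
For readers unfamiliar with the operator, here is a minimal NumPy sketch of current-to-pbest/1 mutation with binomial crossover. It is illustrative only and does not mirror the exact signature or archive handling of cross_curr2pbest1 in shade.py; the function and parameter names below are assumptions.

```python
import numpy as np

def curr_to_pbest1_bin(pop, fitness, i, f, cr, p, archive, rng):
    """Illustrative current-to-pbest/1/bin step for individual i.

    pop      -- (NP, D) array of current solutions
    fitness  -- (NP,) array of fitness values (minimization)
    f, cr    -- scale factor and crossover rate of individual i
    p        -- pbest fraction (e.g. 0.11)
    archive  -- list of previously replaced parents (may be empty)
    rng      -- numpy.random.Generator
    """
    np_, d = pop.shape
    # pick x_pbest uniformly from the best 100*p % of the population
    n_pbest = max(2, int(round(p * np_)))
    pbest_idx = rng.choice(np.argsort(fitness)[:n_pbest])
    # r1 from the population, r2 from population U archive, all distinct from i
    r1 = rng.integers(np_)
    while r1 == i:
        r1 = rng.integers(np_)
    union = np.vstack([pop] + ([np.asarray(archive)] if archive else []))
    r2 = rng.integers(len(union))
    while r2 == i or r2 == r1:
        r2 = rng.integers(len(union))
    # mutation: v = x_i + F * (x_pbest - x_i) + F * (x_r1 - x_r2)
    v = pop[i] + f * (pop[pbest_idx] - pop[i]) + f * (pop[r1] - union[r2])
    # binomial crossover with a guaranteed inherited component j_rand
    j_rand = rng.integers(d)
    mask = rng.random(d) < cr
    mask[j_rand] = True
    return np.where(mask, v, pop[i])
```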

@firefly-cpp (Contributor)

Thank you, @AlesGartner, for all the hard work! I want to ask @zStupan for an initial review of this implementation.

zStupan (Contributor) previously approved these changes on Aug 9, 2022 and left a comment:

Looks good! We can merge if the tests pass and the algorithms produce similar results to the ones in the paper.

Commit: Re-styled shade.py and test_shade.py and updated util/factory.py and Algorithms.md
zStupan previously approved these changes on Aug 10, 2022
@firefly-cpp (Contributor)

@GregaVrbancic, can you please check the following issue regarding the flake settings?

from flake8.options.config import ConfigFileFinder
ImportError: cannot import name 'ConfigFileFinder' from 'flake8.options.config' (/home/runner/work/NiaPy/NiaPy/.venv/lib/python3.8/site-packages/flake8/options/config.py)

@AlesGartner, are the results achieved by your implementation similar to those provided by the original implementation? Can you please also compare your results with the original publication?

Commit: Changed where population sort occurs and changed x_pbest generation in cross_curr2pbest1 + misc changes
@AlesGartner (Contributor, Author)

> @GregaVrbancic, can you please check the following issue regarding the flake settings?
>
> from flake8.options.config import ConfigFileFinder ImportError: cannot import name 'ConfigFileFinder' from 'flake8.options.config' (/home/runner/work/NiaPy/NiaPy/.venv/lib/python3.8/site-packages/flake8/options/config.py)
>
> @AlesGartner, are the results achieved by your implementation similar to those provided by the original implementation? Can you please also compare your results with the original publication?

I did some tests on the CEC2014 functions using the NiaPy-examples repo.

I ran my L-SHADE implementation with the default parameters (which I assume run_cec.py uses) of population_size=180, extern_arc_rate=2.6, pbest_factor=0.11 and hist_mem_size=6, which I believe were used in the publication (a sketch of such a setup is shown after the tables below).

I did 51 runs on test functions 2, 5, 14, 19 and 27 with dimensionality D=10 and got the following results (the numbers shown are the error values between the best fitness found in each run and the true optimal value):

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 2 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 | 0.00e+00 |
| 5 | 2.34e-01 | 2.0e+01 | 2.0e+01 | 1.35e+01 | 8.83e+00 |
| 14 | 1.48e-02 | 1.48e-01 | 5.05e-02 | 5.39e-02 | 2.02e-02 |
| 19 | 5.30e-02 | 1.04e+00 | 2.13e-01 | 2.30e-01 | 1.39e-01 |
| 27 | 1.06e+00 | 4.00e+02 | 1.73e+00 | 1.01e+02 | 1.49e+02 |

Results in the publication:

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 2 | 0.0e+00 | 0.0e+00 | 0.0e+00 | 0.0e+00 | 0.0e+00 |
| 5 | 1.5e-01 | 2.0e+01 | 2.0e+01 | 1.4e+01 | 8.8e+00 |
| 14 | 4.5e-02 | 1.6e-01 | 7.6e-02 | 8.1e-02 | 2.6e-02 |
| 19 | 1.3e-02 | 3.8e-01 | 6.2e-02 | 7.7e-02 | 6.4e-02 |
| 27 | 8.5e-01 | 4.0e+02 | 1.5e+00 | 5.8e+01 | 1.3e+02 |
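
For reference, a hedged sketch of how such a run could be set up with the merged classes. The import path, the 'sphere' placeholder problem and the evaluation budget are assumptions on my part; the actual CEC2014 experiments were run via run_cec.py in the NiaPy-examples repo, and only the parameter names and values come from the description above.

```python
from niapy.task import Task
from niapy.algorithms.modified import LpsrSuccessHistoryAdaptiveDifferentialEvolution

# Parameters quoted above for D = 10 (population_size = 18 * D).
algorithm = LpsrSuccessHistoryAdaptiveDifferentialEvolution(
    population_size=180,
    extern_arc_rate=2.6,
    pbest_factor=0.11,
    hist_mem_size=6,
)

errors = []
for run in range(51):
    # Placeholder problem and budget; the tables use the CEC2014 suite instead.
    task = Task(problem='sphere', dimension=10, max_evals=100_000)
    _, best_fitness = algorithm.run(task)
    errors.append(best_fitness - 0.0)  # error w.r.t. the known optimum (0 for sphere)
```

Best/worst/median/mean/std statistics over such errors are what the rows of the tables report.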

@firefly-cpp (Contributor)

Thanks @AlesGartner!

It looks consistent. Can you please also copy one or two results tables for the dimensions higher than 10, e.g., 20, 30, or 50?

When #391 is ready, we can merge this PR.

@AlesGartner (Contributor, Author)

I ran tests on D=30 and D=50, but these took a long time to finish, so I only did 10 runs per problem function.

D=30

These tests were run with population_size = 540 (18 * D).

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 5 | 2.01e+01 | 2.02e+01 | 2.01e+01 | 2.01e+01 | 2.29e-02 |
| 9 | 6.92e+00 | 1.12e+01 | 9.10e+00 | 9.03e+00 | 1.23e+00 |
| 14 | 1.56e-01 | 4.32e-01 | 1.95e-01 | 2.16e-01 | 7.65e-02 |
| 19 | 3.19e+00 | 4.88e+00 | 4.36e+00 | 4.21e+00 | 5.56e-01 |

Results in the publication (based on 51 runs):

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 5 | 2.0e+01 | 2.0e+01 | 2.0e+01 | 2.0e+01 | 3.7e-02 |
| 9 | 3.3e+00 | 9.2e+00 | 7.1e+00 | 6.8e+00 | 1.5e+00 |
| 14 | 1.8e-01 | 3.0e-01 | 2.4e-01 | 2.4e-01 | 3.0e-02 |
| 19 | 1.6e+00 | 4.9e+00 | 3.9e+00 | 3.7e+00 | 6.8e-01 |

D=50

These tests were run with population_size = 900 (18 * D).

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 5 | 2.02e+01 | 2.03e+01 | 2.03e+01 | 2.03e+01 | 2.19e-02 |
| 9 | 1.06e+01 | 1.78e+01 | 1.37e+01 | 1.37e+01 | 2.42e+00 |
| 14 | 2.02e-01 | 6.54e-01 | 2.50e-01 | 3.02e-01 | 1.34e-01 |
| 19 | 6.73e+00 | 1.23e+01 | 7.69e+00 | 8.90e+00 | 2.17e+00 |

Results in the publication (based on 51 runs):

| F. num. | Best | Worst | Median | Mean | Std. |
| --- | --- | --- | --- | --- | --- |
| 5 | 2.0e+01 | 2.0e+01 | 2.0e+01 | 2.0e+01 | 4.6e-02 |
| 9 | 5.4e+00 | 1.5e+01 | 1.1e+01 | 1.1e+01 | 2.1e+00 |
| 14 | 2.4e-01 | 3.5e-01 | 2.9e-01 | 3.0e-01 | 2.5e-02 |
| 19 | 5.4e+00 | 1.2e+01 | 7.9e+00 | 8.3e+00 | 1.8e+00 |
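
As an aside on the 18 * D initial population sizes above, the linear population size reduction that gives L-SHADE its name follows a simple schedule. The sketch below is a generic restatement of the formula from the paper, not the code merged in this PR; the minimum size of 4 and the evaluation budget are illustrative.

```python
def lpsr_population_size(init_size, min_size, nfes, max_nfes):
    """Linear population size reduction: interpolate the population size
    from init_size down to min_size as function evaluations are spent."""
    return round(((min_size - init_size) / max_nfes) * nfes + init_size)

# e.g. D = 50: start at 18 * D = 900 and shrink towards 4 individuals
budget = 10_000 * 50  # illustrative CEC-style budget of 10000 * D evaluations
for used in (0, budget // 4, budget // 2, 3 * budget // 4, budget):
    print(used, lpsr_population_size(900, 4, used, budget))
```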

Commit: Fixed a mistake in the evolution function + small changes
@firefly-cpp merged commit b0cad88 into NiaOrg:master on Aug 29, 2022
3 participants