
Add scalar benchmark functions #271

Closed · wants to merge 45 commits

Conversation

@luward commented Jan 16, 2022

We introduce a benchmark set for estimagic consisting of 283 problems based on 78 functions. The functions are drawn from a collection of optimization test functions by Axel Thevenot. We verified the function mathematics as well as the minima against Simon Fraser University's online collection of optimization test problems and the 2013 survey "A Literature Survey of Benchmark Functions for Global Optimization Problems" by Jamil & Yang. We wrote parameterized unit tests, similar to those for the "more_wild" problem set, to cover the newly added functions and problems. A detailed description of our work is contained in this repository.
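To illustrate the structure described above, here is a minimal sketch of one catalogued test function with a verified minimum and the kind of parameterized check the unit tests perform. The dictionary layout, the key names, and the `SCALAR_PROBLEMS` catalogue are illustrative assumptions modeled on the "more_wild" style, not the actual contents of `scalar_functions.py`.

```python
import numpy as np

def ackley(x):
    """Ackley function; known global minimum f(0, ..., 0) = 0."""
    x = np.asarray(x, dtype=float)
    n = x.size
    term1 = -20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
    term2 = -np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n)
    return term1 + term2 + 20.0 + np.e

# Hypothetical problem catalogue in the spirit of the "more_wild" set;
# keys and layout are assumptions, not estimagic's actual API.
SCALAR_PROBLEMS = {
    "ackley_good_start": {
        "criterion": ackley,
        "start_x": np.full(10, 3.0),
        "solution_x": np.zeros(10),
        "solution_criterion": 0.0,
    },
}

# Mirror of the parameterized unit tests: evaluate each function at its
# catalogued optimizer and compare against the catalogued optimal value.
for name, spec in SCALAR_PROBLEMS.items():
    f_star = spec["criterion"](spec["solution_x"])
    assert np.isclose(f_star, spec["solution_criterion"], atol=1e-12), name
```

In the actual test suite this loop would be expressed with `pytest.mark.parametrize` over the problem names, so each problem reports as its own test case.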

@codecov bot commented Jan 16, 2022

Codecov Report

Merging #271 (10e8bba) into main (fc575ea) will decrease coverage by 0.57%.
The diff coverage is 95.23%.

@@            Coverage Diff             @@
##             main     #271      +/-   ##
==========================================
- Coverage   93.61%   93.04%   -0.58%     
==========================================
  Files         191      193       +2     
  Lines       15357    15705     +348     
==========================================
+ Hits        14376    14612     +236     
- Misses        981     1093     +112     
Impacted Files Coverage Δ
src/estimagic/benchmarking/more_wild.py 100.00% <ø> (ø)
src/estimagic/benchmarking/scalar_functions.py 100.00% <ø> (ø)
...c/estimagic/benchmarking/get_benchmark_problems.py 94.83% <88.23%> (+0.59%) ⬆️
tests/benchmarking/test_get_benchmark_problems.py 100.00% <100.00%> (ø)
tests/benchmarking/test_scalar_functions.py 100.00% <100.00%> (ø)
tests/optimization/test_tao_optimizers.py 22.68% <0.00%> (-69.08%) ⬇️
src/estimagic/optimization/tao_optimizers.py 25.00% <0.00%> (-45.66%) ⬇️
...ptimization/subsolvers/bounded_newton_quadratic.py 85.08% <0.00%> (-0.88%) ⬇️
...magic/optimization/subsolvers/_trsbox_quadratic.py 84.08% <0.00%> (-0.82%) ⬇️
... and 1 more

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

luward and others added 27 commits February 14, 2022 18:29
…and it's tests for the new set problem set.
…oblem set. Fixed quartic and xinsheyang problems.
…eEconomics/estimagic into add-scalar-benchmark-functions
@luward (Author) commented Mar 9, 2022

We introduce a benchmark set for estimagic consisting of 283 problems based on 78 functions. The functions are drawn from a collection of optimization test functions by Axel Thevenot. We verified the function mathematics as well as the minima against Simon Fraser University's online collection of optimization test problems and the 2013 survey "A Literature Survey of Benchmark Functions for Global Optimization Problems" by Jamil & Yang. We wrote parameterized unit tests, similar to those for the "more_wild" problem set, to cover the newly added functions and problems. A detailed description of our work is contained in this pull request.

@janosg (Member) commented Jul 10, 2024

I will close this for now because it is incompatible with some of the planned changes in benchmarking (see #495). I kept the files so we can easily re-add this functionality later.

@janosg janosg closed this Jul 10, 2024
@janosg janosg deleted the add-scalar-benchmark-functions branch August 10, 2024 08:38