Added the Fit benchmarking class (#1212)
* fixed the table docs

* updated the fitbenchmarking API in main

* added the Fit Class

* updated pylint config

* updated the tests for minimizers

* Updated the tests for main

* temp commit

* added return values to methods in fit class

* added more tests for the Fit class

* updated formatting

* Updated tests

* deleted tests

* Updated docs

* updated tests

* removed the old functions

* Updated the patching for minimizer tests

* updating patching for hessian tests

* renamed file

* updated perform_fit method

* added more tests for benchmarking method to get full coverage

* refactored benchmarking and starting value tests

* refactored tests

* deleted starting value test file

* Updated the Software tests

* Updated loop_over_minimizers tests

* updated the jacobian tests

* added tests for the loop over hessian class

* renamed test file

* updated gitignore file

* added some more tests for the __perform_fit method

* Apply suggestions from code review

Co-authored-by: Letizia97 <letizia.protopapa@stfc.ac.uk>

* Added more tests

Co-authored-by: Letizia97 <letizia.protopapa@stfc.ac.uk>

* modified tests to work with os.path.join

* Updated files with linting fixes

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

* temp commit

---------

Co-authored-by: Letizia97 <letizia.protopapa@stfc.ac.uk>
RabiyaF and Letizia97 committed Apr 9, 2024
1 parent e28d26d · commit 10d5107
Showing 16 changed files with 3,357 additions and 1,955 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -14,3 +14,4 @@ docs/source/contributors/module_index
 .vscode/
 .eggs/
 *.out
+powermetrics_log.txt
12 changes: 6 additions & 6 deletions fitbenchmarking/cli/main.py
@@ -15,7 +15,7 @@
 import fitbenchmarking
 from fitbenchmarking.cli.checkpoint_handler import generate_report
 from fitbenchmarking.cli.exception_handler import exception_handler
-from fitbenchmarking.core.fitting_benchmarking import benchmark
+from fitbenchmarking.core.fitting_benchmarking import Fit
 from fitbenchmarking.core.results_output import (create_index_page,
                                                  open_browser, save_results)
 from fitbenchmarking.utils.checkpoint import Checkpoint
@@ -293,11 +293,11 @@ def run(problem_sets, additional_options=None, options_file='', debug=False):
 
         LOGGER.info('Running the benchmarking on the %s problem set',
                     label)
-        results, failed_problems, unselected_minimizers = \
-            benchmark(options=options,
-                      data_dir=data_dir,
-                      label=label,
-                      checkpointer=cp)
+        fit = Fit(options=options,
+                  data_dir=data_dir,
+                  label=label,
+                  checkpointer=cp)
+        results, failed_problems, unselected_minimizers = fit.benchmark()
 
         # If a result has error flag 4 then the result contains dummy values,
         # if this is the case for all results then output should not be
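
The core change in cli/main.py is the move from the module-level benchmark() function to the new Fit class, whose benchmark() method returns the same (results, failed_problems, unselected_minimizers) triple. A minimal sketch of the new call pattern follows; the Options and Checkpoint setup is assumed for illustration and is not taken from this diff.

    # Sketch of the class-based API introduced by this commit.
    # The Options/Checkpoint construction below is illustrative only; in
    # practice run() in cli/main.py builds these from the user's options file.
    from fitbenchmarking.core.fitting_benchmarking import Fit
    from fitbenchmarking.utils.checkpoint import Checkpoint
    from fitbenchmarking.utils.options import Options  # assumed import path

    options = Options()               # assumed default construction
    cp = Checkpoint(options=options)  # assumed constructor signature

    # Constructor arguments and return triple as shown in the diff above.
    fit = Fit(options=options,
              data_dir='examples/benchmark_problems/simple_tests',
              label='simple_tests',
              checkpointer=cp)
    results, failed_problems, unselected_minimizers = fit.benchmark()
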
4 changes: 2 additions & 2 deletions fitbenchmarking/cli/tests/test_main.py
@@ -57,7 +57,7 @@ class TestMain(TestCase):
     Tests for main.py
     """
 
-    @patch('fitbenchmarking.cli.main.benchmark')
+    @patch('fitbenchmarking.cli.main.Fit.benchmark')
     def test_check_no_results_produced(self, benchmark):
         """
         Checks that exception is raised if no results are produced
@@ -68,7 +68,7 @@ def test_check_no_results_produced(self, benchmark):
             main.run(['examples/benchmark_problems/simple_tests'],
                      os.path.dirname(__file__), debug=True)
 
-    @patch('fitbenchmarking.cli.main.benchmark')
+    @patch('fitbenchmarking.cli.main.Fit.benchmark')
     def test_all_dummy_results_produced(self, benchmark):
         """
         Checks that exception is raised if all dummy results
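
Because benchmarking now goes through a method on the Fit class, the tests patch 'fitbenchmarking.cli.main.Fit.benchmark' rather than a module-level function. A short sketch of what that patch target does; the stubbed return value is an assumed illustration of the (results, failed_problems, unselected_minimizers) triple, not something asserted by this commit.

    from unittest.mock import patch

    # Patching the method on the class means every Fit instance created inside
    # fitbenchmarking.cli.main.run() sees the stub, mirroring the updated
    # @patch decorators in the hunks above.
    with patch('fitbenchmarking.cli.main.Fit.benchmark') as benchmark:
        benchmark.return_value = ([], [], {})  # assumed illustrative shape
        # A call to main.run(...) here would receive the stubbed triple
        # from fit.benchmark().
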
