205 improve documentation on entry points #225

Merged
merged 13 commits into master from 205_improve_documentation_on_entry_points on Oct 10, 2019

Conversation

AndrewLister-STFC
Contributor

Description of Work

Fixes #205
Refactored do_benchmarking and edited the docstrings to explain the code more clearly.

Also contains commits for:

  • Indentation fixes
  • Fix for the broken Mantid example
  • Fix for the expert example (I removed some pass-through functions in a previous commit that were used by the expert example. These have been put back and marked as API functions.)

Testing Instructions

  1. Check that everything works as before.
  2. Read the docstrings and check that they make sense.

@AndrewLister-STFC
Contributor Author

Marked as in progress since @Anders-Markvardsen also mentions fitbenchmark_one_problem.py in the issue, but that file looks up to date?

@Anders-Markvardsen
Contributor

Note

In [12]: run example_runScripts.py /../fitbenchmarking/fitbenchmarking_default_options.json

Running the benchmarking on the NIST_low_difficulty problem set


Producing output for the NIST_low_difficulty problem set

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
C:\Midlertiddig\fitbenchmarking\example_scripts\example_runScripts.py in <module>()
    103     printTables(software_options, results,
    104                 group_name=label, use_errors=use_errors,
--> 105                 color_scale=color_scale, results_dir=results_dir)
    106
    107     print('\nCompleted benchmarking for {} problem set\n'.format(sub_dir))

C:\Midlertiddig\fitbenchmarking\fitbenchmarking\results_output.py in save_results_tables(software_options, results_per_test, group_name, use_errors, color_scale, results_dir)
     56     tables_dir = create_dirs.restables_dir(results_dir, group_name)
     57     linked_problems = \
---> 58         visual_pages.create_linked_probs(results_per_test, group_name, results_dir)
     59
     60     acc_rankings, runtimes, _, _ = generate_tables(results_per_test, minimizers)

C:\Midlertiddig\fitbenchmarking\fitbenchmarking\resproc\visual_pages.py in create_linked_probs(results_per_test, group_name, results_dir)
     35     linked_problems = []
     36     for test_idx, prob_results in enumerate(results_per_test):
---> 37         name = results_per_test[test_idx][0].problem.name
     38         if name == prev_name:
     39             count += 1

AttributeError: 'list' object has no attribute 'problem'
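
For context, the exception above suggests an extra level of nesting introduced by the 'block' grouping: results_per_test[test_idx][0] then yields a list rather than a single result object. A hypothetical sketch of the mismatch, with stand-in classes rather than the actual FitBenchmarking ones:

# Hypothetical sketch of the nesting mismatch behind the AttributeError above.
# Problem and FittingResult are stand-ins, not the real FitBenchmarking classes.
class Problem:
    def __init__(self, name):
        self.name = name

class FittingResult:
    def __init__(self, name):
        self.problem = Problem(name)

# Flat structure: each entry of results_per_test is a list of result objects.
results_per_test = [[FittingResult("Misra1a")]]
print(results_per_test[0][0].problem.name)  # prints 'Misra1a'

# With an extra 'block' level of nesting, [0] is itself a list, not a result.
blocked_results = [[[FittingResult("Misra1a")]]]
try:
    print(blocked_results[0][0].problem.name)
except AttributeError as err:
    print(err)  # 'list' object has no attribute 'problem'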

@Anders-Markvardsen
Contributor

Thanks. As discussed, please remove the block feature for now as it is not relevant.

@AndrewLister-STFC
Contributor Author

AndrewLister-STFC commented Sep 13, 2019

For the first issue, I would expect it won't matter once the blocks are removed anyway (that will change the code where the error is raised), but I'm not sure what could be causing it. It works on my machine and on Michael's when we've tried it.

Thanks. As discussed, please remove the block feature for now as it is not relevant.

I have just double-checked the blocks on my laptop and they are also not working there, so I'll remove them on Monday.

@Anders-Markvardsen Anders-Markvardsen self-assigned this Sep 20, 2019
@Anders-Markvardsen
Contributor

Running example_runScripts.py remains broken from anywhere, including from the location a user would most likely run it:

In [11]: run example_runScripts.py

Running the benchmarking on the NIST_low_difficulty problem set

---------------------------------------------------------------------------
IOError                                   Traceback (most recent call last)
C:\Midlertiddig\fitbenchmarking\example_scripts\example_runScripts.py in <module>()
     96                            data_dir=data_dir,
     97                            use_errors=use_errors,
---> 98                            results_dir=results_dir)
     99
    100     print('\nProducing output for the {} problem set\n'.format(label))

C:\Midlertiddig\fitbenchmarking\fitbenchmarking\fitting_benchmarking.pyc in fitbenchmark_group(group_name, software_options, data_dir, use_errors, results_dir)
     36     logger.info("Loading minimizers from {0}".format(
     37         software_options['software']))
---> 38     minimizers, software = misc.get_minimizers(software_options)
     39
     40     # create list with blocks of paths to all problem definitions in data_dir

fitbenchmarking\utils\misc.py in get_minimizers(software_options)

fitbenchmarking\utils\options.pyc in get_option(options_file, option)

IOError: [Errno 2] No such file or directory: 'fitbenchmarking/fitbenchmarking_default_options.json'

This change degrades the user experience of FitBenchmarking.

@Anders-Markvardsen
Contributor

As a side note, when I run python setup.py install from the fitbenchmarking run directory, I get

copying sas\sascalc\pr\fit\expression.py -> build\lib\sas\sascalc\pr\fit
copying sas\sascalc\pr\fit\Loader.py -> build\lib\sas\sascalc\pr\fit
copying sas\sascalc\pr\fit\__init__.py -> build\lib\sas\sascalc\pr\fit
creating build\lib\benchmark_problems\NIST
creating build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Chwirut1.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Chwirut2.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\DanWood.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Gauss1.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Gauss2.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Lanczos3.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Misra1a.dat -> build\lib\benchmark_problems\NIST\low_difficulty
copying benchmark_problems\NIST\low_difficulty\Misra1b.dat -> build\lib\benchmark_problems\NIST\low_difficulty
creating build\lib\benchmark_problems\Neutron
error: can't copy 'benchmark_problems\Neutron\data_files': doesn't exist or not a regular file

which may not be related to this issue, but appears to be a bug introduced into the install. Could you investigate where this bug originates?
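
One possible origin, offered only as an assumption: if setup.py collects the benchmark_problems data with a glob whose pattern also matches directories, distutils will receive a directory such as benchmark_problems\Neutron\data_files and fail with exactly this 'not a regular file' error. A hedged sketch of a filter that would avoid that:

# Hypothetical sketch: keep only regular files when building the data_files list,
# so a directory like benchmark_problems/Neutron/data_files is never handed to distutils.
import os
from glob import glob

def regular_files(pattern):
    """Return only the glob matches that are regular files."""
    return [path for path in glob(pattern) if os.path.isfile(path)]

# Illustrative usage inside setup():
# setup(...,
#       data_files=[('benchmark_problems/Neutron',
#                    regular_files('benchmark_problems/Neutron/*'))])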

@tyronerees
Member

@Anders-Markvardsen -- @AndrewLister-STFC is looking at this in #240. The fixes in #237 don't seem to work across all platforms, but as Travis doesn't run the example script at the moment this wasn't caught by the CI.

@AndrewLister-STFC
Contributor Author

AndrewLister-STFC commented Sep 23, 2019

Running example_runScripts.py remains broken from anywhere, including from the location a user would most likely run it:

In [11]: run example_runScripts.py
...
IOError: [Errno 2] No such file or directory: 'fitbenchmarking/fitbenchmarking_default_options.json'

This change degrades the user experience of FitBenchmarking.

The issue there is that you haven't specified an options file, and the default points to 'fitbenchmarking/fitbenchmarking_default_options.json'. I can change this to '../fitbenchmarking/fitbenchmarking_default_options.json' so that it works from the example scripts directory instead. Originally it used the location of the file and a relative path from there, which could also be problematic if things are installed in different places.

Perhaps the best way to solve this is to have the default point to the current directory (e.g. './fitbenchmarking_default_options.json') and move the options file there. Either way, this was a change introduced in a previous PR which was reviewed by @wathen, so changing it should be a separate issue.
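
A hedged sketch of the current-directory default suggested above; resolve_options_file and the surrounding names are hypothetical, not the actual FitBenchmarking API:

# Illustrative sketch only: default to an options file in the current working
# directory, and only fall back to it when no path is supplied by the caller.
import os

_DEFAULT_OPTIONS = './fitbenchmarking_default_options.json'

def resolve_options_file(options_file=None):
    """Return the user-supplied options file if given, otherwise the current-directory default."""
    if options_file:
        return options_file
    if not os.path.isfile(_DEFAULT_OPTIONS):
        raise IOError("No options file given and '{}' not found in the current directory"
                      .format(_DEFAULT_OPTIONS))
    return _DEFAULT_OPTIONS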

To work around not being able to run it, you can pass the options file in as an argument:
python example_runScripts.py /../fitbenchmarking_default_options.json

AndrewLister-STFC and others added 2 commits October 10, 2019 09:41
Co-Authored-By: Anders Markvardsen <anders.markvardsen@stfc.ac.uk>
@Anders-Markvardsen Anders-Markvardsen left a comment
Contributor


Manual testing of the example scripts was also done. The reduction in usability of running these is being worked on separately.

Documentation improved and, as a bonus, all references to the out-of-date 'block' code have been removed.

@Anders-Markvardsen Anders-Markvardsen merged commit ed45ec1 into master Oct 10, 2019
@Anders-Markvardsen Anders-Markvardsen deleted the 205_improve_documentation_on_entry_points branch October 10, 2019 09:24