Merge 4d41c51 into fc3fda7
AtomAnu committed Aug 23, 2019
2 parents fc3fda7 + 4d41c51, commit 488fa1e
Showing 318 changed files with 42,692 additions and 63 deletions.
14 changes: 12 additions & 2 deletions README.md
@@ -28,10 +28,10 @@ For help of how to use the command line/terminal, click the hyperlink correspond

The above step is done to ensure that the compatible version of the docutils package (version 0.12) is installed.

- 7. Finally, in this terminal, run `example_scripts/example_runScript.py`, located in the fitbenchmarking folder. This example script fit benchmarks Mantid using all the available minimizers. The resulting tables can be found in `example_scripts/results`.
+ 7. Finally, in this terminal, run `example_scripts/example_runScript_mantid.py`, located in the fitbenchmarking folder. This example script fit-benchmarks Mantid using all the available minimizers. The resulting tables can be found in `example_scripts/results`.

## FitBenchmarking Scipy
- The `example_runScripts.py` file can be changed such that it benchmarks minimizers supported by scipy instead of mantid (details provided in the file itself).
+ The `example_runScripts.py` file is designed to benchmark minimizers supported by software/libraries that provide a straightforward cross-platform Python install; as of now this means SciPy (more details are provided in the file itself).

For this to work, SciPy version 0.17 or higher is needed (this includes the required [curve_fit](https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.curve_fit.html) support). **The Linux distributions we have tested against so far have all included scipy 0.17+ (0.17 is from Feb 2016).**
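For illustration, a minimal `curve_fit` example with a made-up exponential model and synthetic data (nothing below comes from FitBenchmarking itself):

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative model; only the curve_fit call itself is the point here.
def model(x, a, b):
    return a * np.exp(-b * x)

xdata = np.linspace(0, 4, 50)
ydata = model(xdata, 2.5, 1.3) + 0.05 * np.random.normal(size=xdata.size)

# Fit the model to the noisy data from an initial guess p0.
popt, pcov = curve_fit(model, xdata, ydata, p0=[1.0, 1.0])
print("fitted parameters:", popt)
```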

@@ -54,6 +54,16 @@ Mantid on Windows is shipped with Python. The above steps can also be done from
terminal, in which case please ensure that you are upgrading against Python
installed with Mantid, which by default is located in `C:\MantidInstall\bin`.

+ ## FitBenchmarking SasView
+ The `example_runScripts_SasView.py` file is designed to benchmark minimizers supported by SasView (Bumps).
+
+ In order to do so, Bumps, sasmodels, lxml and sascalc need to be installed. Bumps, sasmodels and lxml can be installed via `pip`; as of this writing, however, sascalc is not an independent package and cannot be installed via `pip`, so it is bundled with FitBenchmarking under the folder `fitbenchmarking/sas`.
+
+ To install Bumps, sasmodels and lxml, run the following commands in a console (a quick import check is sketched after the list):
+ 1. `python -m pip install bumps`
+ 2. `python -m pip install sasmodels`
+ 3. `python -m pip install lxml`
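For a quick check that the dependencies are importable before running the example scripts (a standard-library sketch; `sas` here is the bundled package described above):

```python
import importlib

# Check that each SasView-benchmarking dependency can be imported.
for pkg in ("bumps", "sasmodels", "lxml", "sas"):
    try:
        importlib.import_module(pkg)
        print(pkg, "is available")
    except ImportError:
        print(pkg, "is missing; see the installation notes above")
```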

## Description
The tool creates one or more tables comparing the different minimizers available in a fitting software package (e.g. scipy or mantid), based on their accuracy and/or runtimes.
An example of a table is:
File renamed without changes.
File renamed without changes.
22 changes: 22 additions & 0 deletions benchmark_problems/SAS_modelling/1D/data_files/cyl_400_20.txt
@@ -0,0 +1,22 @@
<X> <Y>
0 -1.#IND
0.025 125.852
0.05 53.6662
0.075 26.0733
0.1 11.8935
0.125 4.61714
0.15 1.29983
0.175 0.171347
0.2 0.0417614
0.225 0.172719
0.25 0.247876
0.275 0.20301
0.3 0.104599
0.325 0.0285595
0.35 0.00213344
0.375 0.0137511
0.4 0.0312374
0.425 0.0350328
0.45 0.0243172
0.475 0.00923067
0.5 0.00121297
56 changes: 56 additions & 0 deletions benchmark_problems/SAS_modelling/1D/data_files/cyl_400_40.txt
@@ -0,0 +1,56 @@
<X> <Y>
0 -1.#IND
0.00925926 1246.59
0.0185185 612.143
0.0277778 361.142
0.037037 211.601
0.0462963 122.127
0.0555556 65.2385
0.0648148 30.8914
0.0740741 12.4737
0.0833333 3.51371
0.0925926 0.721835
0.101852 0.583607
0.111111 1.31084
0.12037 1.9432
0.12963 1.94286
0.138889 1.58912
0.148148 0.987076
0.157407 0.456678
0.166667 0.147595
0.175926 0.027441
0.185185 0.0999575
0.194444 0.198717
0.203704 0.277667
0.212963 0.288172
0.222222 0.220056
0.231481 0.139378
0.240741 0.0541106
0.25 0.0140158
0.259259 0.0132187
0.268519 0.0336301
0.277778 0.0672911
0.287037 0.0788983
0.296296 0.0764438
0.305556 0.0555445
0.314815 0.0280548
0.324074 0.0111798
0.333333 0.00156156
0.342593 0.00830883
0.351852 0.0186266
0.361111 0.0275426
0.37037 0.03192
0.37963 0.0255329
0.388889 0.0175216
0.398148 0.0073075
0.407407 0.0016631
0.416667 0.00224153
0.425926 0.0051335
0.435185 0.0112914
0.444444 0.0138209
0.453704 0.0137453
0.462963 0.0106682
0.472222 0.00532472
0.481481 0.00230646
0.490741 0.000335344
0.5 0.00177224
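The data files are plain two-column ASCII with an `<X> <Y>` header; `-1.#IND` is the MSVC spelling of an undefined value (NaN). For illustration (the benchmarks load these files through sasmodels, not this helper), they could be read like so:

```python
import numpy as np

# Illustrative reader for the two-column <X> <Y> files above.
def read_xy(path):
    def to_float(token):
        try:
            return float(token)
        except ValueError:
            return float('nan')  # e.g. the MSVC marker '-1.#IND'
    rows = []
    with open(path) as handle:
        next(handle)  # skip the '<X> <Y>' header line
        for line in handle:
            if not line.strip():
                continue
            x_token, y_token = line.split()
            rows.append((to_float(x_token), to_float(y_token)))
    return np.array(rows)
```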
6 changes: 6 additions & 0 deletions benchmark_problems/SAS_modelling/1D/prob_def_1.txt
@@ -0,0 +1,6 @@
# An example data set for SasView 1D data
name = 'Problem Def 1'
input_file = 'cyl_400_20.txt'
function ='name=cylinder,radius=35.0,length=350.0,background=0.0,scale=1.0,sld=4.0,sld_solvent=1.0'
parameter_ranges = 'radius.range(1,50);length.range(1,500)'
description = ''
6 changes: 6 additions & 0 deletions benchmark_problems/SAS_modelling/1D/prob_def_2.txt
@@ -0,0 +1,6 @@
# An example data set for SasView 1D data
name = 'Problem Def 2'
input_file = 'cyl_400_40.txt'
function ='name=cylinder,radius=35.0,length=350.0,background=0.0,scale=1.0,sld=4.0,sld_solvent=1.0'
parameter_ranges = 'radius.range(1,50);length.range(1,500)'
description = ''
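These problem definition files use a simple `key = 'value'` layout. As an illustration only (this is not FitBenchmarking's actual parser), such a file could be read into a dict like this:

```python
# Parse a SasView 1D problem definition file of the form shown above.
def parse_problem_definition(path):
    entries = {}
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if not line or line.startswith('#'):
                continue  # skip blank lines and comments
            key, _, value = line.partition('=')
            entries[key.strip()] = value.strip().strip("'")
    return entries

# e.g. parse_problem_definition('prob_def_1.txt')['input_file'] == 'cyl_400_20.txt'
```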
87 changes: 87 additions & 0 deletions example_scripts/SasView_example.py
@@ -0,0 +1,87 @@
from sasmodels.core import load_model
from sasmodels.bumps_model import Model, Experiment
from sasmodels.data import load_data, Data1D

from bumps.names import *  # provides FitProblem
from bumps.fitters import fit
from bumps.formatnum import format_uncertainty

import matplotlib.pyplot as plt

import os

# Locate the 1D cylinder data set; the path points at the files added
# under benchmark_problems/SAS_modelling/1D in this commit
current_path = os.path.realpath(__file__)
dir_path = os.path.dirname(current_path)
main_dir = os.path.dirname(dir_path)
oneD_data_dir = os.path.join(main_dir, 'benchmark_problems', 'SAS_modelling',
                             '1D', 'data_files', 'cyl_400_20.txt')

test_data = load_data(oneD_data_dir)
# We set some errors for demonstration
test_data.dy = 0.2 * test_data.y

# The same arrays can also be wrapped in a plain Data1D object
data_1D = Data1D(x=test_data.x, y=test_data.y, dy=test_data.dy)

# Load the cylinder model kernel from sasmodels
kernel = load_model('cylinder')

# Initial parameter values for the cylinder model
pars = dict(radius=35,
            length=350,
            background=0.0,
            scale=1.0,
            sld=4.0,
            sld_solvent=1.0)

model = Model(kernel, **pars)

# SET THE FITTING PARAMETERS
model.radius.range(1, 50)
model.length.range(1, 500)

M = Experiment(data=test_data, model=model)

# Query the initial parameter state, e.g. the radius
param_initial = M.parameters()
radius_initial = param_initial['radius']

problem = FitProblem(M)

print("Initial chisq", problem.chisq_str())
result = fit(problem, method='dream')

print("Final chisq", problem.chisq_str())
for k, v, dv in zip(problem.labels(), result.x, result.dx):
    print(k, ":", format_uncertainty(v, dv))
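As an optional follow-up, bumps' `FitProblem` also exposes a `plot()` method that draws the data and fitted theory into the current matplotlib figure; a minimal sketch:

```python
# Visualise the fitted model against the data.
problem.plot()
plt.show()
```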
3 changes: 2 additions & 1 deletion example_scripts/example_runScripts.py
@@ -51,6 +51,7 @@
benchmark_probs_dir = os.path.join(fitbenchmarking_folder,
                                   'benchmark_problems')


"""
Modify results_dir to specify where the results of the fit should be saved
If left as None, they will be saved in a "results" folder in the working dir
@@ -75,7 +76,7 @@
# Do this, in this example file, by selecting sub-folders in benchmark_probs_dir
# "Muon_data" works for mantid minimizers
# problem_sets = ["Neutron_data", "NIST/average_difficulty"]
# problem_sets = ["CUTEst", "Muon_data", "Neutron_data", "NIST/average_difficulty", "NIST/high_difficulty", "NIST/low_difficulty"]
# problem_sets = ["CUTEst", "Muon", "Neutron", "NIST/average_difficulty", "NIST/high_difficulty", "NIST/low_difficulty"]

problem_sets = ["NIST/low_difficulty"]

126 changes: 126 additions & 0 deletions example_scripts/example_runScripts_SasView.py
@@ -0,0 +1,126 @@


from __future__ import (absolute_import, division, print_function)
import os
import sys

# Avoid hitting the default maximum recursion depth by raising the limit.
# This is useful when benchmarking multiple data sets; otherwise the
# recursion limit is reached and the interpreter throws an error.
sys.setrecursionlimit(10000)

# Insert path to where the scripts are located, relative to
# the example_scripts folder
current_path = os.path.dirname(os.path.realpath(__file__))
fitbenchmarking_folder = os.path.abspath(os.path.join(current_path, os.pardir))
scripts_folder = os.path.join(fitbenchmarking_folder, 'fitbenchmarking')
sys.path.insert(0, scripts_folder)
sys.path.insert(1, fitbenchmarking_folder)

try:
    import bumps
except ImportError:
    print('******************************************\n'
          'Bumps is not yet installed on your computer\n'
          'To install, type the following command:\n'
          'python -m pip install bumps\n'
          '******************************************')
    sys.exit()

try:
    import sasmodels.data
except ImportError:
    print('******************************************\n'
          'sasmodels is not yet installed on your computer\n'
          'To install, type the following command:\n'
          'python -m pip install sasmodels\n'
          '******************************************')
    sys.exit()

try:
    import sas
except ImportError:
    print('******************************************\n'
          'sas is not yet installed on your computer\n'
          'To install, clone a version of SasView from https://github.com/SasView/sasview\n'
          'After that, copy the folder called "sas" inside the sub-folder sasview/src to the fitbenchmarking directory\n'
          '******************************************')
    sys.exit()

from fitting_benchmarking import do_fitting_benchmark as fitBenchmarking
from results_output import save_results_tables as printTables

# SPECIFY THE SOFTWARE/PACKAGE CONTAINING THE MINIMIZERS YOU WANT TO BENCHMARK
# software = 'mantid'
software = 'sasview'
software_options = {'software': software}

# User defined minimizers
custom_minimizers = {"mantid": ["BFGS", "Simplex"],
                     "scipy": ["lm", "trf", "dogbox"],
                     "sasview": ["amoeba"]}
# custom_minimizers = None
# Available SasView (Bumps) minimizer names: "amoeba", "lm", "newton", "de", "pt", "mp"

# SPECIFY THE MINIMIZERS YOU WANT TO BENCHMARK; AT A MINIMUM, FOR THE SOFTWARE SPECIFIED ABOVE
if len(sys.argv) > 1:
    # Read custom minimizer options from file
    software_options['minimizer_options'] = os.path.join(current_path, sys.argv[1])
elif custom_minimizers:
    # Custom minimizer options:
    software_options['minimizer_options'] = custom_minimizers
else:
    # Using default minimizers from
    # fitbenchmarking/fitbenchmarking/minimizers_list_default.json
    software_options['minimizer_options'] = None


# Benchmark problem directories
benchmark_probs_dir = os.path.join(fitbenchmarking_folder,
                                   'benchmark_problems')

"""
Modify results_dir to specify where the results of the fit should be saved
If left as None, they will be saved in a "results" folder in the working dir
If the full path is not given results_dir is created relative to the working dir
"""
results_dir = None

# Whether to use errors in the fitting process
use_errors = True

# Parameters controlling how the final tables are colored
# e.g. lower than 1.1 -> light yellow, higher than 3 -> dark red
# Change these values to suit your needs
color_scale = [(1.1, 'ranking-top-1'),
               (1.33, 'ranking-top-2'),
               (1.75, 'ranking-med-3'),
               (3, 'ranking-low-4'),
               (float('nan'), 'ranking-low-5')]

# ADD WHICH PROBLEM SETS TO TEST AGAINST HERE
# Do this, in this example file, by selecting sub-folders in benchmark_probs_dir
# "Muon_data" works for mantid minimizers
# problem_sets = ["Neutron_data", "NIST/average_difficulty"]
# problem_sets = ["CUTEst", "Muon", "Neutron", "NIST/average_difficulty", "NIST/high_difficulty", "NIST/low_difficulty"]
problem_sets = ["SAS_modelling/1D"]
for sub_dir in problem_sets:
    # Generate the group label/name used for the problem set
    label = sub_dir.replace('/', '_')

    # Problem data directory
    data_dir = os.path.join(benchmark_probs_dir, sub_dir)

    print('\nRunning the benchmarking on the {} problem set\n'.format(label))
    results_per_group, results_dir = fitBenchmarking(group_name=label, software_options=software_options,
                                                     data_dir=data_dir,
                                                     use_errors=use_errors, results_dir=results_dir)

    print('\nProducing output for the {} problem set\n'.format(label))
    for idx, group_results in enumerate(results_per_group):
        # Display the runtime and accuracy results in a table
        printTables(software_options, group_results,
                    group_name=label, use_errors=use_errors,
                    color_scale=color_scale, results_dir=results_dir)

    print('\nCompleted benchmarking for {} problem set\n'.format(sub_dir))
4 changes: 2 additions & 2 deletions example_scripts/example_runScripts_expert.py
@@ -31,7 +31,7 @@
from resproc import visual_pages

# SPECIFY THE SOFTWARE/PACKAGE CONTAINING THE MINIMIZERS YOU WANT TO BENCHMARK
- software = ['mantid', 'scipy']
+ software = ['scipy']
software_options = {'software': software}

# User defined minimizers
@@ -80,7 +80,7 @@
# Do this, in this example file, by selecting sub-folders in benchmark_probs_dir
# "Muon_data" works for mantid minimizers
# problem_sets = ["Neutron_data", "NIST/average_difficulty"]
problem_sets = ["Neutron_data"]
problem_sets = ["CUTEst"]
for sub_dir in problem_sets:
# generate group group_name/name used for problem set
group_name = sub_dir.replace('/', '_')