
219 move across wiki #254

Merged
merged 10 commits
Sep 25, 2019
Binary file added docs/images/AppendingPath.png
Binary file added docs/images/PathVariable.png
5 changes: 0 additions & 5 deletions docs/source/contributors/examples.rst

This file was deleted.

76 changes: 76 additions & 0 deletions docs/source/contributors/extending_fitbenchmarking.rst
@@ -0,0 +1,76 @@
.. _extending-fitbenchmarking:

Extending FitBenchmarking
=========================

.. _problem-groups:

Adding additional problem groups
--------------------------------

*This section describes how to add a problem group to the FitBenchmarking
software. The default problem groups that come with this software are,
at the time of writing, neutron, NIST, CUTEst and Muon.*

1. Add your problem file directory in
   ``fitbenchmarking/benchmark_problems/``. Examples of how this
   should look are available in the same directory.

2. Modify ``example_scripts/example_runScripts.py`` to run the new
   problem set in ``fitbenchmarking/benchmark_problems/``, as sketched
   below.
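
A minimal sketch of step 2, assuming ``example_runScripts.py`` iterates
over a list of problem group directories; the variable names and the
driver call are illustrative, not the actual script.

.. code-block:: python

   import os

   # Directory containing all benchmark problem groups.
   benchmark_probs_dir = os.path.join('fitbenchmarking', 'benchmark_problems')

   # Add the name of your new problem directory to this list
   # (existing names shown for illustration).
   problem_groups = ['Neutron_data', 'NIST', 'MyNewGroup']

   for group in problem_groups:
       group_dir = os.path.join(benchmark_probs_dir, group)
       # A hypothetical driver call would go here, e.g.:
       # run_benchmarking(group_dir)
       print('Would benchmark problems in', group_dir)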


.. _problem-types:

Adding additional fitting problem definition types
--------------------------------------------------

**Fitting problem definition types currently supported**

At the time of writing, the supported types (formats) are:

- Native (Fitbenchmark)
- NIST

An example of the native and NIST formats can be seen in
``benchmark_problems/Neutron_data/`` and ``benchmark_problems/NIST/``,
respectively.

**Adding new fitting problem definition types**

Follow these steps (a sketch of steps 1 and 3 is given after this
list):

1. Create ``parse_{type}_data.py``, which contains a child class of
   ``BaseFittingProblem`` (defined in ``parsing/base_fitting_problem.py``)
   that processes the type (format) and initialises the class with
   appropriate attributes (examples can be found in
   ``parse_{nist/fitbenchmark}_data.py``).
2. In ``parsing/parse.py``, alter the function
   ``determine_problem_type()`` so that it detects the new type.
3. In ``parsing/parse.py``, add a new if statement to
   ``parse_problem_file()`` that calls the user-defined class in
   ``parse_{type}_data.py``.
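
A minimal sketch of steps 1 and 3, assuming the ``BaseFittingProblem``
interface described above; the attributes set in ``__init__`` are
illustrative, so mirror those used in
``parse_{nist/fitbenchmark}_data.py``.

.. code-block:: python

   # parse_example_data.py -- a sketch for a hypothetical 'example' format.
   from fitbenchmarking.parsing.base_fitting_problem import BaseFittingProblem


   class ExampleParsedProblem(BaseFittingProblem):
       """Processes files of the hypothetical 'example' format and stores
       the parsed information as attributes (names assumed here)."""

       def __init__(self, fname):
           # Assumes the base class takes the problem file name.
           super(ExampleParsedProblem, self).__init__(fname)
           with open(fname) as problem_file:
               lines = problem_file.readlines()
           # Populate the attributes the rest of the code expects,
           # e.g. the problem name and the data to be fitted.
           self.name = lines[0].strip()

   # In parsing/parse.py, parse_problem_file() then gains a branch such as:
   #
   #     elif problem_type == 'example':
   #         parsed_problem = ExampleParsedProblem(prob_file)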

.. _fitting_software:

Adding additional fitting software
----------------------------------
*This section describes how to add additional software to benchmark
against the available problems. The steps below should be used as
orientation, as there is currently no straightforward way to add
software to FitBenchmarking.*

1. In the ``fitbenchmarking/fitbenchmarking/`` folder, add an extra
   ``elif`` branch for your software in the following functions (a
   dispatch sketch is given after this list):

   - ``fitbenchmarking_one_problem.py`` -> ``fit_one_function_def``
   - ``fitting/plotting/plots.py`` -> ``get_start_guess_data``
   - ``fitting/prerequisites.py`` -> ``prepare_software_prerequisites``

2. In the folder ``fitbenchmarking/fitbenchmarking/fitting/``, create a
   Python script that deals with the specifics of your algorithm.
   Examples are provided for the SciPy and Mantid fitting algorithms.

3. For additional support please see :ref:`getting-started`.
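
A self-contained sketch of the kind of ``elif`` branch step 1 asks for,
using ``fit_one_function_def`` as the example; the argument list and the
``fit_with_*`` helpers are placeholders, not the actual FitBenchmarking
API.

.. code-block:: python

   # Dispatch sketch; the fit_with_* helpers stand in for real backends.

   def fit_with_scipy(problem):
       return 'fitted {0} with scipy'.format(problem)


   def fit_with_mysoftware(problem):
       return 'fitted {0} with mysoftware'.format(problem)


   def fit_one_function_def(software, problem):
       if software == 'scipy':
           return fit_with_scipy(problem)
       elif software == 'mysoftware':  # the new branch for your software
           return fit_with_mysoftware(problem)
       else:
           raise NameError('Unsupported software: {0}'.format(software))


   print(fit_one_function_def('mysoftware', 'my_problem'))
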
5 changes: 0 additions & 5 deletions docs/source/contributors/getting_started.rst

This file was deleted.

4 changes: 2 additions & 2 deletions docs/source/contributors/index.rst
@@ -9,6 +9,6 @@ Here you will find all you need in order to get started.
:caption: Contents:

guidelines
getting_started
examples
extending_fitbenchmarking
logging

40 changes: 40 additions & 0 deletions docs/source/contributors/logging.rst
@@ -0,0 +1,40 @@
.. _logging:

Logging
=======

Logging is the process of tracking certain events that occur while the
software is running. FitBenchmarking uses the logging tool included in
the Python standard library. Logging calls can be included in different
parts of the code to indicate the occurrence of events of interest.
Such events are categorised into different levels depending on their
type. For instance, events that occur during the normal operation of a
program are of level “INFO”, while software errors are of level
“ERROR”. Logging is done in the form of a text message that can contain
the timestamp and/or level of each event.

The benefit of the logging API is that all modules in Python can
contribute to the logging. In other words, your software/application
log can include its own log messages as well as log messages from any
external or third-party modules.
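
A minimal example of the standard-library API described above (not
FitBenchmarking's actual logging configuration):

.. code-block:: python

   import logging

   # One logger per module; the name shows where each message came from.
   logger = logging.getLogger(__name__)
   logging.basicConfig(
       level=logging.INFO,
       format='%(asctime)s %(levelname)s %(name)s: %(message)s')

   logger.info('Starting the benchmarking run')   # normal operation: INFO
   try:
       1 / 0
   except ZeroDivisionError:
       logger.error('Fit failed', exc_info=True)  # software error: ERROR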

Python Logging Documentation and Tutorials
------------------------------------------

The URL below leads to the official Python documentation, which
contains detailed descriptions of the available logging commands, in
addition to useful tutorials on how to perform logging:

https://docs.python.org/3/library/logging.html#logrecord-attributes

Be aware of this
----------------

- Logging can increase the runtime of a program significantly. It is
  best to avoid using it in large or deeply nested loops.
- Log messages can be sent to multiple destinations. Some destinations
  are dedicated to the software developers, while others are for users.
  It is important to set the level for each destination, as some
  destinations might not require low-level log messages. More
  information on this can be found
  `here <https://docs.python.org/3/howto/logging-cookbook.html#logging-to-multiple-destinations>`__.
  A sketch of such a setup is given below.
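
A sketch of sending log messages to two destinations, with a verbose
file for developers and a quieter console for users:

.. code-block:: python

   import logging

   logger = logging.getLogger('fitbenchmarking')
   logger.setLevel(logging.DEBUG)

   # Developer destination: everything, including DEBUG, goes to a file.
   file_handler = logging.FileHandler('fitbenchmarking.log')
   file_handler.setLevel(logging.DEBUG)

   # User destination: only WARNING and above reach the console.
   console_handler = logging.StreamHandler()
   console_handler.setLevel(logging.WARNING)

   logger.addHandler(file_handler)
   logger.addHandler(console_handler)

   logger.debug('Written to the log file only')
   logger.warning('Written to both the file and the console')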
52 changes: 50 additions & 2 deletions docs/source/index.rst
@@ -8,9 +8,57 @@ Welcome to FitBenchmarking's documentation!

.. toctree::
:maxdepth: 1
:caption: Docs:


Users <users/index>
Contributors <contributors/index>

FitBenchmarking is an open source tool for comparing different
minimizers/fitting frameworks based on their accuracy and runtimes.

FitBenchmarking is cross-platform and should install and run on
Windows, Linux and Mac OS. At the time of writing, the tool does not
have instructions or code for running it on specialised hardware.

For questions, requests, etc., don’t hesitate to contact us at
fitbenchmarking.supp@gmail.com.

Content of this documentation
-----------------------------

Almost all of this documentation is for developers, with the exception
of the Getting Started page.

What the tool does
------------------

The tool creates one or more tables comparing the different minimizers
available in a fitting software package (e.g. SciPy or Mantid), based
on their accuracy and/or runtimes. An example of such a table is:

.. figure:: ../images/example_table.png
:alt: Example Table

This is the result of benchmarking Mantid on a set of neutron data.
The results are normalised with respect to the best minimizer per
problem. The problem names link to HTML pages that display plots of the
data and the fit that was performed, together with initial and final
values of the parameters. Here is an example of a final fit plot.

.. figure:: ../images/example_plot.png
:alt: Example Plot


Currently Benchmarking
----------------------

.. image:: https://avatars0.githubusercontent.com/u/671496?s=400&v=4
:alt: Mantid
:height: 100px
:target: http://www.mantidproject.org/Main_Page

.. image:: http://gracca.github.io/images/python-scipy.png
:alt: SciPy
:height: 100px
:target: https://www.scipy.org/

36 changes: 34 additions & 2 deletions docs/source/users/examples.rst
@@ -1,5 +1,37 @@
.. _examples:

Examples
========

This page is still in progress.
Feel free to add any useful documentation.
The file ``example_runScript.py`` can be found in the
``fitbenchmarking/example_scripts/`` directory. It was written to
provide potential users with an example of how FitBenchmarking is run.

To run this script with its basic settings, please follow the
:ref:`getting-started` page. This page gives an overview of how the
script works.

The benchmarking starts by considering two problem sets (neutron and
NIST). Each problem is fitted using all the available minimizers in
`Mantid <http://www.mantidproject.org/Main_Page>`__, a comprehensive
data analysis package. FitBenchmarking records the time it takes each
minimizer to solve each fitting problem. Additionally, the accuracy of
the solution is recorded by performing a
`chi-squared <https://en.wikipedia.org/wiki/Chi-squared_test>`__ test
on the fit, as sketched below. After running through all the problems,
accuracy and runtime tables are created for each problem set. In
essence, there will be two tables for neutron data and two for NIST
data. These tables are saved in the
``fitbenchmarking/example_scripts/results`` folder.
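
A sketch of the accuracy metric described above; the exact weighting
FitBenchmarking uses may differ:

.. code-block:: python

   import numpy as np

   def chi_squared(y_observed, y_fitted, errors):
       """Chi-squared measure of how well the fit matches the data."""
       residuals = (y_observed - y_fitted) / errors
       return np.sum(residuals ** 2)

   # Made-up data for illustration.
   y_obs = np.array([1.0, 2.1, 2.9])
   y_fit = np.array([1.1, 2.0, 3.0])
   err = np.array([0.1, 0.1, 0.1])
   print(chi_squared(y_obs, y_fit, err))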

The final result table for neutron looks like this:

.. figure:: ../../images/example_table.png
:alt: Result Table

Result Table

The ``example_runScript.py`` file is heavily commented. If you want to
learn more about how it works and how you can modify it, please consult
the file itself in a text editor.