Merge branch 'doc' into 'master'
add documentation

Closes #91

See merge request qt/adaptive!120
basnijholt committed Oct 17, 2018
2 parents 3a69cd9 + eef11f0 commit 03badcd
Showing 38 changed files with 730 additions and 270 deletions.
100 changes: 0 additions & 100 deletions README.md

This file was deleted.

155 changes: 155 additions & 0 deletions README.rst
@@ -0,0 +1,155 @@
.. summary-start
.. _logo-adaptive:

|image0| adaptive
=================

|PyPI| |Conda| |Downloads| |pipeline status| |DOI| |Binder| |Join the
chat at https://gitter.im/python-adaptive/adaptive|

**Tools for adaptive parallel sampling of mathematical functions.**

``adaptive`` is an open-source Python library designed to
make adaptive parallel function evaluation simple. With ``adaptive`` you
just supply a function with its bounds, and it will be evaluated at the
“best” points in parameter space. With just a few lines of code you can
evaluate functions on a computing cluster, live-plot the data as it
returns, and fine-tune the adaptive sampling algorithm.

Check out the ``adaptive`` example notebook
`learner.ipynb <https://github.com/python-adaptive/adaptive/blob/master/learner.ipynb>`_ (or run it `live on
Binder <https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb>`_)
to see examples of how to use ``adaptive``.

.. summary-end
**WARNING: adaptive is still in a beta development stage**

.. implemented-algorithms-start
Implemented algorithms
----------------------

The core concept in ``adaptive`` is that of a *learner*. A *learner*
samples a function at the best places in its parameter space to get
maximum “information” about the function. As it evaluates the function
at more and more points in the parameter space, it gets a better idea of
where the best places are to sample next.

Of course, what qualifies as the “best places” will depend on your
application domain! ``adaptive`` makes some reasonable default choices,
but the details of the adaptive sampling are completely customizable.

The following learners are implemented:

- ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
- ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
- ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
- ``AverageLearner``, for stochastic functions where you want to
  average the result over many evaluations,
- ``IntegratorLearner``, for when you want to integrate
  a 1D function ``f: ℝ → ℝ``,
- ``BalancingLearner``, for when you want to run several learners at once,
  selecting the “best” one each time you get more points.

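To make the learner idea concrete, here is a deliberately tiny, self-contained sketch. It is only an illustration of the concept, not the actual ``adaptive`` algorithm (whose loss functions are more refined and fully customizable): keep sampling the midpoint of the interval across which the function changes the most.

```python
# Toy "learner": repeatedly sample the midpoint of the interval whose
# endpoint values differ the most. Illustrative only; adaptive's real
# Learner1D uses a more sophisticated (and customizable) loss.
def toy_learner_1d(f, bounds, n_points=20):
    xs = sorted(bounds)
    ys = {x: f(x) for x in xs}
    while len(xs) < n_points:
        # "Loss" per interval: how much f changes across it.
        losses = [abs(ys[b] - ys[a]) for a, b in zip(xs, xs[1:])]
        i = losses.index(max(losses))   # interval with the highest loss
        mid = (xs[i] + xs[i + 1]) / 2   # its midpoint is the "best" next point
        ys[mid] = f(mid)
        xs.insert(i + 1, mid)
    return xs, [ys[x] for x in xs]

# Points cluster where x**3 is steepest, i.e. near the ends of [-1, 1].
xs, ys = toy_learner_1d(lambda x: x**3, [-1.0, 1.0], n_points=9)
```

In the real library the loss function decides what “best” means, which is exactly the knob the previous paragraph says is customizable.
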
In addition to the learners, ``adaptive`` also provides primitives for
running the sampling across several cores and even several machines,
with built-in support for
`concurrent.futures <https://docs.python.org/3/library/concurrent.futures.html>`_,
`ipyparallel <https://ipyparallel.readthedocs.io/en/latest/>`_ and
`distributed <https://distributed.readthedocs.io/en/latest/>`_.

.. implemented-algorithms-end
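
As a rough sketch of that execution model (illustrative only, not the ``adaptive`` runner API): the runner farms function evaluations out to an executor, and ``concurrent.futures`` from the standard library is one of the supported back-ends. Evaluating a batch of points by hand looks like this:

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x ** 2

# Points a learner has asked for get evaluated in parallel; adaptive's
# runners manage this loop (and feed the results back to the learner).
points = [0.0, 0.5, 1.0, 1.5]
with ThreadPoolExecutor(max_workers=4) as executor:
    results = dict(zip(points, executor.map(f, points)))
```

Swapping in an `ipyparallel` or `distributed` client instead of the local executor is what lets the same sampling loop span several machines.
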
Examples
--------

.. raw:: html

<img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img> <img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img>


Installation
------------

``adaptive`` works with Python 3.6 and higher on Linux, Windows, or Mac,
and provides optional extensions for working with the Jupyter/IPython
Notebook.

The recommended way to install ``adaptive`` is using ``conda``:

.. code:: bash

   conda install -c conda-forge adaptive

``adaptive`` is also available on PyPI:

.. code:: bash

   pip install adaptive[notebook]

The ``[notebook]`` above will also install the optional dependencies for
running ``adaptive`` inside a Jupyter notebook.

Development
-----------

Clone the repository and run ``setup.py develop`` to add a link to the
cloned repo to your Python path:

.. code:: bash

   git clone git@github.com:python-adaptive/adaptive.git
   cd adaptive
   python3 setup.py develop

We highly recommend using a Conda environment or a virtualenv to manage
the versions of your installed packages while working on ``adaptive``.

To avoid polluting the history with the output of the notebooks,
please set up the git filter by executing

.. code:: bash

   python ipynb_filter.py

in the repository.

Credits
-------

We would like to credit the following people:

- Pedro Gonnet for his implementation of `CQUAD <https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html>`_,
  “Algorithm 4” as described in “Increasing the Reliability of Adaptive
  Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on
  Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his ``AdaptiveTriSampling`` script (no longer
  available online since SciPy Central went down), which served as
  inspiration for the ``Learner2D``.

For general discussion, we have a `Gitter chat
channel <https://gitter.im/python-adaptive/adaptive>`_. If you find any
bugs or have any feature suggestions please file a GitLab
`issue <https://gitlab.kwant-project.org/qt/adaptive/issues/new?issue>`_
or submit a `merge
request <https://gitlab.kwant-project.org/qt/adaptive/merge_requests>`_.

.. references-start
.. |image0| image:: https://gitlab.kwant-project.org/qt/adaptive/uploads/d20444093920a4a0499e165b5061d952/logo.png
.. |PyPI| image:: https://img.shields.io/pypi/v/adaptive.svg
:target: https://pypi.python.org/pypi/adaptive
.. |Conda| image:: https://anaconda.org/conda-forge/adaptive/badges/installer/conda.svg
:target: https://anaconda.org/conda-forge/adaptive
.. |Downloads| image:: https://anaconda.org/conda-forge/adaptive/badges/downloads.svg
:target: https://anaconda.org/conda-forge/adaptive
.. |pipeline status| image:: https://gitlab.kwant-project.org/qt/adaptive/badges/master/pipeline.svg
:target: https://gitlab.kwant-project.org/qt/adaptive/pipelines
.. |DOI| image:: https://zenodo.org/badge/113714660.svg
:target: https://zenodo.org/badge/latestdoi/113714660
.. |Binder| image:: https://mybinder.org/badge.svg
:target: https://mybinder.org/v2/gh/python-adaptive/adaptive/master?filepath=learner.ipynb
.. |Join the chat at https://gitter.im/python-adaptive/adaptive| image:: https://img.shields.io/gitter/room/nwjs/nw.js.svg
:target: https://gitter.im/python-adaptive/adaptive
.. references-end
8 changes: 4 additions & 4 deletions adaptive/__init__.py
@@ -8,15 +8,15 @@
from . import runner
from . import utils

from .learner import (Learner1D, Learner2D, LearnerND, AverageLearner,
BalancingLearner, make_datasaver, DataSaver,
IntegratorLearner)
from .learner import (BaseLearner, Learner1D, Learner2D, LearnerND,
AverageLearner, BalancingLearner, make_datasaver,
DataSaver, IntegratorLearner)

with suppress(ImportError):
# Only available if 'scikit-optimize' is installed
from .learner import SKOptLearner

from .runner import Runner, BlockingRunner
from .runner import Runner, AsyncRunner, BlockingRunner

from ._version import __version__
del _version
4 changes: 2 additions & 2 deletions adaptive/learner/average_learner.py
Expand Up @@ -17,9 +17,9 @@ class AverageLearner(BaseLearner):
Parameters
----------
atol : float
Desired absolute tolerance
Desired absolute tolerance.
rtol : float
Desired relative tolerance
Desired relative tolerance.
Attributes
----------
37 changes: 24 additions & 13 deletions adaptive/learner/balancing_learner.py
Expand Up @@ -21,27 +21,32 @@ class BalancingLearner(BaseLearner):
Parameters
----------
learners : sequence of BaseLearner
learners : sequence of `BaseLearner`
The learners from which to choose. These must all have the same type.
cdims : sequence of dicts, or (keys, iterable of values), optional
Constant dimensions; the parameters that label the learners. Used
in `plot`.
Example inputs that all give identical results:
- sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
... {'A': True, 'B': 1},
... {'A': False, 'B': 0},
... {'A': False, 'B': 1}]`
- tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
... (False, 0), (False, 1)])
strategy : 'loss_improvements' (default), 'loss', or 'npoints'
The points that the 'BalancingLearner' choses can be either based on:
The points that the `BalancingLearner` chooses can be either based on:
the best 'loss_improvements', the smallest total 'loss' of the
child learners, or the number of points per learner, using 'npoints'.
One can dynamically change the strategy while the simulation is
running by changing the 'learner.strategy' attribute.
running by changing the ``learner.strategy`` attribute.
Notes
-----
@@ -50,7 +55,7 @@ class BalancingLearner(BaseLearner):
compared*. For the moment we enforce this restriction by requiring that
all learners are the same type but (depending on the internals of the
learner) it may be that the loss cannot be compared *even between learners
of the same type*. In this case the BalancingLearner will behave in an
of the same type*. In this case the `BalancingLearner` will behave in an
undefined way.
"""

@@ -183,28 +188,34 @@ def plot(self, cdims=None, plotter=None, dynamic=True):
cdims : sequence of dicts, or (keys, iterable of values), optional
Constant dimensions; the parameters that label the learners.
Example inputs that all give identical results:
- sequence of dicts:
>>> cdims = [{'A': True, 'B': 0},
... {'A': True, 'B': 1},
... {'A': False, 'B': 0},
... {'A': False, 'B': 1}]`
- tuple with (keys, iterable of values):
>>> cdims = (['A', 'B'], itertools.product([True, False], [0, 1]))
>>> cdims = (['A', 'B'], [(True, 0), (True, 1),
... (False, 0), (False, 1)])
plotter : callable, optional
A function that takes the learner as an argument and returns a
holoviews object. By default learner.plot() will be called.
holoviews object. By default ``learner.plot()`` will be called.
dynamic : bool, default True
Return a holoviews.DynamicMap if True, else a holoviews.HoloMap.
The DynamicMap is rendered as the sliders change and can therefore
not be exported to html. The HoloMap does not have this problem.
Return a `holoviews.core.DynamicMap` if True, else a
`holoviews.core.HoloMap`. The `~holoviews.core.DynamicMap` is
rendered as the sliders change and can therefore not be exported
to html. The `~holoviews.core.HoloMap` does not have this problem.
Returns
-------
dm : holoviews.DynamicMap object (default) or holoviews.HoloMap object
A DynamicMap (dynamic=True) or HoloMap (dynamic=False) with
sliders that are defined by 'cdims'.
dm : `holoviews.core.DynamicMap` (default) or `holoviews.core.HoloMap`
A `DynamicMap` (dynamic=True) or `HoloMap` (dynamic=False) with
sliders that are defined by `cdims`.
"""
hv = ensure_holoviews()
cdims = cdims or self._cdims_default
@@ -248,13 +259,13 @@ def remove_unfinished(self):
def from_product(cls, f, learner_type, learner_kwargs, combos):
"""Create a `BalancingLearner` with learners of all combinations of
named variables’ values. The `cdims` will be set correctly, so calling
`learner.plot` will be a `holoviews.HoloMap` with the correct labels.
`learner.plot` will be a `holoviews.core.HoloMap` with the correct labels.
Parameters
----------
f : callable
Function to learn, must take arguments provided in `combos`.
learner_type : BaseLearner
learner_type : `BaseLearner`
The learner that should wrap the function. For example `Learner1D`.
learner_kwargs : dict
Keyword argument for the `learner_type`. For example `dict(bounds=[0, 1])`.
