
Adaptive: parallel active learning of mathematical functions.

adaptive is an open-source Python library designed to make adaptive parallel function evaluation simple. With adaptive you just supply a function with its bounds, and it will be evaluated at the “best” points in parameter space, rather than unnecessarily computing all points on a dense grid. With just a few lines of code you can evaluate functions on a computing cluster, live-plot the data as it returns, and fine-tune the adaptive sampling algorithm.

adaptive shines on computations where each evaluation of the function takes at least ≈100 ms, since picking potentially interesting points itself carries some overhead.

Run the adaptive example notebook live on Binder to see examples of how to use adaptive or visit the tutorial on Read the Docs.

Implemented algorithms

The core concept in adaptive is that of a learner. A learner samples a function at the best places in its parameter space to get maximum “information” about the function. As it evaluates the function at more and more points in the parameter space, it gets a better idea of where the best places are to sample next.

Of course, what qualifies as the “best places” will depend on your application domain! adaptive makes some reasonable default choices, but the details of the adaptive sampling are completely customizable.
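To make the idea concrete, here is a toy from-scratch sketch of adaptive sampling in 1D (a simplified illustration using only the standard library, not adaptive's actual implementation): repeatedly refine the interval with the largest "loss", here taken to be the change in the function across the interval plus the interval's width.

```python
def toy_learner_1d(f, lo, hi, n_points=20):
    """Toy 1D adaptive sampler: repeatedly bisect the interval
    with the largest 'loss' (change in f plus interval width)."""
    xs = [lo, hi]
    ys = [f(lo), f(hi)]
    while len(xs) < n_points:
        # Loss of each interval: how much f changes across it,
        # plus its width so that no region is starved forever.
        losses = [abs(ys[i + 1] - ys[i]) + (xs[i + 1] - xs[i])
                  for i in range(len(xs) - 1)]
        i = losses.index(max(losses))      # "best" interval to refine
        x_new = (xs[i] + xs[i + 1]) / 2    # bisect it
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys

# Sample a sharply peaked function; the sampler concentrates points
# where the function changes fastest instead of spacing them uniformly.
xs, ys = toy_learner_1d(lambda x: x + 1e-4 / (1e-4 + x**2), -1, 1)
```

adaptive's real learners use more refined loss functions (and let you supply your own), but the ask-for-the-best-point, evaluate, update loop is the same.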

The following learners are implemented:

  • Learner1D, for 1D functions f: ℝ → ℝ^N,
  • Learner2D, for 2D functions f: ℝ^2 → ℝ^N,
  • LearnerND, for ND functions f: ℝ^N → ℝ^M,
  • AverageLearner, for random variables where you want to average the result over many evaluations,
  • AverageLearner1D, for stochastic 1D functions where you want to estimate the mean value of the function at each point,
  • IntegratorLearner, for when you want to integrate a 1D function f: ℝ → ℝ.

Meta-learners (to be used with other learners):

  • BalancingLearner, for when you want to run several learners at once, selecting the “best” one each time you get more points,
  • DataSaver, for when your function doesn't just return a scalar or a vector.
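The DataSaver idea can be sketched in a few lines (a rough stdlib illustration of the concept, not adaptive's actual API): wrap a function that returns rich output so that the learner sees only the scalar part, while the full return value is stashed for later inspection.

```python
def make_data_saver(f, pick):
    """Wrap f so a learner would see only pick(f(x)); keep the full
    return value in `extra` for later inspection."""
    extra = {}
    def wrapped(x):
        result = f(x)
        extra[x] = result
        return pick(result)
    return wrapped, extra

def expensive(x):
    # Rich output: the value of interest plus some metadata.
    return {"y": x**2, "n_evals": 1}

wrapped, extra = make_data_saver(expensive, pick=lambda r: r["y"])
wrapped(3.0)  # a learner would receive only the scalar 9.0
```

In adaptive itself, DataSaver plays this role for any of the learners above.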

In addition to the learners, adaptive also provides primitives for running the sampling across several cores and even several machines, with built-in support for concurrent.futures, mpi4py, loky, ipyparallel and distributed.
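The pattern a runner follows can be sketched with the standard library alone (a simplified illustration, not adaptive's actual Runner): alternate between asking a learner-like object for a batch of points, evaluating them concurrently on an executor, and feeding the results back.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_in_parallel(ask, tell, f, n_iterations=5, batch_size=4):
    """Toy runner: ask for points, evaluate them concurrently,
    and tell the results back to the learner-like object."""
    with ThreadPoolExecutor(max_workers=batch_size) as executor:
        for _ in range(n_iterations):
            points = ask(batch_size)           # where to sample next
            results = executor.map(f, points)  # evaluate in parallel
            for x, y in zip(points, results):
                tell(x, y)                     # feed the data back

# Minimal "learner": propose random points and record the results.
data = {}
run_in_parallel(
    ask=lambda n: [random.uniform(-1, 1) for _ in range(n)],
    tell=data.__setitem__,
    f=lambda x: x**2,
)
```

adaptive's runners add goal checking, live plotting, and support for the distributed executors listed above, but the ask/evaluate/tell cycle is the same.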

Examples

Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook is as easy as

from adaptive import notebook_extension, Runner, Learner1D
notebook_extension()

def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)

learner = Learner1D(peak, bounds=(-1, 1))
runner = Runner(learner, goal=lambda l: l.loss() < 0.01)
runner.live_info()
runner.live_plot()

Installation

adaptive works with Python 3.7 and higher on Linux, Windows, or Mac, and provides optional extensions for working with the Jupyter/IPython Notebook.

The recommended way to install adaptive is using conda:

conda install -c conda-forge adaptive

adaptive is also available on PyPI:

pip install adaptive[notebook]

The [notebook] above will also install the optional dependencies for running adaptive inside a Jupyter notebook.

To use adaptive in JupyterLab, you need to install the following lab extensions:

jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz

Development

Clone the repository and run setup.py develop to add a link to the cloned repo to your Python path:

git clone git@github.com:python-adaptive/adaptive.git
cd adaptive
python3 setup.py develop

We highly recommend using a Conda environment or a virtualenv to manage the versions of your installed packages while working on adaptive.

To avoid polluting the history with the output of the notebooks, please set up the git filter by executing

python ipynb_filter.py

in the repository.

We also run several other checks to maintain a consistent code style, using pre-commit. To enable it, execute

pre-commit install

in the repository.

Citing

If you used Adaptive in a scientific work, please cite it as follows.

@misc{Nijholt2019,
  doi = {10.5281/zenodo.1182437},
  author = {Bas Nijholt and Joseph Weston and Jorn Hoofwijk and Anton Akhmerov},
  title = {\textit{Adaptive}: parallel active learning of mathematical functions},
  publisher = {Zenodo},
  year = {2019}
}

Credits

We would like to give credit to the following people:

  • Pedro Gonnet for his implementation of CQUAD, “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
  • Pauli Virtanen for his AdaptiveTriSampling script (no longer available online since SciPy Central went down), which served as inspiration for adaptive.Learner2D.

For general discussion, we have a Gitter chat channel. If you find any bugs or have any feature suggestions please file a GitHub issue or submit a pull request.