Tutorial and docs updated to .rst
The docs and tutorial have been updated to include the AverageLearner1D. The previous tutorial (Python notebook) has been replaced by a new tutorial in .rst format.
AlvaroGI authored and basnijholt committed Mar 23, 2021
1 parent cb48f6c commit 59454dc
Showing 8 changed files with 234 additions and 219 deletions.
4 changes: 3 additions & 1 deletion README.rst
@@ -44,8 +44,10 @@ The following learners are implemented:
 - ``Learner1D``, for 1D functions ``f: ℝ → ℝ^N``,
 - ``Learner2D``, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - ``LearnerND``, for ND functions ``f: ℝ^N → ℝ^M``,
-- ``AverageLearner``, For stochastic functions where you want to
+- ``AverageLearner``, for random variables where you want to
   average the result over many evaluations,
+- ``AverageLearner1D``, for stochastic 1D functions where you want to
+  estimate the mean value of the function at each point,
 - ``IntegratorLearner``, for
   when you want to integrate a 1D function ``f: ℝ → ℝ``,
 - ``BalancingLearner``, for when you want to run several learners at once,
4 changes: 3 additions & 1 deletion docs/source/docs.rst
@@ -16,8 +16,10 @@ The following learners are implemented:
 - `~adaptive.Learner1D`, for 1D functions ``f: ℝ → ℝ^N``,
 - `~adaptive.Learner2D`, for 2D functions ``f: ℝ^2 → ℝ^N``,
 - `~adaptive.LearnerND`, for ND functions ``f: ℝ^N → ℝ^M``,
-- `~adaptive.AverageLearner`, For stochastic functions where you want to
+- `~adaptive.AverageLearner`, for random variables where you want to
   average the result over many evaluations,
+- `~adaptive.AverageLearner1D`, for stochastic 1D functions where you want to
+  estimate the mean value of the function at each point,
 - `~adaptive.IntegratorLearner`, for
   when you want to integrate a 1D function ``f: ℝ → ℝ``.

7 changes: 7 additions & 0 deletions docs/source/reference/adaptive.learner.average_learner1D.rst
@@ -0,0 +1,7 @@
adaptive.AverageLearner1D
=========================

.. autoclass:: adaptive.AverageLearner1D
:members:
:undoc-members:
:show-inheritance:
1 change: 1 addition & 0 deletions docs/source/reference/adaptive.rst
@@ -7,6 +7,7 @@ Learners
 .. toctree::

    adaptive.learner.average_learner
+   adaptive.learner.average_learner1D
    adaptive.learner.base_learner
    adaptive.learner.balancing_learner
    adaptive.learner.data_saver
116 changes: 116 additions & 0 deletions docs/source/tutorial/tutorial.AverageLearner1D.rst
@@ -0,0 +1,116 @@
Tutorial `~adaptive.AverageLearner1D`
--------------------------------------

.. note::
    Because this documentation consists of static HTML, the ``live_plot``
    and ``live_info`` widgets are not live. Download the notebook
    in order to see the real behaviour.

.. seealso::
The complete source code of this tutorial can be found in
:jupyter-download:notebook:`tutorial.AverageLearner1D`

.. jupyter-execute::
:hide-code:

import adaptive
adaptive.notebook_extension()
%config InlineBackend.figure_formats=set(['svg'])

import numpy as np
from functools import partial
import random

General use
..........................

First, we define the (noisy) function to be sampled. Note that the parameter
``sigma`` corresponds to the standard deviation of the Gaussian noise.

.. jupyter-execute::

    def f(x, sigma=0, peak_width=0.05, offset=-0.5, wait=False):
        from time import sleep
        from random import random

        if wait:
            sleep(random())  # optionally simulate a slow evaluation

        # Cubic background plus a Lorentzian-shaped peak centered at `offset`.
        function = x ** 3 - x + 3 * peak_width ** 2 / (peak_width ** 2 + (x - offset) ** 2)
        return function + np.random.normal(0, sigma)  # add Gaussian noise
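
In formula form, this is a cubic plus a Lorentzian-shaped peak, with additive
Gaussian noise (here :math:`w` is ``peak_width`` and :math:`x_0` is
``offset``):

.. math::

   f(x) = x^3 - x + \frac{3w^2}{w^2 + (x - x_0)^2} + \varepsilon,
   \qquad \varepsilon \sim \mathcal{N}(0, \sigma^2).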

This is how the function looks in the absence of noise:

.. jupyter-execute::

import matplotlib.pyplot as plt
x = np.linspace(-2,2,500)
plt.plot(x, f(x, sigma=0));

This is how a single realization of the noisy function looks:

.. jupyter-execute::

plt.plot(x, [f(xi, sigma=1) for xi in x]);

To obtain an estimate of the mean value of the function at each point ``x``, we
take many samples at ``x`` and calculate the sample mean. The learner will
autonomously determine whether the next samples should be taken at an old
point (to improve the estimate of the mean at that point) or at a new one.
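
To make this concrete, here is the estimate done by hand at a single point: a
plain ``numpy`` sketch of the bookkeeping that the learner automates for every
point it samples:

.. jupyter-execute::

    samples = [f(0.5, sigma=1) for _ in range(200)]  # 200 noisy samples at x = 0.5
    mean = np.mean(samples)
    # Standard error of the mean: the uncertainty of the estimate itself.
    sem = np.std(samples, ddof=1) / np.sqrt(len(samples))
    print(f"mean at x = 0.5: {mean:.3f} ± {sem:.3f}")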

We start by initializing a 1D average learner:

.. jupyter-execute::

learner = adaptive.AverageLearner1D(
function=partial(f, sigma=1),
bounds=(-2,2))

As with other learners, we need to initialize a runner with a goal in order to
run our learner. In this case, we set 10000 samples as the goal (the second
condition ensures that we have at least 20 samples at each point):

.. jupyter-execute::

    runner = adaptive.Runner(
        learner,
        goal=lambda l: l.total_samples >= 10000
        and min(l._number_samples.values()) >= 20,
    )
runner.live_info()
runner.live_plot(update_interval=0.1)

Fine-tuning
..........................

In some cases, the default configuration of the 1D average learner can be
suboptimal, and one may want to tune the internal parameters of the learner
(see the sketch after this list). The most relevant are:

- ``loss_per_interval``: the loss function (see `~adaptive.Learner1D`).
- ``delta``: the most relevant parameter; it controls the balance between resampling existing points (exploitation) and sampling new ones (exploration). Its value should lie between 0 and 1 (the default is 0.2). Large values favor exploration, although they can lead the learner to sample noise, while small values favor exploitation, making the learner thoroughly resample existing points. In general, the optimal value of ``delta`` lies between 0.1 and 0.4.
- ``neighbor_sampling``: each new point is initially sampled a fraction ``neighbor_sampling`` of the number of samples of its nearest neighbor. We recommend keeping ``neighbor_sampling`` below 1 to prevent oversampling.
- ``min_samples``: the minimum number of samples initially taken at a new point. This parameter can prevent the learner from sampling noise if we accidentally set too large a value of ``delta``.
- ``max_samples``: the maximum number of samples at each point. Once a point has been sampled ``max_samples`` times, it will not be sampled again. This prevents exploitation from drastically dominating exploration if we set too small a ``delta``.
- ``min_error``: the minimum uncertainty at each point (this uncertainty corresponds to the standard deviation in the estimate of the mean). Like ``max_samples``, this parameter prevents exploitation from drastically dominating exploration.
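
As a minimal sketch of such hand-tuning (assuming these parameters can also be
passed as keyword arguments at construction, mirroring the attribute names used
below), one could write:

.. jupyter-execute::

    tuned_learner = adaptive.AverageLearner1D(
        function=partial(f, sigma=1),
        bounds=(-2, 2),
        delta=0.2,              # exploration/exploitation balance
        min_samples=20,         # initial number of samples at a new point
        neighbor_sampling=0.3,  # fraction of the nearest neighbor's sample count
    )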

As an example, suppose we want to resample the points from the previous
learner. We can decrease ``delta`` to 0.1 and set ``min_error`` to 0.05 if we
do not require accuracy beyond this value:

.. jupyter-execute::

learner.delta = 0.1
learner.min_error = 0.05

    runner = adaptive.Runner(
        learner,
        goal=lambda l: l.total_samples >= 20000
        and min(l._number_samples.values()) >= 20,
    )
runner.live_info()
runner.live_plot(update_interval=0.1)

Conversely, if we want to push exploration further, we can set a larger
``delta`` and limit the maximum number of samples taken at each point:

.. jupyter-execute::

learner.delta = 0.3
learner.max_samples = 1000

    runner = adaptive.Runner(
        learner,
        goal=lambda l: l.total_samples >= 25000
        and min(l._number_samples.values()) >= 20,
    )
runner.live_info()
runner.live_plot(update_interval=0.1)
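
Once the runner is done, the estimated means can be read back from the
learner. A short sketch, assuming `~adaptive.AverageLearner1D` follows the
`~adaptive.Learner1D` convention of exposing its estimates through
``learner.data``:

.. jupyter-execute::

    xs, means = zip(*sorted(learner.data.items()))  # point -> estimated mean
    plt.plot(xs, means);
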
1 change: 1 addition & 0 deletions docs/source/tutorial/tutorial.rst
@@ -41,6 +41,7 @@ We recommend to start with the :ref:`Tutorial `~adaptive.Learner1D``.
    tutorial.Learner2D
    tutorial.custom_loss
    tutorial.AverageLearner
+   tutorial.AverageLearner1D
    tutorial.BalancingLearner
    tutorial.DataSaver
    tutorial.IntegratorLearner
103 changes: 103 additions & 0 deletions example-notebook.ipynb
@@ -290,6 +290,109 @@
"runner.live_plot(update_interval=0.1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Average 1D learner"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"[`adaptive`](https://github.com/python-adaptive/adaptive) can also be used to sample noisy functions. The `AverageLearner1D` estimates the mean value of a 1D stochastic function by taking many samples at different points and estimating the mean value at those points.\n",
"\n",
"Let us consider the following function:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def f(x, sigma=0, peak_width=0.05, offset=-0.5, wait=False):\n",
" from time import sleep\n",
" from random import random\n",
"\n",
" if wait:\n",
" sleep(random())\n",
"\n",
" function = x ** 3 - x + 3 * peak_width ** 2 / (peak_width ** 2 + (x - offset) ** 2)\n",
" return function + np.random.normal(0, sigma)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is how the function looks in the absence of noise:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import matplotlib.pyplot as plt\n",
"x = np.linspace(-2,2,500)\n",
"plt.plot(x, f(x, sigma=0));"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"This is how a single realization of the stochastic function looks:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"plt.plot(x, [f(xi, sigma=1) for xi in x]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `AverageLearner1D` can be run in a similar way to the `Learner1D`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"learner = adaptive.AverageLearner1D(function=partial(f, sigma=1), bounds=(-2,2))\n",
"\n",
"runner = adaptive.Runner(learner, goal=lambda l: l.total_samples >= 10000 \n",
" and min(l._number_samples.values()) >= 20)\n",
"runner.live_info()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The live plot shows the mean value of the function at each point and errorbars that correspond to the standard deviation on the estimate of the mean value:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"runner.live_plot(update_interval=0.1)"
]
},
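{
"cell_type": "markdown",
"metadata": {},
"source": [
"The final estimate can also be drawn as a static plot (a sketch, assuming `AverageLearner1D` provides the usual `plot` method of the 1D learners):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Static snapshot of the estimated means (sketch; assumes `plot` exists).\n",
"learner.plot()"
]
},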
{
"cell_type": "markdown",
"metadata": {},