
Commit

Rewrite parts of README, reorder sections, and add features section (#400)

* Update README

* Fix emojis

* rename tags

* fix emojis

* Fix

* add Tutorial emoji

* Add extra text to examples

* no zoom on scroll
basnijholt committed Apr 29, 2023
1 parent 429c4bd commit 0936a93
Showing 14 changed files with 185 additions and 80 deletions.
10 changes: 10 additions & 0 deletions .github/workflows/toc.yaml
@@ -0,0 +1,10 @@
on: push
name: TOC Generator
jobs:
  generateTOC:
    name: TOC Generator
    runs-on: ubuntu-latest
    steps:
      - uses: technote-space/toc-generator@v4
        with:
          TOC_TITLE: ""
2 changes: 1 addition & 1 deletion AUTHORS.md
@@ -1,4 +1,4 @@
## 👥 Authors

The current maintainers of Adaptive are:

2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -1,4 +1,4 @@
# 🗞️ Changelog

## [v0.15.0](https://github.com/python-adaptive/adaptive/tree/v0.15.0) (2022-11-30)

146 changes: 99 additions & 47 deletions README.md
@@ -1,6 +1,6 @@
# ![logo](https://adaptive.readthedocs.io/en/latest/_static/logo.png) *Adaptive*: Parallel Active Learning of Mathematical Functions :brain::1234:

<!-- badges-start -->

[![Binder](https://mybinder.org/badge.svg)](https://mybinder.org/v2/gh/python-adaptive/adaptive/main?filepath=example-notebook.ipynb)
[![Conda](https://img.shields.io/badge/install%20with-conda-green.svg)](https://anaconda.org/conda-forge/adaptive)
@@ -13,56 +13,57 @@
[![Pipeline-status](https://dev.azure.com/python-adaptive/adaptive/_apis/build/status/python-adaptive.adaptive?branchName=main)](https://dev.azure.com/python-adaptive/adaptive/_build/latest?definitionId=6?branchName=main)
[![PyPI](https://img.shields.io/pypi/v/adaptive.svg)](https://pypi.python.org/pypi/adaptive)

<!-- badges-end -->

<!-- summary-start -->

Adaptive is an open-source Python library that streamlines adaptive parallel function evaluations.
Rather than calculating all points on a dense grid, it intelligently selects the "best" points in the parameter space based on your provided function and bounds.
With minimal code, you can perform evaluations on a computing cluster, display live plots, and fine-tune the adaptive sampling algorithm.

Adaptive is most efficient for computations where each function evaluation takes at least ≈50 ms, due to the overhead of selecting potentially interesting points.

To see Adaptive in action, try the [example notebook on Binder](https://mybinder.org/v2/gh/python-adaptive/adaptive/main?filepath=example-notebook.ipynb) or explore the [tutorial on Read the Docs](https://adaptive.readthedocs.io/en/latest/tutorial/tutorial.html).

<!-- summary-end -->

<details><summary><b><u>[ToC]</u></b> 📚</summary>

<!-- START doctoc generated TOC please keep comment here to allow auto update -->
<!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->

- [:star: Key features](#star-key-features)
- [:rocket: Example usage](#rocket-example-usage)
- [:floppy_disk: Exporting Data](#floppy_disk-exporting-data)
- [:test_tube: Implemented Algorithms](#test_tube-implemented-algorithms)
- [:package: Installation](#package-installation)
- [:wrench: Development](#wrench-development)
- [:books: Citing](#books-citing)
- [:page_facing_up: Draft Paper](#page_facing_up-draft-paper)
- [:sparkles: Credits](#sparkles-credits)

<!-- END doctoc generated TOC please keep comment here to allow auto update -->

</details>

<!-- key-features-start -->

## :star: Key features

- 🎯 **Intelligent Adaptive Sampling**: Adaptive focuses on areas of interest within a function, ensuring better results with fewer evaluations, saving time and computational resources.
- ⚡ **Parallel Execution**: The library leverages parallel processing for faster function evaluations, making optimal use of available computational resources.
- 📊 **Live Plotting and Info Widgets**: When working in Jupyter notebooks, Adaptive offers real-time visualization of the learning process, making it easier to monitor progress and identify areas of improvement.
- 🔧 **Customizable Loss Functions**: Adaptive supports various loss functions and allows customization, enabling users to tailor the learning process to their specific needs.
- 📈 **Support for Multidimensional Functions**: The library can handle functions with scalar or vector outputs in one or multiple dimensions, providing flexibility for a wide range of problems.
- 🧩 **Seamless Integration**: Adaptive offers a simple and intuitive interface, making it easy to integrate with existing Python projects and workflows.
- 💾 **Flexible Data Export**: The library provides options to export learned data as NumPy arrays or Pandas DataFrames, ensuring compatibility with various data processing tools.
- 🌐 **Open-Source and Community-Driven**: Adaptive is an open-source project that encourages community contributions to continuously improve and expand the library's features and capabilities.

<!-- key-features-end -->

## :rocket: Example usage

Adaptively learning a 1D function and live-plotting the process in a Jupyter notebook:

```python
from adaptive import notebook_extension, Runner, Learner1D
# … (remaining lines of the example are collapsed in the diff view)
runner.live_plot()
```

<img src="https://user-images.githubusercontent.com/6897215/38739170-6ac7c014-3f34-11e8-9e8f-93b3a3a3d61b.gif" width='20%'> </img> <img src="https://user-images.githubusercontent.com/6897215/35219611-ac8b2122-ff73-11e7-9332-adffab64a8ce.gif" width='40%'> </img> <img src="https://user-images.githubusercontent.com/6897215/47256441-d6d53700-d480-11e8-8224-d1cc49dbdcf5.gif" width='20%'> </img>

### :floppy_disk: Exporting Data

You can export the learned data as a NumPy array:

```python
data = learner.to_numpy()
```

If you have Pandas installed, you can also export the data as a DataFrame:

```python
df = learner.to_dataframe()
```
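For instance, the exported array can be sliced into coordinates and values for further processing. This is a sketch with made-up data standing in for `learner.to_numpy()` output; NumPy is assumed to be installed:

```python
import numpy as np

# hypothetical learned data, shaped like the (x, f(x)) rows of learner.to_numpy()
data = np.array([[0.0, 1.0], [0.5, 1.25], [1.0, 2.0]])

xs, ys = data[:, 0], data[:, 1]  # split into coordinates and values
best = xs[np.argmax(ys)]  # x at which the learned value is largest
```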

<!-- implemented-algorithms-start -->

## :test_tube: Implemented Algorithms

The core concept in `adaptive` is the *learner*.
A *learner* samples a function at the most interesting locations within its parameter space, allowing for optimal sampling of the function.
As the function is evaluated at more points, the learner improves its understanding of the best locations to sample next.

The definition of the "best locations" depends on your application domain.
While `adaptive` provides sensible default choices, the adaptive sampling process can be fully customized.
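As an illustration of the idea, a simple 1D strategy is to repeatedly bisect the interval whose "loss" is largest, which naturally concentrates points where the function varies most. This is a toy sketch, not Adaptive's actual algorithm or API:

```python
def sample_adaptively(f, bounds, npoints=50):
    """Toy loss-based sampler: bisect the interval with the largest loss,
    here taken to be the Euclidean length of the graph segment."""
    xs = [bounds[0], bounds[1]]
    ys = [f(x) for x in xs]
    while len(xs) < npoints:
        losses = [
            ((xs[i + 1] - xs[i]) ** 2 + (ys[i + 1] - ys[i]) ** 2) ** 0.5
            for i in range(len(xs) - 1)
        ]
        i = losses.index(max(losses))  # the most "interesting" interval
        x_new = (xs[i] + xs[i + 1]) / 2  # bisect it
        xs.insert(i + 1, x_new)
        ys.insert(i + 1, f(x_new))
    return xs, ys


# a function with a sharp peak near x = 0
def peak(x, a=0.01):
    return x + a**2 / (a**2 + x**2)


xs, ys = sample_adaptively(peak, (-1, 1))
```

Running this concentrates many of the 50 samples near the peak at `x = 0`, whereas a uniform grid would spend most evaluations on the flat regions.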

The following learners are implemented:

<!-- implemented-algorithms-end -->

- `Learner1D`: for 1D functions `f: ℝ → ℝ^N`,
- `Learner2D`: for 2D functions `f: ℝ^2 → ℝ^N`,
- `LearnerND`: for ND functions `f: ℝ^N → ℝ^M`,
- `AverageLearner`: for random variables, allowing averaging of results over multiple evaluations,
- `AverageLearner1D`: for stochastic 1D functions, estimating the mean value at each point,
- `IntegratorLearner`: for integrating a 1D function `f: ℝ → ℝ`,
- `BalancingLearner`: for running multiple learners simultaneously and selecting the "best" one as more points are gathered.

Meta-learners (to be used with other learners):

- `BalancingLearner`: for running several learners at once, selecting the best one each time you get more points,
- `DataSaver`: for when your function doesn't return just a scalar or a vector.

In addition to learners, `adaptive` offers primitives for parallel sampling across multiple cores or machines, with built-in support for:
[concurrent.futures](https://docs.python.org/3/library/concurrent.futures.html),
[mpi4py](https://mpi4py.readthedocs.io/en/stable/mpi4py.futures.html),
[loky](https://loky.readthedocs.io/en/stable/),
[ipyparallel](https://ipyparallel.readthedocs.io/en/latest/), and
[distributed](https://distributed.readthedocs.io/en/latest/).
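The executor interface these libraries share can be sketched with the standard library alone. This toy stand-in (not Adaptive's `Runner`) evaluates a fixed batch of points in parallel:

```python
from concurrent.futures import ThreadPoolExecutor


def evaluate(x):
    # stand-in for an expensive function evaluation
    return x * x


candidate_points = [i / 4 for i in range(8)]
with ThreadPoolExecutor(max_workers=4) as pool:
    # map() farms the evaluations out to the pool and preserves order
    values = list(pool.map(evaluate, candidate_points))
```

In practice you would hand an executor like this (or an mpi4py/ipyparallel/dask equivalent) to Adaptive's runner, which submits the learner's suggested points rather than a fixed list.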

<!-- rest-start -->

## :package: Installation

`adaptive` works with Python 3.7 and higher on Linux, Windows, or macOS, and provides optional extensions for working with the Jupyter/IPython Notebook.

@@ -109,7 +159,7 @@

```bash
jupyter labextension install @jupyter-widgets/jupyterlab-manager
jupyter labextension install @pyviz/jupyterlab_pyviz
```

## Development
## :wrench: Development

Clone the repository and run `pip install -e ".[notebook,testing,other]"` to install `adaptive` in editable mode, linking the cloned repository into your Python path:

@@ -119,25 +169,25 @@

```bash
cd adaptive
pip install -e ".[notebook,testing,other]"
```

We recommend using a Conda environment or a virtualenv for package management during Adaptive development.

To avoid polluting the history with notebook output, set up the git filter by running:

```bash
python ipynb_filter.py
```


To maintain consistent code style, we use [pre-commit](https://pre-commit.com). Install it by running:

```bash
pre-commit install
```


## :books: Citing

If you used Adaptive in a scientific work, please cite it as follows.

@@ -151,17 +201,19 @@

```bibtex
% … (entry fields collapsed in the diff view)
}
```

## :page_facing_up: Draft Paper

If you're interested in the scientific background and principles behind Adaptive, we recommend taking a look at the [draft paper](https://github.com/python-adaptive/paper) that is currently being written.
This paper provides a comprehensive overview of the concepts, algorithms, and applications of the Adaptive library.

## :sparkles: Credits

We would like to give credits to the following people:

- Pedro Gonnet for his implementation of [CQUAD](https://www.gnu.org/software/gsl/manual/html_node/CQUAD-doubly_002dadaptive-integration.html), “Algorithm 4” as described in “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, P. Gonnet, ACM Transactions on Mathematical Software, 37 (3), art. no. 26, 2010.
- Pauli Virtanen for his `AdaptiveTriSampling` script (no longer available online since SciPy Central went down) which served as inspiration for the `adaptive.Learner2D`.

<!-- credits-end -->


<!-- rest-end -->

For general discussion, we have a [Gitter chat channel](https://gitter.im/python-adaptive/adaptive).
If you find any bugs or have any feature suggestions please file a GitHub [issue](https://github.com/python-adaptive/adaptive/issues/new) or submit a [pull request](https://github.com/python-adaptive/adaptive/pulls).
1 change: 1 addition & 0 deletions docs/environment.yml
@@ -24,3 +24,4 @@ dependencies:
- furo=2023.3.27
- myst-parser=0.18.1
- dask=2023.3.2
- emoji=2.2.0
Binary file modified docs/source/_static/logo_docs.png
33 changes: 26 additions & 7 deletions docs/source/algorithms_and_examples.md
@@ -10,10 +10,10 @@ kernelspec:
name: python3
---

```{include} ../README.md
---
start-after: <!-- implemented-algorithms-start -->
end-before: <!-- implemented-algorithms-end -->
---
```

@@ -37,7 +37,7 @@ In addition to the learners, `adaptive` also provides primitives for running the
[ipyparallel](https://ipyparallel.readthedocs.io/en/latest/), and
[distributed](https://distributed.readthedocs.io/en/latest/).

# 💡 Examples

Here are some examples of how Adaptive samples vs. homogeneous sampling.
Click on the *Play* {fa}`play` button or move the sliders.
@@ -59,6 +59,9 @@ hv.output(holomap="scrubber")

## {class}`adaptive.Learner1D`

The `Learner1D` class is designed for adaptively learning 1D functions of the form `f: ℝ → ℝ^N`. It focuses on sampling points where the function is less well understood to improve the overall approximation.
This learner is well-suited for functions with localized features or varying degrees of complexity across the domain.

Adaptively learning a 1D function (the plot below) and live-plotting the process in a Jupyter notebook is as easy as

```python
@@ -86,6 +89,11 @@ runner.live_plot()
```{code-cell} ipython3
:tags: [hide-input]
from bokeh.models import WheelZoomTool
wheel_zoom = WheelZoomTool(zoom_on_axis=False)
def f(x, offset=0.07357338543088588):
a = 0.01
return x + a**2 / (a**2 + (x - offset) ** 2)
@@ -115,11 +123,14 @@ def get_hm(loss_per_interval, N=101):
plot_homo = get_hm(uniform_loss).relabel("homogeneous sampling")
plot_adaptive = get_hm(default_loss).relabel("with adaptive")
layout = plot_homo + plot_adaptive
layout.opts(hv.opts.Scatter(active_tools=["box_zoom", wheel_zoom]))
```

## {class}`adaptive.Learner2D`

The `Learner2D` class is tailored for adaptively learning 2D functions of the form `f: ℝ^2 → ℝ^N`. Similar to `Learner1D`, it concentrates on sampling points with higher uncertainty to provide a better approximation.
This learner is ideal for functions with complex features or varying behavior across a 2D domain.

```{code-cell} ipython3
:tags: [hide-input]
@@ -147,11 +158,15 @@ def plot_compare(learner, npoints):
learner = adaptive.Learner2D(ring, bounds=[(-1, 1), (-1, 1)])
plots = {n: plot_compare(learner, n) for n in range(4, 1010, 20)}
plot = hv.HoloMap(plots, kdims=["npoints"]).collate()
plot.opts(hv.opts.Image(active_tools=[wheel_zoom]))
```

## {class}`adaptive.AverageLearner`

The `AverageLearner` class is designed for situations where you want to average the result of a function over multiple evaluations.
This is particularly useful when working with random variables or stochastic functions, as it helps to estimate the mean value of the function.
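The underlying idea can be sketched without Adaptive (a hypothetical toy, not the `AverageLearner` API): sample a noisy function repeatedly and average the results, so the estimate of the mean improves as evaluations accumulate.

```python
import random


def noisy_measurement(seed):
    # stochastic "function of a seed" whose true mean is 1.0
    rng = random.Random(seed)
    return 1.0 + rng.gauss(0, 0.1)


samples = [noisy_measurement(seed) for seed in range(1000)]
mean = sum(samples) / len(samples)  # converges to 1.0 as more seeds are drawn
```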

```{code-cell} ipython3
:tags: [hide-input]
@@ -172,11 +187,15 @@ def plot_avg(learner, npoints):
plots = {n: plot_avg(learner, n) for n in range(10, 10000, 200)}
hm = hv.HoloMap(plots, kdims=["npoints"])
hm.opts(hv.opts.Histogram(active_tools=[wheel_zoom]))
```

## {class}`adaptive.LearnerND`

The `LearnerND` class is intended for adaptively learning ND functions of the form `f: ℝ^N → ℝ^M`.
It extends the adaptive learning capabilities of the 1D and 2D learners to functions with more dimensions, allowing for efficient exploration of complex, high-dimensional spaces.

```{code-cell} ipython3
:tags: [hide-input]
32 changes: 27 additions & 5 deletions docs/source/conf.py
@@ -2,15 +2,19 @@

import os
import sys
from pathlib import Path

package_path = Path("../..").resolve()
# Insert into sys.path so that we can import adaptive here
sys.path.insert(0, str(package_path))
# Insert into PYTHONPATH so that jupyter-sphinx will pick it up
os.environ["PYTHONPATH"] = ":".join(
    (str(package_path), os.environ.get("PYTHONPATH", "")),
)
# Insert `docs/` such that we can run the logo scripts
docs_path = Path("..").resolve()
sys.path.insert(1, str(docs_path))


import adaptive # noqa: E402, isort:skip

@@ -79,5 +83,23 @@
nb_execution_raise_on_error = True


def replace_named_emojis(input_file: Path, output_file: Path) -> None:
    """Replace named emojis in a file with unicode emojis."""
    import emoji

    with input_file.open("r") as infile:
        content = infile.read()
    content_with_emojis = emoji.emojize(content, language="alias")

    with output_file.open("w") as outfile:
        outfile.write(content_with_emojis)


# Call the function to replace emojis in the README.md file
input_file = package_path / "README.md"
output_file = docs_path / "README.md"
replace_named_emojis(input_file, output_file)
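What `emoji.emojize(..., language="alias")` does here can be illustrated with a toy alias map. The two-entry table and helper below are hypothetical; the real `emoji` package ships the full alias database:

```python
# toy alias→unicode substitution, mimicking emoji.emojize(content, language="alias")
ALIASES = {":star:": "⭐", ":rocket:": "🚀"}


def emojize_toy(text: str) -> str:
    for alias, char in ALIASES.items():
        text = text.replace(alias, char)
    return text


heading = emojize_toy("## :star: Key features")
```

This is why the README headings can use readable `:alias:` names in the repository while the rendered docs show the actual emoji characters.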


def setup(app):
app.add_css_file("custom.css") # For the `live_info` widget
