Release v0.10.2 #1046

Merged: 56 commits, Jul 24, 2023
Commits
7a61a13
Merge pull request #1010 from Libensemble/release/v_0.10.0
jlnav Jun 6, 2023
1321c72
add +dev to version, plus tiny version fixes
jlnav Jun 6, 2023
926a35b
Make sure workflow parent dirs are created
AngelFP Jun 7, 2023
cb44513
Merge pull request #1017 from AngelFP/workflow_parent_dirs
jlnav Jun 12, 2023
7981a2a
casting more input-paths to absolute paths, testing intermediate dire…
jlnav Jun 12, 2023
51ce4a6
differentiator to test workflow dir
jlnav Jun 13, 2023
85a9678
Merge pull request #1020 from Libensemble/testing/workflow_dir_fixes
jlnav Jun 13, 2023
0d68af7
Update README.rst
wildsm Jun 13, 2023
e1eccc3
Fix typo in output logging docs
AngelFP Jun 14, 2023
12678cc
Merge pull request #1024 from AngelFP/patch-1
jlnav Jun 14, 2023
f549395
Merge pull request #1021 from Libensemble/bugfix/communityexamplesrepo
jlnav Jun 14, 2023
47d64e9
Do not preinitialize "shared" queue
AngelFP Jun 15, 2023
45036ee
lets see if we can use upstream develop
jlnav Jun 20, 2023
38841d4
does this dodge the deprecation warning?
jlnav Jun 20, 2023
0c4183d
how about this dodge approach?
jlnav Jun 20, 2023
08f0229
bump isort in pre-commit config for support of new python versions. o…
jlnav Jun 21, 2023
0d4768f
Merge pull request #1030 from Libensemble/testing/fix_surmise_test
jlnav Jun 21, 2023
4d4abb3
Merge branch 'develop' into fix/msmpi_only_on_windows
jlnav Jun 21, 2023
83f62ed
Merge pull request #1026 from AngelFP/bug/fix_multiprocessing
jlnav Jun 22, 2023
a2f7852
Merge pull request #1031 from Libensemble/fix/msmpi_only_on_windows
jlnav Jun 26, 2023
a7e2a2b
remove use/mention of pip install libensemble[extras]
jlnav Jun 26, 2023
1aeb53c
Allowing deprecation warnings on ytopt test
jmlarson1 Jun 27, 2023
2403eb1
fix borehole return to dodge ValueError
jlnav Jun 28, 2023
304bb73
Merge branch 'refactor/remove_setup_extras' of https://github.com/Lib…
jlnav Jun 28, 2023
8f66a4f
other tests that trip the 'setting an array element with a sequence' …
jlnav Jun 28, 2023
4d44821
Not setting array with sequence
jmlarson1 Jun 28, 2023
f8f1636
Merge pull request #1035 from Libensemble/refactor/remove_setup_extras
jlnav Jun 29, 2023
f17d233
pin pydantic to previous version
jlnav Jul 3, 2023
531bde0
Merge pull request #1036 from Libensemble/fix/pin_pydantic
jlnav Jul 3, 2023
33cb77c
Running isort
jmlarson1 Jul 11, 2023
fa745f7
Removing decorator
jmlarson1 Jul 11, 2023
9174937
Merge pull request #1039 from Libensemble/formatting/isort
jlnav Jul 11, 2023
8410015
Merge branch 'main' into develop, add +dev
jlnav Jul 11, 2023
62a7e93
Warning on nested MPI (#1025)
jlnav Jul 12, 2023
356155a
Make PyPI use README
shuds13 Jul 14, 2023
2c74edc
Add conda and spack badges to readme
shuds13 Jul 14, 2023
6276954
Remove setuptools from pip dependencies
shuds13 Jul 14, 2023
de8ec21
Refactor 1d_tests (or others?) into new interface (#1029)
jlnav Jul 14, 2023
49db32b
Minor updates to forces simple tutorial
shuds13 Jul 17, 2023
7d95df5
Remove duplicate line in forces gpu
shuds13 Jul 17, 2023
d98900b
Add link from forces tutorial to platform guides
shuds13 Jul 17, 2023
a1144ed
Remove redundant line
shuds13 Jul 17, 2023
8d3aa6a
Update installation deps in docs
shuds13 Jul 20, 2023
c985ba7
Update known issues
shuds13 Jul 20, 2023
7f9d73a
Update Polaris note
shuds13 Jul 20, 2023
b4e7328
Add release notes
shuds13 Jul 20, 2023
2393f07
Update version to 0.10.2
shuds13 Jul 20, 2023
1ddd43b
Add tested systems
shuds13 Jul 20, 2023
8c606e5
Skip Surmise in macOS
shuds13 Jul 20, 2023
e6cf1fa
Merge pull request #1042 from Libensemble/install/setup_changes
shuds13 Jul 21, 2023
757a87a
Update .wci.yml
shuds13 Jul 21, 2023
bb9163f
Slight wording edits
jmlarson1 Jul 21, 2023
973d550
Whitespace
jmlarson1 Jul 21, 2023
979665a
Typo and isort
jmlarson1 Jul 21, 2023
5ec6cff
Update CHANGELOG wording
shuds13 Jul 21, 2023
c76289b
Update release date for 0.10.2
shuds13 Jul 24, 2023
4 changes: 4 additions & 0 deletions .github/workflows/ci.yml
@@ -131,6 +131,10 @@ jobs:
conda env update --file install/gen_deps_environment.yml

pip install ax-platform==0.2.8

- name: Install surmise
  if: matrix.os != 'macos-latest' && steps.cache.outputs.cache-hit != 'true'
  run: |
    pip install --upgrade git+https://github.com/surmising/surmise.git@develop

- name: Build ytopt and dependencies
4 changes: 2 additions & 2 deletions .wci.yml
@@ -17,8 +17,8 @@ description: |
language: Python

release:
version: 0.10.1
date: 2023-07-10
version: 0.10.2
date: 2023-07-24

documentation:
general: https://libensemble.readthedocs.io
Expand Down
45 changes: 32 additions & 13 deletions CHANGELOG.rst
@@ -8,6 +8,26 @@ GitHub issues are referenced, and can be viewed with hyperlinks on the `github releases page`_

.. _`github releases page`: https://github.com/Libensemble/libensemble/releases

Release 0.10.2
--------------

:Date: July 24, 2023

* Fixes issues with workflow directories:

  * Ensure relative paths are interpreted from where libEnsemble is run. #1020
  * Create intermediate directories for workflow paths. #1017

* Fixes issue where libEnsemble pre-initialized a shared multiprocessing queue. #1026
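
For context, a minimal sketch of the workflow-directory options these fixes relate to (option names as in the libEnsemble docs; the path shown is illustrative, not from this PR)::

    # Hedged sketch: dictionary-form libE_specs; "./my_workflow" is a placeholder.
    libE_specs = {
        "use_workflow_dir": True,              # place output/ensemble dirs in a workflow dir
        "workflow_dir_path": "./my_workflow",  # relative paths resolve from the run location (#1020)
    }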

:Note:

* Tested platforms include Linux, macOS, Windows, and major systems including Frontier (OLCF), Polaris (ALCF), Perlmutter (NERSC), Theta (ALCF), and Bebop. The major system tests ran heterogeneous workflows.

:Known issues:

* On systems using SLURM 23.02, some issues have been experienced when using ``mpi4py`` comms.
* See the known issues section in the documentation for more information (https://libensemble.readthedocs.io/en/main/known_issues.html).

Release 0.10.1
--------------

@@ -27,7 +47,6 @@ Hotfix for breaking changes in Pydantic.

* See known issues section in the documentation.


Release 0.10.0
--------------

@@ -36,16 +55,16 @@ Release 0.10.0
New capabilities:

* Enhance portability and simplify the assignment of procs/GPUs to worker resources #928 / #983

  * Auto-detect GPUs across systems (inc. Nvidia, AMD, and Intel GPUs).
  * Auto-determination of GPU assignment method by MPI runner or provided platform.
  * Portable `auto_assign_gpus` / `match_procs_to_gpus` and `num_gpus` arguments added to the MPI executor submit.
  * Add `set_to_gpus` function (similar to `set_to_slots`).
  * Allow users to specify known systems via option or environment variable.
  * Allow users to specify their own system configurations.
  * These changes remove a number of tweaks that were needed for particular platforms.

* Resource management supports GPU and non-GPU simulations in the same ensemble. #993

  * Users can specify `num_procs` and `num_gpus` in the generator for each evaluation.

* Pydantic models are used for validating major libE input (input can be provided as classes or dictionaries; see the sketch below). #878
* Added option to store output and ensemble directories in a workflow directory. #982
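
As an illustration of the class-or-dictionary input (a hedged sketch; ``my_simulation`` and the field values are placeholders, not from this PR)::

    from libensemble.specs import SimSpecs

    # Equivalent specifications: a validated class ...
    sim_specs = SimSpecs(sim_f=my_simulation, inputs=["x"], outputs=[("f", float)])
    # ... or a plain dictionary
    sim_specs = {"sim_f": my_simulation, "in": ["x"], "out": [("f", float)]}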
@@ -71,10 +90,10 @@ Documentation:
Tests and Examples:

* Updated forces_gpu tutorial example. #956

  * Source code edit is not required for the GPU version.
  * Reports whether running on device or host.
  * Increases problem size.
  * Added versions with persistent generator and multi-task (GPU v non-GPU).
* Moved multiple tests, generators, and simulators to the community repo.
* Added ytopt example. And updated heFFTe example. #943
* Support Python 3.11 #922
15 changes: 12 additions & 3 deletions README.rst
@@ -1,4 +1,4 @@
.. image:: docs/images/libEnsemble_Logo.svg
.. image:: https://raw.githubusercontent.com/Libensemble/libensemble/main/docs/images/libE_logo.png
:align: center
:alt: libEnsemble

@@ -7,6 +7,14 @@
.. image:: https://img.shields.io/pypi/v/libensemble.svg?color=blue
:target: https://pypi.org/project/libensemble

.. image:: https://img.shields.io/conda/v/conda-forge/libensemble?color=blue
:target: https://anaconda.org/conda-forge/libensemble

.. image:: https://img.shields.io/spack/v/py-libensemble?color=blue
:target: https://spack.readthedocs.io/en/latest/package_list.html#py-libensemble

|

.. image:: https://github.com/Libensemble/libensemble/workflows/libEnsemble-CI/badge.svg?branch=main
:target: https://github.com/Libensemble/libensemble/actions

@@ -73,7 +81,7 @@ Resources
@article{Hudson2022,
title = {{libEnsemble}: A Library to Coordinate the Concurrent
Evaluation of Dynamic Ensembles of Calculations},
author = {Stephen Hudson and Jeffrey Larson and John-Luke Navarro and Stefan Wild},
author = {Stephen Hudson and Jeffrey Larson and John-Luke Navarro and Stefan M. Wild},
journal = {{IEEE} Transactions on Parallel and Distributed Systems},
volume = {33},
number = {4},
@@ -82,6 +90,7 @@ Resources
doi = {10.1109/tpds.2021.3082815}
}

.. _Community Examples repository: https://github.com/Libensemble/libe-community-examples
.. _conda-forge: https://conda-forge.org/
.. _Contributions: https://github.com/Libensemble/libensemble/blob/main/CONTRIBUTING.rst
.. _docs: https://libensemble.readthedocs.io/en/main/advanced_installation.html
@@ -90,6 +99,6 @@ Resources
.. _libEnsemble Slack page: https://libensemble.slack.com
.. _MPICH: http://www.mpich.org/
.. _mpmath: http://mpmath.org/
.. _Quickstart: https://libensemble.readthedocs.io/en/main/introduction.html
.. _PyPI: https://pypi.org
.. _Quickstart: https://libensemble.readthedocs.io/en/main/introduction.html
.. _ReadtheDocs: http://libensemble.readthedocs.org/
15 changes: 5 additions & 10 deletions docs/advanced_installation.rst
@@ -9,8 +9,9 @@ automatically installed alongside libEnsemble:
* Python_ 3.8 or above
* NumPy_
* psutil_
* setuptools_
* pydantic_
* pyyaml_
* tomli_

In view of libEnsemble's compiled dependencies, the following installation
methods each offer a trade-off between convenience and the ability
@@ -28,13 +29,6 @@ To install the latest PyPI release::

pip install libensemble

The above comes with required dependencies only. To install with some
common user function dependencies (as used in the examples and tests)::

pip install libensemble[extras]

Note that since ``PETSc`` will build from source, this may take a while.

To pip install libEnsemble from the latest develop branch::

python -m pip install --upgrade git+https://github.com/Libensemble/libensemble.git@develop
@@ -177,15 +171,16 @@ the given system (rather than building from scratch). This may include
``Python`` and the packages distributed with it (e.g., ``numpy``), and will
often include the system MPI library.

.. _GitHub: https://github.com/Libensemble/libensemble
.. _Conda: https://docs.conda.io/en/latest/
.. _conda-forge: https://conda-forge.org/
.. _GitHub: https://github.com/Libensemble/libensemble
.. _MPICH: https://www.mpich.org/
.. _NumPy: http://www.numpy.org
.. _`Open MPI`: https://www.open-mpi.org/
.. _psutil: https://pypi.org/project/psutil/
.. _pydantic: https://pydantic-docs.helpmanual.io/
.. _pyyaml: https://github.com/yaml/pyyaml
.. _Python: http://www.python.org
.. _setuptools: https://setuptools.pypa.io/en/latest/
.. _Spack: https://spack.readthedocs.io/en/latest
.. _spack_libe: https://github.com/Libensemble/spack_libe
.. _tomli: https://github.com/hukkin/tomli
2 changes: 1 addition & 1 deletion docs/function_guides/sim_gen_alloc_api.rst
@@ -11,7 +11,7 @@ libEnsemble package.

:doc:`See here for more in-depth guides to writing user functions<function_guide_index>`

As of v0.9.3+dev, valid simulator and generator functions
As of v0.10.0, valid simulator and generator functions
can *accept and return a smaller subset of the listed parameters and return values*. For instance,
a ``def my_simulation(one_Input) -> one_Output`` function is now accepted,
as is ``def my_generator(Input, persis_info) -> Output, persis_info``.
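
For instance, a minimal simulator of this form might look like (a hedged sketch; the ``x`` and ``f`` field names are assumptions for illustration)::

    import numpy as np

    def my_simulation(In):
        Out = np.zeros(1, dtype=[("f", float)])  # one output row
        Out["f"] = np.linalg.norm(In["x"])       # assumed field names, illustration only
        return Out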
2 changes: 1 addition & 1 deletion docs/history_output_logging.rst
@@ -12,7 +12,7 @@ by libEnsemble. If libEnsemble aborts on an exception, these structures are
dumped automatically to these files:

* ``libE_history_at_abort_<sim_count>.npy``
* ``libE_history_at_abort_<sim_count>.pickle``
* ``libE_persis_info_at_abort_<sim_count>.pickle``

To suppress libEnsemble from producing these two files, set ``libE_specs["save_H_and_persis_on_abort"] = False``.

2 changes: 1 addition & 1 deletion docs/introduction_latex.rst
@@ -29,7 +29,6 @@
.. _NLopt documentation: https://nlopt.readthedocs.io/en/latest/NLopt_Installation/
.. _nlopt: https://nlopt.readthedocs.io/en/latest/
.. _NumPy: http://www.numpy.org
.. _Quickstart: https://libensemble.readthedocs.io/en/main/introduction.html
.. _OPAL: http://amas.web.psi.ch/docs/opal/opal_user_guide-1.6.0.pdf
.. _petsc4py: https://bitbucket.org/petsc/petsc4py
.. _PETSc/TAO: http://www.mcs.anl.gov/petsc
@@ -44,6 +43,7 @@
.. _pytest: https://pypi.org/project/pytest/
.. _Python: http://www.python.org
.. _pyyaml: https://pyyaml.org/
.. _Quickstart: https://libensemble.readthedocs.io/en/main/introduction.html
.. _ReadtheDocs: http://libensemble.readthedocs.org/
.. _SciPy: http://www.scipy.org
.. _scipy.optimize: https://docs.scipy.org/doc/scipy/reference/optimize.html
14 changes: 11 additions & 3 deletions docs/known_issues.rst
@@ -4,9 +4,15 @@ Known Issues
The following selection describes known bugs, errors, or other difficulties that
may occur when using libEnsemble.

* As of 10/13/2022, on Perlmutter there was an issue running concurrent applications
on a node, following a recent system update. This also affects previous versions
of libEnsemble, and is being investigated.
* Platforms using SLURM version 23.02 experience a `pickle error`_ when using
  ``mpi4py`` comms. Disabling matching probes via the environment variable
  ``export MPI4PY_RC_RECV_MPROBE=0`` or adding ``mpi4py.rc.recv_mprobe = False``
  at the top of the calling script should resolve this error (see the sketch
  following this list). If using the MPI executor and multiple workers per node,
  some users may experience failed applications with the message
  ``srun: error: CPU binding outside of job step allocation, allocated`` in
  the application's standard error. This is being investigated. If this happens,
  we recommend using ``local`` comms in place of ``mpi4py``.
* When using the Executor: OpenMPI does not work with direct MPI task
submissions in mpi4py comms mode, since OpenMPI does not support nested MPI
executions. Use either ``local`` mode or the Balsam Executor instead.
@@ -23,3 +29,5 @@ may occur when using libEnsemble.
:doc:`FAQ<FAQ>` for more information.
* We currently recommend running in Central mode on Bridges, as distributed
  runs are experiencing hangs.

.. _pickle error: https://docs.nersc.gov/development/languages/python/using-python-perlmutter/#missing-support-for-matched-proberecv
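
A minimal sketch of the matched-probe workaround from the SLURM item above (the setting must precede the ``MPI`` import)::

    import mpi4py
    mpi4py.rc.recv_mprobe = False  # disable matched probes (SLURM 23.02 workaround)
    from mpi4py import MPI  # noqa: E402 -- import must come after the rc setting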
7 changes: 5 additions & 2 deletions docs/platforms/polaris.rst
@@ -42,12 +42,15 @@
Ensuring use of mpiexec
-----------------------

If using the :doc:`MPIExecutor<../executor/mpi_executor>` it is recommended to
ensure you are using ``mpiexec`` instead of ``aprun``. When setting up the executor use::
Prior to libE v0.10.0, when using the :doc:`MPIExecutor<../executor/mpi_executor>` it
was necessary to manually tell libEnsemble to use ``mpiexec`` instead of ``aprun``.
When setting up the executor use::

    from libensemble.executors.mpi_executor import MPIExecutor
    exctr = MPIExecutor(custom_info={'mpi_runner': 'mpich', 'runner_name': 'mpiexec'})

From version 0.10.0, this is not necessary.

Job Submission
--------------

2 changes: 1 addition & 1 deletion docs/platforms/srun.rst
@@ -65,7 +65,7 @@ Note on Resource Binding
------------------------

.. note::
Update: From version version 0.10.0, it is recommended that GPUs are assigned
Update: From version 0.10.0, it is recommended that GPUs are assigned
automatically by libEnsemble. See the :doc:`forces_gpu<../tutorials/forces_gpu_tutorial>`
tutorial as an example.

56 changes: 26 additions & 30 deletions docs/tutorials/executor_forces_tutorial.rst
@@ -2,31 +2,23 @@
Executor with Electrostatic Forces
==================================

This tutorial highlights libEnsemble's capability to execute
This tutorial highlights libEnsemble's capability to portably execute
and monitor external scripts or user applications within simulation or generator
functions using the :doc:`executor<../executor/overview>`. In this tutorial,
our calling script registers a compiled executable that simulates
functions using the :doc:`executor<../executor/overview>`.

This tutorial's calling script registers a compiled executable that simulates
electrostatic forces between a collection of particles. The simulator function
launches instances of this executable and reads output files to determine
if the run was successful.

It is possible to use ``subprocess`` calls from Python to issue
commands such as ``jsrun`` or ``aprun`` to run applications. Unfortunately,
hard-coding such commands within user scripts isn't portable.
Furthermore, many systems like Argonne's :doc:`Theta<../platforms/theta>` do not
allow libEnsemble to submit additional tasks from the compute nodes. On these
systems, a proxy launch mechanism (such as Balsam) is required.
libEnsemble's Executors were developed to directly address such issues.

In particular, we'll be experimenting with
libEnsemble's :doc:`MPI Executor<../executor/mpi_executor>`, since it can automatically
detect available MPI runners and resources, and by default divide them equally among workers.
This tutorial uses libEnsemble's :doc:`MPI Executor<../executor/mpi_executor>`,
which automatically detects available MPI runners and resources.

Getting Started
---------------

The simulation source code ``forces.c`` can be obtained directly from the
libEnsemble repository here_.
libEnsemble repository in the forces_app_ directory.

Assuming MPI and its C compiler ``mpicc`` are available, compile
``forces.c`` into an executable (``forces.x``) with:
@@ -35,9 +27,14 @@ Assuming MPI and its C compiler ``mpicc`` are available, compile

$ mpicc -O3 -o forces.x forces.c -lm

Alternative build lines for different platforms can be found in the ``build_forces.sh``
file in the same directory.

Calling Script
--------------

Complete scripts for this example can be found in the forces_simple_ directory.

Let's begin by writing our calling script to parameterize our simulation and
generation functions and call libEnsemble. Create a Python file called `run_libe_forces.py`
containing:
@@ -66,13 +63,11 @@ containing:
sim_app = os.path.join(os.getcwd(), "../forces_app/forces.x")
exctr.register_app(full_path=sim_app, app_name="forces")

On line 15, we instantiate our :doc:`MPI Executor<../executor/mpi_executor>` class instance,
which can optionally be customized by specifying alternative MPI runners. The
auto-detected default should be sufficient.
On line 15, we instantiate our :doc:`MPI Executor<../executor/mpi_executor>`.

Registering an application is as easy as providing the full file-path and giving
it a memorable name. This Executor instance will later be retrieved within our
simulation function to launch the registered app.
it a memorable name. This Executor will later be used within our simulation
function to launch the registered app.
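
The instantiation referenced on line 15, in sketch form (defaults auto-detect the MPI runner, per the text above)::

    from libensemble.executors.mpi_executor import MPIExecutor

    exctr = MPIExecutor()  # auto-detects available MPI runners and resources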

Next define the :ref:`sim_specs<datastruct-sim-specs>` and
:ref:`gen_specs<datastruct-gen-specs>` data structures. Recall that these
@@ -225,12 +220,10 @@ for starters:

We retrieve the generated number of particles from ``H`` and construct
an argument string for our launched application. The particle count doubles up
as a random number seed here. Note a fourth argument can be added to forces
that gives forces a chance of a "bad run" (a float between 0 and 1), but
for now that will default to zero.
as a random number seed here.

We then retrieve our previously instantiated Executor instance from the
class definition, where it was automatically stored as an attribute.
We then retrieve our previously instantiated Executor from the class definition,
where it was automatically stored as an attribute.

After submitting the "forces" app for execution,
a :ref:`Task<task_tag>` object is returned that correlates with the launched app.
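
Pulling these steps together, a hedged sketch of the simulation function's core (variable names follow the tutorial's conventions but are assumptions here)::

    import time

    from libensemble.executors.executor import Executor

    particles = str(int(H["x"][0][0]))             # particle count doubles as the seed
    args = particles + " " + str(10) + " " + particles
    exctr = Executor.executor                      # instance stored on the class
    task = exctr.submit(app_name="forces", app_args=args)
    while not task.finished:                       # poll until the launched app completes
        time.sleep(0.3)
        task.poll()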
@@ -283,10 +276,10 @@ This completes our calling script and simulation function. Run libEnsemble with:

$ python run_libe_forces.py --comms local --nworkers [nworkers]

This may take up to a minute to complete. Output files---including ``forces.stat``
and files containing ``stdout`` and ``stderr`` content for each task---should
appear in the current working directory. Overall workflow information
should appear in ``libE_stats.txt`` and ``ensemble.log`` as usual.
Output files---including ``forces.stat`` and files containing ``stdout`` and
``stderr`` content for each task---should appear in the current working
directory. Overall workflow information should appear in ``libE_stats.txt``
and ``ensemble.log`` as usual.

For example, my ``libE_stats.txt`` resembled::

@@ -338,6 +331,8 @@ Each of these example files can be found in the repository in `examples/tutorials/forces_with_executor`_
For further experimentation, we recommend trying out this libEnsemble tutorial
workflow on a cluster or multi-node system, since libEnsemble can also manage
those resources and is developed to coordinate computations at huge scales.
See :ref:`HPC platform guides<platform-index>` for more information.

Please feel free to contact us or open an issue on GitHub_ if this tutorial
workflow doesn't work properly on your cluster or other compute resource.

@@ -386,6 +381,7 @@ These may require additional browsing of the documentation to complete.

...

.. _here: https://raw.githubusercontent.com/Libensemble/libensemble/main/libensemble/tests/scaling_tests/forces/forces.c
.. _forces_app: https://github.com/Libensemble/libensemble/tree/main/libensemble/tests/scaling_tests/forces/forces_app
.. _forces_simple: https://github.com/Libensemble/libensemble/tree/main/libensemble/tests/scaling_tests/forces/forces_simple
.. _examples/tutorials/forces_with_executor: https://github.com/Libensemble/libensemble/tree/develop/examples/tutorials/forces_with_executor
.. _GitHub: https://github.com/Libensemble/libensemble/issues