Merged
36 commits
5966164
Add release notes for v1.5.0
shuds13 Apr 7, 2025
31a6069
Update version and year
shuds13 Apr 7, 2025
2dc5b4e
Use present tense in release notes
shuds13 Apr 7, 2025
178253b
Edit docstring
jmlarson1 Apr 8, 2025
36f98a6
Whitespace
jmlarson1 Apr 8, 2025
c37db76
Underlines too short
jmlarson1 Apr 8, 2025
d9a91c7
Replacing ~ with -
jmlarson1 Apr 8, 2025
165c668
Link update
jmlarson1 Apr 8, 2025
c9ca535
Bypass label in include
shuds13 Apr 8, 2025
71fa984
More docs edits
jmlarson1 Apr 8, 2025
f713522
nb-clean to ipynb files
jmlarson1 Apr 8, 2025
7366dd8
Merge branch 'release/v_1.5.0' of https://github.com/Libensemble/libe…
jmlarson1 Apr 8, 2025
d5dcd52
Update CHANGELOG.rst
jmlarson1 Apr 8, 2025
5d1278f
mono
jmlarson1 Apr 8, 2025
df05761
Merge branch 'release/v_1.5.0' of https://github.com/Libensemble/libe…
jmlarson1 Apr 8, 2025
db26ac9
small fixes - adjust version in pyproject.toml, fix pydantic version …
jlnav Apr 9, 2025
1a9f341
Add LUMI support (#1546)
shuds13 Apr 9, 2025
cbabf5a
Update release notes
shuds13 Apr 9, 2025
877f52b
make version dynamic in pyproject.toml
jlnav Apr 9, 2025
21f6175
Revert "make version dynamic in pyproject.toml"
shuds13 Apr 9, 2025
fa951d7
libE_specs *may* always have a workflow_dir_path, as we're getting th…
jlnav Apr 9, 2025
714e7e5
__all__ in tasmanian doesnt need definition if we'll never do: from l…
jlnav Apr 9, 2025
b57d80c
remove lumi prints in platforms.py
jlnav Apr 9, 2025
cd178e3
Update gpCAM notebook
shuds13 Apr 9, 2025
79bcd37
Replace flashing live animation with clean post-run version
shuds13 Apr 10, 2025
34339f4
add unit test to ensure persis_info being None gets set to empty dict…
jlnav Apr 10, 2025
d30259e
enable coverage for ax and gpcam, disable coverage for tasmanian
jlnav Apr 10, 2025
deb58af
Updating input fields
jmlarson1 Apr 10, 2025
026371c
Add grid lines and up opacity
shuds13 Apr 10, 2025
f2ce50d
Merge pull request #1548 from Libensemble/examples/aposmm_nb_animation
shuds13 Apr 10, 2025
8da45e8
Add back gpcam colab import line
shuds13 Apr 10, 2025
ced94fc
Revert ensemble.py to release/v_1.5.0 version
shuds13 Apr 10, 2025
edfa3c1
Keep __all__ lines for docs
shuds13 Apr 10, 2025
8156083
Merge pull request #1547 from Libensemble/testing/some_coverage
shuds13 Apr 10, 2025
58735ab
Set date for release 1.5.0
shuds13 Apr 10, 2025
5e040d1
Merge branch 'develop' into release/v_1.5.0
shuds13 Apr 10, 2025
3 changes: 1 addition & 2 deletions .codecov.yml
@@ -3,5 +3,4 @@ ignore:
   - "libensemble/tools/forkable_pdb.py"
   - "libensemble/tools/live_data/*"
   - "libensemble/sim_funcs/executor_hworld.py"
-  - "libensemble/gen_funcs/persistent_ax_multitask.py"
-  - "libensemble/gen_funcs/persistent_gpCAM.py"
+  - "libensemble/gen_funcs/persistent_tasmanian.py"
4 changes: 2 additions & 2 deletions .wci.yml
@@ -16,8 +16,8 @@ description: |
 language: Python

 release:
-  version: 1.4.3
-  date: 2024-12-16
+  version: 1.5.0
+  date: 2025-04-10

 documentation:
   general: https://libensemble.readthedocs.io
40 changes: 40 additions & 0 deletions CHANGELOG.rst
@@ -8,6 +8,46 @@ GitHub issues are referenced, and can be viewed with hyperlinks on the `github r

 .. _`github releases page`: https://github.com/Libensemble/libensemble/releases

+Release 1.5.0
+--------------
+
+:Date: Apr 10, 2025
+
+General Updates:
+
+* Migrate package build system to `pyproject.toml` (with `pixi` support). #1459
+* Improve handling when no MPI is found. #1514
+* `ensemble.save_output()` can save without appending attributes (`append_attrs=False`). #1531
+* Improve handling of worker-specific `persis_info` fields when they are not initially provided. #1531
+* Bugfix: Fix `final_gen_send` when there are no worker-specific `persis_info` fields.
+* Handle worker-generated `persis_info` fields.
+* Ensure `persis_info` is initialized to an empty dictionary in user functions instead of `None`.
+
+Examples:
+
+* Update Ax generator for `Ax v0.5.0`. #1508
+* Rename gpCAM generators. #1516
+* `persistent_gpCAM_ask_tell` to `persistent_gpCAM`
+* `persistent_gpCAM_simple` to `persistent_gpCAM_covar` (in fact less simple)
+* Persistent generators return `None` as the first return value unless `H_o` is updated. #1515
+* Add LUMI to known platforms. #1546
+
+Documentation:
+
+* Revamp the Examples and HPC sections of the documentation. #1501, #1536, #1539
+* Add tutorial and notebook demonstrating surrogate model creation with gpCAM. #1531
+* Update Aurora guide. #1510
+* Update and document the APOSMM/WarpX example. #1543
+
+:Note:
+
+* Tests were run on Linux and MacOS with Python versions 3.10, 3.11, 3.12, and 3.13.
+* Heterogeneous workflows tested on Aurora (ALCF), Polaris (ALCF), LUMI (EuroHPC JU), and Perlmutter (NERSC).
+
+:Known Issues:
+
+* See the known issues section in the documentation.
+
 Release 1.4.3
 --------------

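One of the 1.5.0 items above is that `persis_info` is now initialized to an empty dictionary in user functions instead of `None`. A minimal sketch of that behavior (the helper name and exact semantics are assumptions for illustration, not libEnsemble's actual internals):

```python
# Hypothetical sketch of the release-note behavior: every worker's
# persis_info entry defaults to an empty dict rather than None before
# a user function sees it. Not libEnsemble's real implementation.
def normalize_persis_info(persis_info, nworkers):
    """Return persis_info with a dict (never None) per worker ID."""
    persis_info = dict(persis_info or {})
    for worker_id in range(1, nworkers + 1):
        if persis_info.get(worker_id) is None:
            persis_info[worker_id] = {}
    return persis_info

print(normalize_persis_info(None, 2))  # {1: {}, 2: {}}
print(normalize_persis_info({1: {"rand_stream": 42}}, 2))
```

User functions can then index `persis_info[worker_id]` without guarding against `None`.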
2 changes: 1 addition & 1 deletion LICENSE
@@ -1,6 +1,6 @@
 BSD 3-Clause License

-Copyright (c) 2018-2024, UChicago Argonne, LLC and the libEnsemble Development Team
+Copyright (c) 2018-2025, UChicago Argonne, LLC and the libEnsemble Development Team
 All Rights Reserved.

 Redistribution and use in source and binary forms, with or without
2 changes: 1 addition & 1 deletion docs/advanced_installation.rst
@@ -9,7 +9,7 @@ automatically installed alongside libEnsemble:
 * Python_ ``>= 3.10``
 * NumPy_ ``>= 1.21``
 * psutil_ ``>= 5.9.4``
-* `pydantic`_ ``<= 1.10.12``
+* `pydantic`_ ``>= 1.10.12``
 * pyyaml_ ``>= v6.0``
 * tomli_ ``>= 1.2.1``

6 changes: 3 additions & 3 deletions docs/examples/sim_funcs.rst
@@ -24,7 +24,7 @@ Ideal for simple debugging of generator processes or system testing.
    Borehole function with kills <sim_funcs/borehole_kills>
    Chwirut1 vector-valued function <sim_funcs/chwirut1>
    Inverse Bayesian likelihood <sim_funcs/inverse_bayes>
-   Norm <sim_funcs/simple_sim>
+   Norm <sim_funcs/simple_sim>
    Rosenbrock test optimization function <sim_funcs/rosenbrock>
    Six Hump Camel <sim_funcs/six_hump_camel>
    Test noisy function <sim_funcs/noisy_vector_mapping>
@@ -36,8 +36,8 @@ Functions that run user applications
 These use the executor to launch applications and in some cases
 handle dynamic CPU/GPU allocation.

-The ``Variable resources`` module contains basic examples, while the ``Template``
-examples use a simple MPI/OpenMP (with GPU offload option) application (``forces``)
+The ``Variable resources`` module contains basic examples, while the ``Template``
+examples use a simple MPI/OpenMP (with GPU offload option) application (``forces``)
 to demonstrate libEnsemble’s capabilities on various HPC systems. The
 build_forces.sh_ file gives compile lines for building the simple ``forces``
 application on various platforms (use -DGPU to build for GPU).
2 changes: 1 addition & 1 deletion docs/examples/sim_funcs/forces_simf_gpu.rst
@@ -1,4 +1,4 @@
-Template for GPU executables
+Template for GPU executables
 ----------------------------

 .. role:: underline
5 changes: 3 additions & 2 deletions docs/examples/sim_funcs/forces_simf_gpu_multi_app.rst
@@ -10,7 +10,7 @@ ranks and GPU resources as requested by the generator.
 This makes efficient use of each node as the expensive GPU simulations will use the GPUs on
 the node/s, while the rest of the CPU cores are assigned to the simple CPU-only simulations.

-For a realistic use-case see https://journals.aps.org/prab/abstract/10.1103/PhysRevAccelBeams.26.084601
+See this publication_ for a real-world demonstration of these capabilities.

 .. automodule:: forces_multi_app.forces_simf
    :members:
@@ -39,5 +39,6 @@ up by each worker and these will be used when the simulation is run, unless over
 More information is available in the :doc:`Forces GPU tutorial <../../tutorials/forces_gpu_tutorial>`
 and the video_ demonstration on Frontier_.

-.. _video: https://www.youtube.com/watch?v=H2fmbZ6DnVc
 .. _Frontier: https://docs.olcf.ornl.gov/systems/frontier_user_guide.html
+.. _publication: https://doi.org/10.1103/PhysRevAccelBeams.26.084601
+.. _video: https://www.youtube.com/watch?v=H2fmbZ6DnVc
4 changes: 4 additions & 0 deletions docs/examples/submission_scripts.rst
@@ -1 +1,5 @@
 .. include:: ../platforms/example_scripts.rst
+   :end-before: .. _slurm_mpi_distributed:
+
+.. include:: ../platforms/example_scripts.rst
+   :start-after: .. _slurm_mpi_distributed:
8 changes: 3 additions & 5 deletions docs/platforms/example_scripts.rst
@@ -7,18 +7,17 @@ for more information about the respective systems and configuration.

 .. note::
     It is **highly recommended** that the directive lines (e.g., #SBATCH) in batch
-    submission scripts do **NOT** specify processor, task, or GPU configuration info
-    --- these lines should only specify the number of nodes required.
+    submission scripts do **NOT** specify processor, task, or GPU configuration
+    information---these lines should only specify the number of nodes required.

     For example, do not specify ``#SBATCH --gpus-per-node=4`` in order to use four
     GPUs on the node, when each worker may use less than this, as this may assign
-    all of the GPUs to a single MPI invocation. Instead, the configuration should
+    all of the GPUs to a single MPI invocation. Instead, the configuration should
     be supplied either
     :doc:`in the simulation function<../examples/sim_funcs/forces_simf_gpu>`
     or, if using dynamic resources,
     :doc:`in the generator<../examples/sim_funcs/forces_simf_gpu_vary_resources>`.

-
 General examples
 ----------------

@@ -43,7 +42,6 @@ LSF - Basic
     :caption: /examples/libE_submission_scripts/submit_lsf_simple.sh
     :language: bash

-
 System Examples
 ---------------

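The note in the diff above recommends nodes-only directives. A minimal batch-script sketch following that advice (job name, walltime, and script name are placeholders, not from the repository):

```shell
#!/bin/bash
# Sketch only: request nodes in the directives and nothing else;
# processor/task/GPU configuration is left to libEnsemble so that the
# resource manager can partition the allocation among workers.
#SBATCH -J libE_example      # job name (placeholder)
#SBATCH --nodes 3            # nodes ONLY - no --ntasks or --gpus-per-node
#SBATCH -t 00:30:00          # walltime (placeholder)

# GPU/process counts are supplied in the simulation function or the
# generator (for dynamic resources), not in the #SBATCH directives.
python run_libe_forces.py --nworkers 3
```

This is a config-style fragment; adapt the directive syntax to your scheduler.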
22 changes: 6 additions & 16 deletions docs/platforms/platforms_index.rst
@@ -46,14 +46,12 @@ which runs the generator on the manager (using a thread) as below.

 A SLURM batch script may include:

-
 .. code-block:: bash

     #SBATCH --nodes 3

     python run_libe_forces.py --nworkers 3

-
 When using **gen_on_manager**, set ``nworkers`` to the number of workers desired for running simulations.

 Dedicated Mode
@@ -64,7 +62,6 @@ True, the MPI executor will not launch applications on nodes where libEnsemble P
 processes (manager and workers) are running. Workers launch applications onto the
 remaining nodes in the allocation.

-
 .. list-table::
    :widths: 60 40

@@ -84,29 +81,27 @@ remaining nodes in the allocation.

 A SLURM batch script may include:

-
 .. code-block:: bash

     #SBATCH --nodes 3

     python run_libe_forces.py --nworkers 3

-
 Note that **gen_on_manager** is not set in the above example.

-Distributed Running
---------------------
+Distributed Running
+-------------------

 In the **distributed** approach, libEnsemble can be run using the **mpi4py**
 communicator, with workers distributed across nodes. This is most often used
 when workers run simulation code directly, via a Python interface. The user
-script is invoked with an MPI runner, for example (using an `mpich` based MPI)::
+script is invoked with an MPI runner, for example (using an `mpich`-based MPI)::

     mpirun -np 4 -ppn 1 python myscript.py

 The distributed approach, can also be used with the executor, to co-locate workers
-with the applications they submit. To ensure workers are placed as required in this
-case, requires :ref:`a careful MPI rank placement <slurm_mpi_distributed>`.
+with the applications they submit. Ensuring that workers are placed as required in this
+case requires :ref:`a careful MPI rank placement <slurm_mpi_distributed>`.

 .. image:: ../images/distributed_new_detailed.png
    :alt: distributed
@@ -116,7 +111,6 @@ case, requires :ref:`a careful MPI rank placement <slurm_mpi_distributed>`.
 This allows the libEnsemble worker to read files produced by the application on
 local node storage.

-
 Configuring the Run
 -------------------

@@ -140,7 +134,7 @@ and partitions these to workers. The :doc:`MPI Executor<../executor/mpi_executor
 accesses the resources available to the current worker when launching tasks.

 Zero-resource workers
-~~~~~~~~~~~~~~~~~~~~~
+---------------------

 Users with persistent ``gen_f`` functions may notice that the persistent workers
 are still automatically assigned system resources. This can be resolved by using
@@ -159,7 +153,6 @@ Varying resources
 libEnsemble also features :ref:`dynamic resource assignment<var-resources-gpu>`, whereby the
 number of processes and/or the number of GPUs can be a set for each simulation by the generator.

-
 Overriding Auto-Detection
 -------------------------

@@ -172,8 +165,6 @@ libE_specs option.
 When using the MPI Executor, it is possible to override the detected information using the
 `custom_info` argument. See the :doc:`MPI Executor<../executor/mpi_executor>` for more.

-
-
 Systems with Launch/MOM Nodes
 -----------------------------

@@ -212,7 +203,6 @@ or *to entirely different systems*.
 Submission scripts for running on launch/MOM nodes and for using Balsam can be found in
 the :doc:`examples<example_scripts>`.

-
 .. _globus_compute_ref:

 Globus Compute - Remote User Functions
8 changes: 4 additions & 4 deletions docs/resource_manager/resource_detection.rst
@@ -18,17 +18,17 @@ LSF         LSB_HOSTS/LSB_MCPU_HOSTS
 PBS         PBS_NODEFILE
 =========== ===========================

-These environment variable names can be modified via the :ref:`resource_info<resource_info>`
+These environment variable names can be modified via the :ref:`resource_info<resource_info>`
 :class:`libE_specs<libensemble.specs.LibeSpecs>` option.

-On other systems you may have to supply a node list in a file called **node_list**
-in your run directory. For example, on ALCF system Cooley_, the session node list
+On other systems, you may have to supply a node list in a file called **node_list**
+in your run directory. For example, on the ALCF system Cooley_, the session node list
 can be obtained as follows::

     cat $COBALT_NODEFILE > node_list

 Resource detection can be disabled by setting
-``libE_specs["disable_resource_manager"] = True``, and users can simply supply run
+``libE_specs["disable_resource_manager"] = True``, and users can supply run
 configuration options on the Executor submit line.

 This will usually work sufficiently on
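The resource-detection docs diffed above mention disabling the resource manager and supplying a **node_list** file. A hedged sketch of both (the dict form of `libE_specs` follows the docs above; the node names are placeholders):

```python
# Sketch: disabling libEnsemble's resource manager, per the docs above.
# With the manager disabled, run configuration (e.g., process counts) is
# given explicitly on each Executor submit call rather than auto-assigned.
libE_specs = {"disable_resource_manager": True}

# On systems without a recognized scheduler, a node_list file can be
# supplied in the run directory (node names below are placeholders):
with open("node_list", "w") as f:
    f.write("node001\nnode002\n")

print(libE_specs)
```

On SLURM/LSF/PBS systems the node list is instead read from the scheduler environment variables shown in the table above.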
2 changes: 1 addition & 1 deletion docs/resource_manager/zero_resource_workers.rst
@@ -53,7 +53,7 @@ concurrency desired by the ensemble, taking into account generators and simulato

 Users can set generator resources using the *libE_specs* options
 ``gen_num_procs`` and/or ``gen_num_gpus``, which take integer values.
-If only ``gen_num_gpus`` is set, then the number of processors is set to match.
+If only ``gen_num_gpus`` is set, then the number of processors is set to match.

 To vary generator resources, ``persis_info`` settings can be used in allocation
 functions before calling the ``gen_work`` support function. This takes the
6 changes: 2 additions & 4 deletions docs/tutorials/forces_gpu_tutorial.rst
@@ -7,9 +7,9 @@ to the GPU. The libEnsemble scripts in this example are available under
 forces_gpu_ in the libEnsemble repository.

 This example is based on the
-:doc:`simple forces tutorial <../tutorials/executor_forces_tutorial>` with
+:doc:`simple forces tutorial <../tutorials/executor_forces_tutorial>` with
 a slightly modified simulation function (to assign GPUs) and a greatly increased
-number of particles (allows live GPU usage to be viewed).
+number of particles (to allow real-time GPU usage to be viewed).

 In the first example, each worker will be using one GPU. The code will assign the
 GPUs available to each worker, using the appropriate method. This works on systems
@@ -35,7 +35,6 @@ from the simple forces example are highlighted:
     # Optional - to print GPU settings
     from libensemble.tools.test_support import check_gpu_setting

-
     def run_forces(H, persis_info, sim_specs, libE_info):
         """Launches the forces MPI app and auto-assigns ranks and GPU resources.

@@ -154,7 +153,6 @@ and use this information however you want.
     output = np.zeros(1, dtype=sim_specs["out"])
     output["energy"][0] = final_energy

-
     return output

 The above code will assign a GPU to each worker on CUDA-capable systems,
8 changes: 4 additions & 4 deletions docs/tutorials/gpcam_tutorial.rst
@@ -10,7 +10,7 @@ In each iteration, a batch of points is produced for concurrent evaluation, maxi
 Ensure that libEnsemble, and gpCAM are installed via: ``pip install libensemble gpcam``

 Generator function
------------------
+------------------

 The gpCAM generator function is called ``persistent_gpCAM``.

@@ -179,7 +179,7 @@ For running applications using parallel resources in the simulator see the `forc
     return term1 + term2 + term3

 Calling Script
--------------
+--------------

 Our calling script configures libEnsemble, the generator function, and the simulator function. It then create the ensemble object and runs the ensemble.

@@ -275,7 +275,7 @@ At the end of our calling script we run the ensemble.
     pprint(H[["sim_id", "x", "f"]][:16])  # See first 16 results

 Rerun and test model at known points
-----------------------------------
+------------------------------------

 To see how the accuracy of the surrogate model improves, we can use previously evaluated points as test points and run again with a different seed.

@@ -292,7 +292,7 @@ To see how the accuracy of the surrogate model improves, we can use previously e
     print(persis_info)

 Viewing model progression
------------------------
+-------------------------

 Now we can check how our model's values compared against the values at known test points as the ensemble progresses.
 The comparison is based on the **mean squared error** between the gpCAM model and our known
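The model-progression check described in the gpCAM tutorial diff rests on a mean squared error between the model and known test values. A generic sketch of that metric (illustrative only, not the tutorial's actual code):

```python
import numpy as np

def mean_squared_error(model_values, true_values):
    """MSE between surrogate predictions and known test values."""
    model_values = np.asarray(model_values, dtype=float)
    true_values = np.asarray(true_values, dtype=float)
    return float(np.mean((model_values - true_values) ** 2))

# Errors of 0.0 and 2.0 give a mean squared error of 2.0
print(mean_squared_error([1.0, 2.0], [1.0, 4.0]))  # 2.0
```

A decreasing MSE over ensemble iterations indicates the surrogate is improving at the test points.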
@@ -5,7 +5,6 @@
 #PBS -A [project]
 #PBS -N libE_example

-
 cd $PBS_O_WORKDIR
 # Choose MPI backend. Note that the built mpi4py in your environment should match.
 module load oneapi/mpi