Merged
docs/FAQ.rst (1 addition & 1 deletion)
@@ -90,7 +90,7 @@
Common Errors

.. dropdown:: **PETSc and MPI errors with "[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=59"**

- with ``python [test with PETSc].py --comms local --nworkers 4``
+ with ``python [test with PETSc].py --nworkers 4``

This error occurs on some platforms when using PETSc with libEnsemble
in ``local`` (multiprocessing) mode. We believe this is due to PETSc initializing MPI
docs/platforms/bebop.rst (1 addition & 1 deletion)
@@ -69,7 +69,7 @@
Once in the interactive session, you may need to reload your modules::

Now run your script with four workers (one for generator and three for simulations)::

- python my_libe_script.py --comms local --nworkers 4
+ python my_libe_script.py --nworkers 4

``mpirun`` should also work. This line launches libEnsemble with a manager and
**three** workers to one allocated compute node, with three nodes available for
docs/platforms/frontier.rst (1 addition & 1 deletion)
@@ -64,7 +64,7 @@
Now grab an interactive session on one node::

Then in the session run::

- python run_libe_forces.py --comms local --nworkers 9
+ python run_libe_forces.py --nworkers 9

This places the generator on the first worker and runs simulations on the
others (each simulation using one GPU).
docs/platforms/improv.rst (1 addition & 1 deletion)
@@ -56,7 +56,7 @@
Once in the interactive session, you may need to reload the modules::

Now run forces with five workers (one for generator and four for simulations)::

- python run_libe_forces.py --comms local --nworkers 5
+ python run_libe_forces.py --nworkers 5

mpi4py comms
============
docs/platforms/platforms_index.rst (1 addition & 1 deletion)
@@ -59,7 +59,7 @@
of the allocation::

or::

- python myscript.py --comms local --nworkers 4
+ python myscript.py --nworkers 4

Either of these will run libEnsemble (inc. manager and 4 workers) on the first node. The remaining
4 nodes will be divided among the workers for submitted applications. If the same run was
docs/platforms/polaris.rst (1 addition & 1 deletion)
@@ -64,7 +64,7 @@
A simple example batch script for a libEnsemble use case that runs 5 workers

cd $PBS_O_WORKDIR

- python run_libe_forces.py --comms local --nworkers 5
+ python run_libe_forces.py --nworkers 5

The script can be run with::

docs/resource_manager/overview.rst (1 addition & 1 deletion)
@@ -276,7 +276,7 @@
Also, this can be set on the command line as a convenience.

.. code-block:: bash

- python run_ensemble.py --comms local --nworkers 5 --nresource_sets 8
+ python run_ensemble.py --nworkers 5 --nresource_sets 8
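As a sketch of what this command-line convenience amounts to (illustrative only, not libEnsemble's actual ``parse_args`` implementation), a ``--nresource_sets`` value given on the command line would stand in for the ``num_resource_sets`` entry of ``libE_specs``:

```python
import argparse

# Illustrative sketch: a command-line "--nresource_sets" value, when given,
# takes the place of any value set in the calling script. Flag names follow
# the command above; the parsing itself is a stand-in, not libEnsemble's.
parser = argparse.ArgumentParser()
parser.add_argument("--nworkers", type=int)
parser.add_argument("--nresource_sets", type=int, default=None)
args = parser.parse_args(["--nworkers", "5", "--nresource_sets", "8"])

libE_specs = {}
if args.nresource_sets is not None:
    libE_specs["num_resource_sets"] = args.nresource_sets

print(libE_specs)  # {'num_resource_sets': 8}
```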

.. _persistent_sampling_var_resources.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/gen_funcs/persistent_sampling_var_resources.py
.. _test_GPU_variable_resources.py: https://github.com/Libensemble/libensemble/blob/develop/libensemble/tests/regression_tests/test_GPU_variable_resources.py
docs/resource_manager/zero_resource_workers.rst (2 additions & 2 deletions)
@@ -9,7 +9,7 @@
only run ``gen_f`` functions in-place (i.e., they do not use the Executor
to submit applications to allocated nodes). Suppose the user is using the
:meth:`parse_args()<tools.parse_args>` function and runs::

- python run_ensemble_persistent_gen.py --comms local --nworkers 3
+ python run_ensemble_persistent_gen.py --nworkers 3

If three nodes are available in the node allocation, the result may look like the
following.
@@ -21,7 +21,7 @@

To avoid the wasted node above, add an extra worker::

- python run_ensemble_persistent_gen.py --comms local --nworkers 4
+ python run_ensemble_persistent_gen.py --nworkers 4

and in the calling script (*run_ensemble_persistent_gen.py*), explicitly set the
number of resource sets to the number of workers that will be running simulations.
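The relationship just described can be sketched as a calling-script fragment. ``num_resource_sets`` is the ``libE_specs`` option this section refers to; the surrounding values are illustrative:

```python
nworkers = 4  # one persistent generator plus three simulation workers

# Sketch of the calling-script setting described above: give resource sets
# only to the workers that will run simulations, so the persistent
# generator consumes no node resources.
libE_specs = {
    "num_resource_sets": nworkers - 1,
}

print(libE_specs["num_resource_sets"])  # 3
```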
docs/running_libE.rst (1 addition & 1 deletion)
@@ -53,7 +53,7 @@
supercomputers.
or an :class:`Ensemble<libensemble.ensemble.Ensemble>` object with ``Ensemble(parse_args=True)``,
you can specify these on the command line::

- python myscript.py --comms local --nworkers N
+ python myscript.py --nworkers N

This will launch one manager and ``N`` workers.
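A minimal sketch of this kind of command-line handling (not libEnsemble's actual ``parse_args`` implementation) shows why only ``--nworkers`` need be given when ``local`` comms is the default:

```python
import argparse

# Illustrative stand-in for parse_args(): "--comms" defaults to "local",
# so only "--nworkers" (also "-n") is required on the command line.
parser = argparse.ArgumentParser()
parser.add_argument("-n", "--nworkers", type=int, help="number of workers N")
parser.add_argument("--comms", default="local", choices=["local", "mpi", "tcp"])

args = parser.parse_args(["--nworkers", "4"])
nprocesses = args.nworkers + 1  # one manager plus N workers
print(args.comms, nprocesses)  # prints "local 5"
```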

docs/tutorials/aposmm_tutorial.rst (2 additions & 2 deletions)
@@ -117,7 +117,7 @@
busy.
In practice, since a single worker becomes "persistent" for APOSMM, users
should initiate one more worker than the number of parallel simulations::

- python my_aposmm_routine.py --comms local --nworkers 4
+ python my_aposmm_routine.py --nworkers 4

results in three workers running simulations and one running APOSMM.

@@ -265,7 +265,7 @@
optimization method::

Finally, run this libEnsemble / APOSMM optimization routine with the following::

- python my_first_aposmm.py --comms local --nworkers 4
+ python my_first_aposmm.py --nworkers 4

Please note that one worker will be "persistent" for APOSMM for the duration of
the routine.
docs/tutorials/executor_forces_tutorial.rst (4 additions & 4 deletions)
@@ -211,7 +211,7 @@
This completes our calling script and simulation function. Run libEnsemble with:

.. code-block:: bash

- python run_libe_forces.py --comms local --nworkers [nworkers]
+ python run_libe_forces.py --nworkers [nworkers]

where ``nworkers`` is one more than the number of concurrent simulations.

Expand All @@ -226,7 +226,7 @@ and ``ensemble.log`` as usual.

.. code-block:: bash

- python run_libe_forces.py --comms local --nworkers 3
+ python run_libe_forces.py --nworkers 3

my ``libE_stats.txt`` resembled::

@@ -362,13 +362,13 @@
E.g., Instead of:

.. code-block:: bash

- python run_libe_forces.py --comms local --nworkers 5
+ python run_libe_forces.py --nworkers 5

use:

.. code-block:: bash

- python run_libe_forces.py --comms local --nworkers 4
+ python run_libe_forces.py --nworkers 4

Note that as the generator random number seed will be zero instead of one, the checksum will change.

docs/tutorials/forces_gpu_tutorial.rst (2 additions & 2 deletions)
@@ -214,7 +214,7 @@
nine workers (the extra worker runs the persistent generator).

For example::

- python run_libe_forces.py --comms local --nworkers 9
+ python run_libe_forces.py --nworkers 9

See :ref:`zero-resource workers<zero_resource_workers>` for more ways to express this.

@@ -298,7 +298,7 @@
that runs 8 workers on 2 nodes:
export MPICH_GPU_SUPPORT_ENABLED=1
export SLURM_EXACT=1

- python run_libe_forces.py --comms local --nworkers 9
+ python run_libe_forces.py --nworkers 9

where ``SLURM_EXACT`` is set to help prevent resource conflicts on each node.

examples/README.rst (2 additions & 2 deletions)
@@ -12,9 +12,9 @@
If you wish to clone libEnsemble to try the examples instead of installing from
pip install -e .
cd libensemble/tests/regression_tests

- Any of the tests can be run similarly to the following::
+ Any of the tests can be run similarly to the following (-n is also short for --nworkers)::

- python test_uniform_sampling.py --comms local --nworkers 3
+ python test_uniform_sampling.py --nworkers 3

The command line arguments are parsed by a ``parse_args`` module within each of the scripts. If you
have ``mpi4py`` installed you can alternatively run with::
@@ -4,7 +4,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_1d_sampling_from_yaml.py
- python test_1d_sampling_from_yaml.py --nworkers 3 --comms local
+ python test_1d_sampling_from_yaml.py --nworkers 3
python test_1d_sampling_from_yaml.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_1d_sampling_with_profile.py
- python test_1d_sampling_with_profile.py --nworkers 3 --comms local
+ python test_1d_sampling_with_profile.py --nworkers 3
python test_1d_sampling_with_profile.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
libensemble/tests/functionality_tests/test_1d_splitcomm.py (1 addition & 1 deletion)
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_1d_sampling.py
- python test_1d_sampling.py --nworkers 3 --comms local
+ python test_1d_sampling.py --nworkers 3
python test_1d_sampling.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
libensemble/tests/functionality_tests/test_1d_subcomm.py (1 addition & 1 deletion)
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_1d_sampling.py
- python test_1d_sampling.py --nworkers 3 --comms local
+ python test_1d_sampling.py --nworkers 3
python test_1d_sampling.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_1d_sampling.py
- python test_1d_sampling.py --nworkers 3 --comms local
+ python test_1d_sampling.py --nworkers 3
python test_1d_sampling.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -17,7 +17,7 @@

Execute via one of the following commands (e.g., 4 workers):
mpiexec -np 5 python test_GPU_gen_resources.py
- python test_GPU_gen_resources.py --comms local --nworkers 4
+ python test_GPU_gen_resources.py --nworkers 4

When running with the above command, the number of concurrent evaluations of
the objective function will be 4, as one of the five workers will be the
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_6-hump_camel_active_persistent_worker_abort.py
- python test_6-hump_camel_active_persistent_worker_abort.py --nworkers 3 --comms local
+ python test_6-hump_camel_active_persistent_worker_abort.py --nworkers 3
python test_6-hump_camel_active_persistent_worker_abort.py --nworkers 3 --comms tcp

When running with the above commands, the number of concurrent evaluations of
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_calc_exception.py
- python test_calc_exception.py --nworkers 3 --comms local
+ python test_calc_exception.py --nworkers 3
python test_calc_exception.py --nworkers 3 --comms tcp
"""

@@ -7,7 +7,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_cancel_in_alloc.py
- python test_cancel_in_alloc.py --nworkers 3 --comms local
+ python test_cancel_in_alloc.py --nworkers 3
python test_cancel_in_alloc.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
libensemble/tests/functionality_tests/test_comms.py (1 addition & 1 deletion)
@@ -4,7 +4,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_comms.py
- python test_comms.py --nworkers 3 --comms local
+ python test_comms.py --nworkers 3
python test_comms.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be N-1.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_6-hump_camel_elapsed_time_abort.py
- python test_6-hump_camel_elapsed_time_abort.py --nworkers 3 --comms local
+ python test_6-hump_camel_elapsed_time_abort.py --nworkers 3
python test_6-hump_camel_elapsed_time_abort.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -4,7 +4,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_evaluate_existing_sample.py
- python test_evaluate_existing_sample.py --nworkers 3 --comms local
+ python test_evaluate_existing_sample.py --nworkers 3
python test_evaluate_existing_sample.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_executor_hworld.py
- python test_executor_hworld.py --nworkers 3 --comms local
+ python test_executor_hworld.py --nworkers 3
python test_executor_hworld.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_executor_hworld.py
- python test_executor_hworld.py --nworkers 3 --comms local
+ python test_executor_hworld.py --nworkers 3
python test_executor_hworld.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_executor_hworld.py
- python test_executor_hworld.py --nworkers 3 --comms local
+ python test_executor_hworld.py --nworkers 3
python test_executor_hworld.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
libensemble/tests/functionality_tests/test_fast_alloc.py (1 addition & 1 deletion)
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_fast_alloc.py
- python test_fast_alloc.py --nworkers 3 --comms local
+ python test_fast_alloc.py --nworkers 3

The number of concurrent evaluations of the objective function will be 4-1=3.
"""
@@ -30,7 +30,7 @@

Execute via one of the following commands (e.g. 5 workers):
mpiexec -np 6 python test_mpi_gpu_settings.py
- python test_mpi_gpu_settings.py --comms local --nworkers 5
+ python test_mpi_gpu_settings.py --nworkers 5

When running with the above command, the number of concurrent evaluations of
the objective function will be 4, as one of the five workers will be the
@@ -10,7 +10,7 @@

Execute via one of the following commands (e.g., 5 workers):
mpiexec -np 6 python test_mpi_gpu_settings_env.py
- python test_mpi_gpu_settings_env.py --comms local --nworkers 5
+ python test_mpi_gpu_settings_env.py --nworkers 5

When running with the above command, the number of concurrent evaluations of
the objective function will be 4, as one of the five workers will be the
@@ -11,7 +11,7 @@

Execute via one of the following commands (e.g. 5 workers):
mpiexec -np 6 python test_mpi_gpu_settings_mock_nodes_multi_task.py
- python test_mpi_gpu_settings_mock_nodes_multi_task.py --comms local --nworkers 5
+ python test_mpi_gpu_settings_mock_nodes_multi_task.py --nworkers 5

When running with the above command, the number of concurrent evaluations of
the objective function will be 4, as one of the five workers will be the
libensemble/tests/functionality_tests/test_mpi_runners.py (1 addition & 1 deletion)
@@ -3,7 +3,7 @@

Execute via one of the following commands (e.g. 3 workers):
mpiexec -np 4 python test_mpi_runners.py
- python test_mpi_runners.py --nworkers 3 --comms local
+ python test_mpi_runners.py --nworkers 3
python test_mpi_runners.py --nworkers 3 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -5,7 +5,7 @@

Execute via one of the following commands (e.g. 4 workers):
mpiexec -np 5 python test_mpi_runners_subnode.py
- python test_mpi_runners_subnode.py --nworkers 4 --comms local
+ python test_mpi_runners_subnode.py --nworkers 4
python test_mpi_runners_subnode.py --nworkers 4 --comms tcp

The number of concurrent evaluations of the objective function will be 4-1=3.
@@ -5,7 +5,7 @@

Execute via one of the following commands (e.g. 5 workers):
mpiexec -np 6 python test_mpi_runners_subnode_uneven.py
- python test_mpi_runners_subnode_uneven.py --nworkers 5 --comms local
+ python test_mpi_runners_subnode_uneven.py --nworkers 5
python test_mpi_runners_subnode_uneven.py --nworkers 5 --comms tcp
"""

@@ -5,7 +5,7 @@

Execute via one of the following commands (e.g. 5 workers):
mpiexec -np 6 python test_mpi_runners_supernode_uneven.py
- python test_mpi_runners_supernode_uneven.py --nworkers 5 --comms local
+ python test_mpi_runners_supernode_uneven.py --nworkers 5
"""

import numpy as np
@@ -5,7 +5,7 @@

Execute via one of the following commands (e.g. 6 workers - one is zero resource):
mpiexec -np 7 python test_mpi_runners_zrw_subnode_uneven.py
- python test_mpi_runners_zrw_subnode_uneven.py --nworkers 6 --comms local
+ python test_mpi_runners_zrw_subnode_uneven.py --nworkers 6
python test_mpi_runners_zrw_subnode_uneven.py --nworkers 6 --comms tcp

The resource sets are split unevenly between the two nodes (e.g. 3 and 2).
@@ -5,7 +5,7 @@

Execute via one of the following commands (e.g. 6 workers - one is zero resource):
mpiexec -np 7 python test_mpi_runners_zrw_supernode_uneven.py
- python test_mpi_runners_zrw_supernode_uneven.py --nworkers 6 --comms local
+ python test_mpi_runners_zrw_supernode_uneven.py --nworkers 6
"""

import numpy as np