MPI Related Changes:
 - Changing MPI threads to MPI ranks
 - Adding information on MPI Fortran 2008 datatypes
 - MPI constants capitalized
 - mpi_ prefix reserved for MPI library
scrasmussen committed Dec 13, 2023
1 parent 12c6595 commit f97a5fa
Showing 5 changed files with 46 additions and 42 deletions.
24 changes: 14 additions & 10 deletions docs/source/bmi.control_funcs.rst
@@ -18,26 +18,30 @@ updating.
.. code-block:: java
/* SIDL */
-int parallel_initialize(in integer mpi_communicator);
+int parallel_initialize(in integer comm);
The `parallel_initialize` function initializes the model for running
in a parallel environment.
-It initializes the MPI communicator that the model should use to
-communicate between all of its threads.
+It sets the MPI communicator that the model should use to
+communicate between all of its ranks.
The `parallel_initialize` function must be called before the
`initialize` function.
-This communicator could be ``mpi_comm_world``,
+This communicator could be ``MPI_COMM_WORLD``,
but it is typically a derived communicator across a subset of the
-MPI threads available for the whole simulation.
+MPI ranks available for the whole simulation.

**Implementation notes**

* This function is only needed for MPI aware models.
-* Models should be refactored, if necessary, to accept the mpi_communicator
+* Models should be refactored, if necessary, to accept the MPI communicator
via the model API.
-* The MPI communicator is not in all environments represented by an integer.
-**TODO**: check with experts.
+* The MPI communicator in the Fortran ``mpi_f08`` module is of type
+``MPI_Comm``. The integer value of a variable ``foo`` of type ``MPI_Comm`` can
+be accessed as ``foo%MPI_VAL``. This might be needed when interacting with
+non-Fortran models and with Fortran models using the ``mpi`` module.
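
As an illustration of the last point, the following C sketch (not part of the BMI specification; the ``Model`` struct and function shape are hypothetical) shows how a C model could turn an integer communicator handle received through ``parallel_initialize`` -- for example the ``comm%MPI_VAL`` of a Fortran driver -- back into an ``MPI_Comm`` and query its rank and size:

.. code-block:: c

   #include <mpi.h>

   /* Hypothetical model state; not defined by BMI. */
   typedef struct {
       MPI_Comm comm;
       int rank;
       int size;
   } Model;

   /* Sketch of parallel_initialize for a C model handed an integer
      (Fortran-style) communicator handle. */
   int parallel_initialize(Model *model, int comm_handle)
   {
       model->comm = MPI_Comm_f2c((MPI_Fint) comm_handle);
       if (MPI_Comm_rank(model->comm, &model->rank) != MPI_SUCCESS)
           return 1;  /* nonzero status: failure */
       if (MPI_Comm_size(model->comm, &model->size) != MPI_SUCCESS)
           return 1;
       return 0;      /* zero status: success */
   }

A Fortran model built against ``mpi_f08`` can instead assign the received handle to the ``MPI_VAL`` component of an ``MPI_Comm`` variable.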



[:ref:`control_funcs` | :ref:`basic_model_interface`]

@@ -73,9 +77,9 @@ formatted.
a string -- a basic type in these languages.
* In C and Fortran, an integer status code indicating success (zero) or failure (nonzero)
is returned. In C++, Java, and Python, an exception is raised on failure.
-* *Parallel*: When a model runs across multiple MPI threads, the `parallel_initialize`
+* *Parallel*: When a model runs across multiple MPI ranks, the `parallel_initialize`
should be called first to make sure that the model can communicate with
-the other MPI threads on which it runs.
+the other MPI ranks on which it runs.

[:ref:`control_funcs` | :ref:`basic_model_interface`]

24 changes: 12 additions & 12 deletions docs/source/bmi.getter_setter.rst
@@ -48,8 +48,8 @@ even if the model uses dimensional variables.
variable may not be accessible after calling :ref:`finalize`.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-hence the size and content of the *dest* argument will vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+hence the size and content of the *dest* argument will vary per MPI rank.

[:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]

@@ -78,8 +78,8 @@ even if the model's state has changed.
* In Python, a :term:`numpy` array is returned.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: the reference returned will vary per MPI thread.
-It refers only to the data for the thread considered.
+* *Parallel*: the reference returned will vary per MPI rank.
+It refers only to the data for the rank considered.

[:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]

@@ -106,9 +106,9 @@ Additionally,

* Both *dest* and *inds* are flattened arrays.
* The *inds* argument is always of type integer.
-* *Parallel*: the indices are the *local* indices within the MPI thread.
-The number of indices for which data is retrieved may vary per MPI thread.
-The length and content of the *dest* argument will vary per MPI thread.
+* *Parallel*: the indices are the *local* indices within the MPI rank.
+The number of indices for which data is retrieved may vary per MPI rank.
+The length and content of the *dest* argument will vary per MPI rank.
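
To make the per-rank behaviour concrete, here is a minimal C sketch (assuming the BMI C binding's ``get_value_at_indices`` signature and an illustrative variable name; neither is prescribed by this change) in which each rank requests only the items it owns locally:

.. code-block:: c

   #include <stdio.h>
   #include "bmi.h"   /* assumed BMI C header providing the Bmi struct */

   /* Each rank passes its own *local* indices, so both the number of
      indices and the length of dest differ from rank to rank. */
   void read_local_values(Bmi *model, int rank)
   {
       int inds_rank0[] = {0, 1, 2};   /* e.g. rank 0 owns three items */
       int inds_other[] = {0, 1};      /* every other rank owns two    */
       int *inds = (rank == 0) ? inds_rank0 : inds_other;
       int count = (rank == 0) ? 3 : 2;
       double dest[3];                 /* sized for the largest request */

       model->get_value_at_indices(model, "plate_surface__temperature",
                                   dest, inds, count);
       for (int i = 0; i < count; ++i)
           printf("rank %d, local index %d: %g\n", rank, inds[i], dest[i]);
   }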

[:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]

@@ -144,8 +144,8 @@ even if the model uses dimensional variables.
variable may not be accessible after calling :ref:`finalize`.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-hence the size and content of the *src* argument will vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+hence the size and content of the *src* argument will vary per MPI rank.

[:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]

@@ -171,8 +171,8 @@ Additionally,

* Both *src* and *inds* are flattened arrays.
* The *inds* argument is always of type integer.
-* *Parallel*: the indices are the *local* indices within the MPI thread.
-The number of indices for which data is set may vary per MPI thread.
-The length and content of the *src* argument will vary per MPI thread.
+* *Parallel*: the indices are the *local* indices within the MPI rank.
+The number of indices for which data is set may vary per MPI rank.
+The length and content of the *src* argument will vary per MPI rank.

[:ref:`getter_setter_funcs` | :ref:`basic_model_interface`]
30 changes: 15 additions & 15 deletions docs/source/bmi.grid_funcs.rst
@@ -113,7 +113,7 @@ for :ref:`unstructured <unstructured_grids>` and
size is returned from the function.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: this function returns the *total number* of elements across all threads.
+* *Parallel*: this function returns the *total number* of elements across all ranks.
For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -320,7 +320,7 @@ See :ref:`model_grids` for more information.
(nonzero) is returned.
* *Parallel*: the coordinates returned only concern the index range
returned by :ref:`get_grid_partition_range`.
-The length and content of the *x* argument will vary per MPI thread.
+The length and content of the *x* argument will vary per MPI rank.
Where partitions overlap, they MUST return the same coordinate values.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -354,7 +354,7 @@ The length of the resulting one-dimensional array depends on the grid type.
(nonzero) is returned.
* *Parallel*: the coordinates returned only concern the index range
returned by :ref:`get_grid_partition_range`.
-The length and content of the *y* argument will vary per MPI thread.
+The length and content of the *y* argument will vary per MPI rank.
Where partitions overlap, they MUST return the same coordinate values.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -388,7 +388,7 @@ The length of the resulting one-dimensional array depends on the grid type.
(nonzero) is returned.
* *Parallel*: the coordinates returned only concern the index range
returned by :ref:`get_grid_partition_range`.
-The length and content of the *z* argument will vary per MPI thread.
+The length and content of the *z* argument will vary per MPI rank.
Where partitions overlap, they MUST return the same coordinate values.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -598,7 +598,7 @@ Get the total number of :term:`nodes <node>` in the grid.
count is returned from the function.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: this function returns the *total number* of nodes across all threads.
+* *Parallel*: this function returns the *total number* of nodes across all ranks.
For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.
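
One way (not mandated by BMI) for a parallel model to produce this global total is to sum the locally owned, non-ghost node counts over the communicator supplied to ``parallel_initialize``:

.. code-block:: c

   #include <mpi.h>

   /* Sum each rank's owned (non-ghost) node count; every rank then
      reports the same grid-wide total, as required. */
   int global_node_count(MPI_Comm comm, int owned_local_nodes)
   {
       int total = 0;
       MPI_Allreduce(&owned_local_nodes, &total, 1, MPI_INT, MPI_SUM, comm);
       return total;
   }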

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -649,7 +649,7 @@ Get the total number of :term:`edges <edge>` in the grid.
count is returned from the function.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: this function returns the *total number* of edges across all threads.
+* *Parallel*: this function returns the *total number* of edges across all ranks.
For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -700,7 +700,7 @@ Get the total number of :term:`faces <face>` in the grid.
count is returned from the function.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: this function returns the *total number* of faces across all threads.
+* *Parallel*: this function returns the *total number* of faces across all ranks.
For a parallel model this is *not* the length of the arrays returned by :ref:`get_grid_x` and :ref:`get_grid_y`.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
@@ -756,8 +756,8 @@ node at edge head. The total length of the array is
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
* *Parallel*: this function returns the connectivity for the edges
-and nodes on the current thread, hence the length and content of
-*edge_nodes* varies per MPI thread.
+and nodes on the current rank, hence the length and content of
+*edge_nodes* varies per MPI rank.
The total length of the array is
2 * :ref:`get_grid_partition_edge_count`.
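
For example, a caller could read this rank's connectivity as follows (a sketch assuming the BMI C binding's ``get_grid_edge_nodes`` signature; the local edge count would come from the partition edge count for this rank):

.. code-block:: c

   #include <stdio.h>
   #include <stdlib.h>
   #include "bmi.h"   /* assumed BMI C header providing the Bmi struct */

   /* Each edge contributes two entries (tail node, head node), so the
      array holds 2 * local_edge_count local node indices. */
   void print_local_edges(Bmi *model, int grid, int local_edge_count)
   {
       int *edge_nodes = malloc(2 * local_edge_count * sizeof(int));
       if (edge_nodes == NULL)
           return;
       model->get_grid_edge_nodes(model, grid, edge_nodes);
       for (int e = 0; e < local_edge_count; ++e)
           printf("edge %d: node %d -> node %d\n",
                  e, edge_nodes[2 * e], edge_nodes[2 * e + 1]);
       free(edge_nodes);
   }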

@@ -788,8 +788,8 @@ The length of the array returned is the sum of the values of
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
* *Parallel*: this function returns the connectivity for the faces
-and edges on the current thread, hence the length and content of
-*face_edges* varies per MPI thread.
+and edges on the current rank, hence the length and content of
+*face_edges* varies per MPI rank.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]

@@ -824,8 +824,8 @@ the length of the array is the sum of the values of
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
* *Parallel*: this function returns the connectivity for the faces
-and nodes on the current thread, hence the length and content of
-*face_nodes* varies per MPI thread.
+and nodes on the current rank, hence the length and content of
+*face_nodes* varies per MPI rank.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]

@@ -855,7 +855,7 @@ The number of edges per face is equal to the number of nodes per face.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
* *Parallel*: this function returns the number of nodes per face on the
-current thread, hence the length and content of
-*nodes_per_face* varies per MPI thread.
+current rank, hence the length and content of
+*nodes_per_face* varies per MPI rank.

[:ref:`grid_funcs` | :ref:`basic_model_interface`]
6 changes: 3 additions & 3 deletions docs/source/bmi.spec.rst
@@ -22,9 +22,9 @@ grouped by functional category.

**Implementation notes**

-* *Parallel*: All functions MUST be called on all MPI threads.
-When a function returns a status code, the value returned SHOULD be the same across all MPI threads.
-All other return arguments MUST be the same across all MPI threads unless explicitly stated otherwise.
+* *Parallel*: All functions MUST be called on all MPI ranks.
+When a function returns a status code, the value returned SHOULD be the same across all MPI ranks.
+All other return arguments MUST be the same across all MPI ranks unless explicitly stated otherwise.
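
A coupler-side sketch of how these rules might be checked in practice (not part of the specification; status codes are assumed to be nonnegative, with zero meaning success): every rank calls the function, then the worst status code is shared so all ranks agree on whether the step succeeded.

.. code-block:: c

   #include <mpi.h>
   #include "bmi.h"   /* assumed BMI C header providing the Bmi struct */

   /* Call update() on every rank, then agree on the outcome: MPI_MAX over
      nonnegative status codes yields nonzero if any rank failed. */
   int update_all_ranks(Bmi *model, MPI_Comm comm)
   {
       int status = model->update(model);
       int worst = status;
       MPI_Allreduce(&status, &worst, 1, MPI_INT, MPI_MAX, comm);
       return worst;
   }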

.. table:: **Table 3:** Summary of BMI functions.
:align: center
4 changes: 2 additions & 2 deletions docs/source/bmi.var_funcs.rst
@@ -150,8 +150,8 @@ a variable; i.e., the number of items multiplied by the size of each item.
amount of memory used by the variable is returned from the function.
* In C and Fortran, an integer status code indicating success (zero) or failure
(nonzero) is returned.
-* *Parallel*: the number of items may vary per MPI thread,
-hence the value returned will typically vary per MPI thread.
+* *Parallel*: the number of items may vary per MPI rank,
+hence the value returned will typically vary per MPI rank.
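
As a concrete illustration (assuming the BMI C binding's ``get_var_nbytes`` and ``get_var_itemsize`` signatures and an illustrative variable name), each rank can recover its local item count from the rank-dependent byte count:

.. code-block:: c

   #include <stdio.h>
   #include "bmi.h"   /* assumed BMI C header providing the Bmi struct */

   /* The per-item size is the same on every rank; the byte count is not,
      because the number of locally held items differs. */
   void report_local_storage(Bmi *model, int rank)
   {
       int nbytes = 0, itemsize = 0;
       model->get_var_nbytes(model, "plate_surface__temperature", &nbytes);
       model->get_var_itemsize(model, "plate_surface__temperature", &itemsize);
       printf("rank %d holds %d items (%d bytes)\n",
              rank, itemsize > 0 ? nbytes / itemsize : 0, nbytes);
   }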

[:ref:`var_funcs` | :ref:`basic_model_interface`]

