
Add support to use mpi_f08 MPI module #523

Merged · 24 commits · Mar 19, 2024

Commits (24)
03790c7
Switch to mpi_f08 module. Add MPI_Comm DDT
DusanJovic-NOAA Mar 24, 2022
29ce545
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA May 13, 2022
15edfdd
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Jun 14, 2022
fb895cc
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Jul 5, 2022
39be461
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Aug 10, 2022
47a1e9c
Fix local MPI_Comm type
DusanJovic-NOAA Sep 28, 2022
9fd0a45
Merge branch 'main' into no_arg_mismatch
DusanJovic-NOAA Oct 13, 2022
1e08927
If MPI is used, find package
climbfuji Nov 7, 2022
c3c4c7c
Fixed cmake if/endif mismatch
DusanJovic-NOAA Nov 18, 2022
202e4d0
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Mar 16, 2023
583b243
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA May 4, 2023
1e25847
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Nov 3, 2023
0793203
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Dec 18, 2023
ad0b7c7
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Jan 16, 2024
6fa909f
Merge branch 'main' of https://github.com/NCAR/ccpp-framework into HEAD
climbfuji Feb 23, 2024
2cee086
Make mpi_f08 a mandatory dependency of ccpp-framework
climbfuji Feb 23, 2024
a4524b5
Update test_prebuild/test_blocked_data test_prebuild/test_chunked_dat…
climbfuji Feb 23, 2024
c87dbb7
Merge branch 'main' into no_arg_mismatch
climbfuji Feb 23, 2024
ab3bf01
Merge branch 'no_arg_mismatch' of https://github.com/dusanjovic-noaa/…
climbfuji Feb 23, 2024
13ec121
Bump minimum cmake version required
climbfuji Feb 23, 2024
44eb919
Merge pull request #1 from climbfuji/no_arg_mismatch
DusanJovic-NOAA Feb 26, 2024
c911911
Remove C from MPI find_package
DusanJovic-NOAA Feb 26, 2024
fbec3a8
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Mar 18, 2024
7384d92
Merge remote-tracking branch 'origin/main' into no_arg_mismatch
DusanJovic-NOAA Mar 19, 2024
CMakeLists.txt — 6 additions, 0 deletions
@@ -10,6 +10,12 @@ set(PACKAGE "ccpp-framework")
set(AUTHORS "Dom Heinzeller" "Grant Firl" "Mike Kavulich" "Steve Goldhaber")
string(TIMESTAMP YEAR "%Y")

#------------------------------------------------------------------------------
# Set MPI flags for C/C++/Fortran
if (MPI)
find_package(MPI REQUIRED C Fortran)
endif (MPI)

#------------------------------------------------------------------------------
# Set OpenMP flags for C/C++/Fortran
if (OPENMP)
src/ccpp_types.F90 — 11 additions, 0 deletions
@@ -19,6 +19,10 @@
!
module ccpp_types

#ifdef MPI
use mpi_f08, only: MPI_Comm
#endif
Collaborator:
Do we still need this #ifdef?
(question applies to all use of #ifdef MPI)

Collaborator:
The idea was that we want to be able to build without MPI altogether. In this case, the changes in this PR define a fake MPI_Comm. The interfaces to the physics don't have to change this way as long as the physics use mpi_comm as defined (no mpi) or re-exported (mpi_f08) by the framework.
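A minimal sketch (not from the PR; scheme and argument names are illustrative) of how this re-export keeps scheme interfaces unchanged: the scheme only ever imports MPI_Comm from ccpp_types, so it compiles identically whether the framework re-exported mpi_f08's MPI_Comm or substituted the fake derived type.

```fortran
! Hypothetical scheme: imports MPI_Comm from ccpp_types, not from mpi_f08,
! so it works with both the real (mpi_f08) and the fake (no-MPI) definition.
module example_scheme
   use ccpp_types, only: MPI_Comm
   implicit none
   private
   public :: example_scheme_init
contains
   subroutine example_scheme_init(mpicomm, errmsg, errflg)
      type(MPI_Comm),   intent(in)  :: mpicomm
      character(len=*), intent(out) :: errmsg
      integer,          intent(out) :: errflg
      errmsg = ''
      errflg = 0
   end subroutine example_scheme_init
end module example_scheme
```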

Collaborator:
What happens in the physics if it executes MPI commands (e.g., MPI_allgather at timestep init time) but MPI is faked?

Collaborator:
All these MPI calls unfortunately need to be in ifdefs, unless someone volunteers to write an entire stub MPI implementation, if we want to go down that route. If, however, we make MPI a strict requirement for the CCPP framework and physics, then we don't need to worry. For most of our models MPI is needed anyway, but the Single Column Model, for example, doesn't use it: it's only one column.

Collaborator:
You mean that all CCPP physics routines that use MPI also need these ifdefs?
How can they function correctly? You can't just ifdef out a call to MPI_allgather and expect correct behavior. I feel like I'm missing something here.

For a "stub" (really serial) version of MPI, have a look at https://github.com/ESMCI/mpi-serial
Is that what you are talking about?

Collaborator:
So this is what we need to decide - do we want to be able to exercise CCPP without MPI or make it a hard requirement. I am fine either way. If it's a hard requirement, we have to make this clear in the documentation. Users like the SCM will then have to use MPI with only one mpi task. Since SCM uses spack-stack, MPI is available and loaded anyway, because hdf5/netcdf are built with parallel I/O enabled.

Collaborator:
and yes, correct, for some of the mpi calls (that are not bcast for example) you need to put alternative code or use the ESMCI mpi-serial or something similar. Rather messy and maybe not worth the effort?
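To make the trade-off concrete, here is a hedged sketch (not code from the PR; subroutine and variable names are illustrative) of what guarding a non-trivial collective looks like: the MPI path calls MPI_Allgather, while the serial branch has to supply hand-written replacement logic rather than simply omitting the call.

```fortran
! Illustrative only: an #ifdef MPI guard around MPI_Allgather with a
! serial fallback, as discussed above. The fallback must reproduce the
! single-rank result by hand; it cannot just skip the call.
subroutine gather_counts(mpicomm, nranks, nlocal, counts)
   use ccpp_types, only: MPI_Comm
#ifdef MPI
   use mpi_f08,   only: MPI_Allgather, MPI_INTEGER
#endif
   implicit none
   type(MPI_Comm), intent(in)  :: mpicomm
   integer,        intent(in)  :: nranks, nlocal
   integer,        intent(out) :: counts(nranks)

#ifdef MPI
   call MPI_Allgather(nlocal, 1, MPI_INTEGER, counts, 1, MPI_INTEGER, mpicomm)
#else
   counts(1) = nlocal   ! serial fallback: the single "rank" owns all columns
#endif
end subroutine gather_counts
```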

Collaborator:
I feel like we should make some policy statement about this. I am not sure if it should be informational or an outright standard for the CCPP.
This statement should clarify whether MPI is assumed to be available for CCPP physics schemes.
@peverwhee, any objections to the #ifdef MPI approach? CESM is always built with MPI but this would make schemes (currently only QNEG I think) 'portable'.

Collaborator:
We can certainly implement the #ifdef if the wider community prefers that, or if it turns out to be difficult to implement MPI in models like SCM, but I think from the SIMA viewpoint we might lean more towards just making MPI a host-model requirement.

This is partially because as @gold2718 pointed out we are always building with MPI already (even when in single-column mode), partially because #ifdef can make the code harder to read and reduce runtime configurability, and partially because I am a little wary about being able to fully replicate the behavior of the MPI calls with our own personally-developed serial call, which means the schemes could have different behavior even if they are "portable".

Also I should note that most of our core, likely more scientifically-interesting, schemes don't directly use MPI at all, and so they should remain portable regardless.

Of course this is just the view from our side of things, and so we will be happy to implement the ifdef method if it makes it easier for NOAA, NRL, or other host model groups.

Collaborator:
Thanks for this summary @nusbaume. From all that I've read here and elsewhere, it looks like we are basically in agreement that we can make mpi_f08 a requirement for CCPP framework and physics. Even single column models will benefit from that when implementing multiple columns processed in parallel (for example in order to see the effect of perturbations on the same column input data ...) instead of running the same executable over and over again for a single column?


!! \section arg_table_ccpp_types
!! \htmlinclude ccpp_types.html
!!
@@ -27,6 +31,13 @@ module ccpp_types

private
public :: ccpp_t, one
public :: MPI_Comm

#ifndef MPI
type :: MPI_Comm
integer :: dummy = 0
end type MPI_Comm
#endif

!> @var Definition of constant one
integer, parameter :: one = 1
src/ccpp_types.meta — 16 additions, 0 deletions
@@ -57,6 +57,16 @@
dimensions = ()
type = integer

########################################################################
[ccpp-table-properties]
name = MPI_Comm
type = ddt
dependencies =

[ccpp-arg-table]
name = MPI_Comm
type = ddt

########################################################################

[ccpp-table-properties]
@@ -79,3 +89,9 @@
units = 1
dimensions = ()
type = integer
[MPI_Comm]
standard_name = MPI_Comm
long_name = definition of type MPI_Comm
units = DDT
dimensions = ()
type = MPI_Comm
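With the DDT registered as above, a scheme that consumes the communicator would declare it in its own metadata. The following fragment is illustrative only (the standard_name and units shown are assumptions; consult the CCPP standard names dictionary for the agreed values):

```
[mpicomm]
  standard_name = mpi_communicator
  long_name = MPI communicator for the physics
  units = index
  dimensions = ()
  type = MPI_Comm
```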