Adding content for stub build and CCPP phases, formatting and style fixes (#61)
mkavulich committed Jun 28, 2022
1 parent 93f3f31 commit 4a9aae6
Showing 6 changed files with 89 additions and 26 deletions.
5 changes: 3 additions & 2 deletions CCPPtechnical/source/CCPPDebug.rst
@@ -158,7 +158,7 @@ Below is an example for an SDF that prints debugging output from the standard/pe
<?xml version="1.0" encoding="UTF-8"?>
- <suite name="FV3_GFS_v15p2" lib="ccppphys" ver="4">
+ <suite name="FV3_GFS_v16" version="1">
<!-- <init></init> -->
<group name="fast_physics">
...
@@ -172,7 +172,8 @@ Below is an example for an SDF that prints debugging output from the standard/pe
<scheme>GFS_diagtoscreen</scheme>
<scheme>GFS_interstitialtoscreen</scheme>
<scheme>GFS_rrtmg_pre</scheme>
- <scheme>rrtmg_sw_pre</scheme>
+ <scheme>GFS_radiation_surface</scheme>
+ <scheme>rad_sw_pre</scheme>
<scheme>rrtmg_sw</scheme>
<scheme>rrtmg_sw_post</scheme>
<scheme>rrtmg_lw_pre</scheme>
25 changes: 22 additions & 3 deletions CCPPtechnical/source/CCPPPreBuild.rst
@@ -157,7 +157,7 @@ An example invocation of running the script (called from the SCM’s top level d
./ccpp/framework/scripts/ccpp_prebuild.py \
--config=./ccpp/config/ccpp_prebuild_config.py \
- --suites=FV3_GFS_v15p2 \
+ --suites=FV3_GFS_v16 \
--verbose
which uses a configuration script located at the specified path. The ``--verbose`` option can be used for more verbose output from the script.
@@ -168,7 +168,7 @@ The :term:`SDF`\(s) to compile into the executable can be specified using the ``
./ccpp/framework/scripts/ccpp_prebuild.py \
--config=./ccpp/config/ccpp_prebuild_config.py \
- --suites=FV3_GFS_v15p2,FV3_GFS_v16beta
+ --suites=FV3_GFS_v16,RRFS_v1beta
.. note::

@@ -185,7 +185,7 @@ To remove all files created by ``ccpp_prebuild.py``, for example as part of a ho
.. code-block:: console
./ccpp/framework/scripts/ccpp_prebuild.py --config=./ccpp/config/ccpp_prebuild_config.py \
- --suites=FV3_GFS_v15p2,FV3_GFS_v16beta --clean
+ --suites=FV3_GFS_v16,RRFS_v1beta --clean
=============================
Troubleshooting
@@ -301,3 +301,22 @@ If invoking the ``ccpp_prebuild.py`` script fails, some message other than the s
Note: One error that the ``ccpp_prebuild.py`` script will not catch is if a physics scheme lists a variable in its actual (Fortran) argument list without a corresponding entry in the subroutine’s variable metadata. This will lead to a compilation error when the autogenerated scheme cap is compiled:

``Error: Missing actual argument for argument 'X' at (1)``
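
For example (a hypothetical sketch), if ``tsfc`` is declared in the Fortran argument list of ``myscheme_run`` but ``myscheme.meta`` has no matching ``[tsfc]`` entry, the autogenerated cap calls the subroutine without that argument, and the compiler reports the error above for argument ``'tsfc'``:

.. code-block:: fortran

   ! myscheme.F90: 'tsfc' is a dummy argument of the scheme,
   ! but myscheme.meta lists only ncol, errmsg, and errflg,
   ! so the autogenerated cap calls myscheme_run without 'tsfc'.
   subroutine myscheme_run(ncol, tsfc, errmsg, errflg)
      integer,          intent(in)  :: ncol
      real,             intent(in)  :: tsfc(:)
      character(len=*), intent(out) :: errmsg
      integer,          intent(out) :: errflg
      errmsg = ''
      errflg = 0
   end subroutine myscheme_run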

========================================================
CCPP Stub Build
========================================================

New in version 6.0, CCPP includes a *stub* capability, which builds the basic software caps needed to compile the host model but does not include any of the physics itself. This can be useful for host model debugging, testing "dry" dynamics with no parameterizations, and other use cases where building the whole CCPP physics library would be unnecessary. Currently this capability is only supported for the UFS Atmosphere.

To create the stub software caps, use the provided stub configuration file ``ccpp/framework/stub/ccpp_prebuild_config.py``
instead of the host configuration file described above. From the ``ccpp/framework/stub`` directory,
the prebuild script and the stub build are invoked as follows:

.. code-block:: console

   ../scripts/ccpp_prebuild.py --config=ccpp_prebuild_config.py
   cmake . 2>&1 | tee log.cmake
   make 2>&1 | tee log.make

The rest of the UFS Atmosphere build can proceed as normal.

65 changes: 52 additions & 13 deletions CCPPtechnical/source/CompliantPhysicsParams.rst
@@ -64,13 +64,33 @@ connect two or more schemes together, or provide code for conversions, initializ
tendencies, for example. The rules and guidelines provided in the following sections apply both to
primary and interstitial schemes.

CCPP-compliant physics parameterizations are broken down into one or more of the following five *phases*:

* The *init* phase, which performs actions needed to set up the scheme before the model integration
begins. Examples of actions needed in this phase include the reading/computation of
lookup tables, setting of constants (as described in :numref:`Section %s <UsingConstants>`), etc.
* The *timestep_init* phase, which performs actions needed at the start of each physics timestep.
Examples of actions needed in this phase include updating of time-based settings (e.g. solar angle),
reading lookup table values, etc.
* The *run* phase, which is the main body of the scheme. Here is where the physics is integrated
forward to the next timestep.
* The *timestep_finalize* phase, which performs post-integration calculations such as computing
statistics or diagnostic tendencies. Not currently used by any scheme.
* The *finalize* phase, which performs cleanup and finalizing actions at the end of model integration.
Examples of actions needed in this phase include deallocating variables, closing files, etc.

The various phases have different rules when it comes to parallelization, especially with regards
to how data is blocked among parallel processes; see :numref:`Section %s <ParallelProgramming>`
for more information.
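
A minimal sketch of how these phases map onto code is shown below; the scheme name ``myscheme`` and the empty subroutine bodies are hypothetical, and the naming rules are described in :numref:`Section %s <GeneralRules>`. Only the phases a scheme actually needs must be implemented, and each entry point carries a matching ``[ccpp-arg-table]`` section in the scheme's ``.meta`` file.

.. code-block:: fortran

   module myscheme

      implicit none
      private
      public :: myscheme_init, myscheme_timestep_init, myscheme_run, &
                myscheme_timestep_finalize, myscheme_finalize

   contains

      ! init phase: one-time setup before the model integration begins,
      ! e.g. reading lookup tables or setting constants
      subroutine myscheme_init(errmsg, errflg)
         character(len=*), intent(out) :: errmsg
         integer,          intent(out) :: errflg
         errmsg = ''
         errflg = 0
      end subroutine myscheme_init

      ! timestep_init phase: executed at the start of each physics timestep,
      ! e.g. updating time-based settings such as the solar angle
      subroutine myscheme_timestep_init(errmsg, errflg)
         character(len=*), intent(out) :: errmsg
         integer,          intent(out) :: errflg
         errmsg = ''
         errflg = 0
      end subroutine myscheme_timestep_init

      ! run phase: the main body of the scheme, integrating the physics
      ! forward to the next timestep
      subroutine myscheme_run(errmsg, errflg)
         character(len=*), intent(out) :: errmsg
         integer,          intent(out) :: errflg
         errmsg = ''
         errflg = 0
      end subroutine myscheme_run

      ! timestep_finalize phase: post-integration calculations at the end
      ! of each physics timestep
      subroutine myscheme_timestep_finalize(errmsg, errflg)
         character(len=*), intent(out) :: errmsg
         integer,          intent(out) :: errflg
         errmsg = ''
         errflg = 0
      end subroutine myscheme_timestep_finalize

      ! finalize phase: cleanup at the end of the model integration,
      ! e.g. deallocating variables or closing files
      subroutine myscheme_finalize(errmsg, errflg)
         character(len=*), intent(out) :: errmsg
         integer,          intent(out) :: errflg
         errmsg = ''
         errflg = 0
      end subroutine myscheme_finalize

   end module myscheme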

.. _GeneralRules:

General Rules
=============
A CCPP-compliant scheme is written in the form of Fortran modules. Each scheme must be in its own module, and must include at least one of the
following subroutines (*entry points*): *_init*, *_timestep_init*, *_run*, *_timestep_finalize*,
-and *_finalize*. The module name and the subroutine names must be consistent with the
+and *_finalize*. Each subroutine corresponds to one of the five *phases* of the CCPP framework as described above.
+The module name and the subroutine names must be consistent with the
scheme name; for example, the scheme "schemename" can have the entry points *schemename_init*,
*schemename_run*, etc. The *_run* subroutine contains the
code to execute the scheme. If subroutines *_timestep_init* or *_timestep_finalize* are present,
@@ -272,7 +292,7 @@ An example metadata file for the CCPP scheme ``mp_thompson.meta`` (with many sec
*Listing 2.3: Example metadata file for a CCPP-compliant physics scheme using a single*
``[ccpp-table-properties]`` *entry and how it defines dependencies for multiple* ``[ccpp-arg-table]`` *entries.
-In this example the* ``timestep_init`` *and* ``timestep_finalize`` *phases are not used*.
+In this example the* timestep_init *and* timestep_finalize *phases are not used*.

ccpp-arg-table
--------------
@@ -348,19 +368,21 @@ For each CCPP compliant scheme, the ``ccpp-arg-table`` for a scheme, module or d

It is important to understand the difference between these metadata dimension names.

-* ``horizontal_dimension`` refers to all (horizontal) grid columns that an MPI process owns/is responsible for, and that are passed to the physics in the ``init``, ``timestep_init``, ``timestep_final``, and ``final`` phases.
+* ``horizontal_dimension`` refers to all (horizontal) grid columns that an MPI process owns/is responsible for, and that are passed to the physics in the *init*, *timestep_init*, *timestep_finalize*, and *finalize* phases.

-* ``horizontal_loop_extent`` or, equivalently, ``ccpp_constant_one:horizontal_loop_extent`` stands for a subset of grid columns that are passed to the physics during the time integration, i.e. in the ``run`` phase.
+* ``horizontal_loop_extent`` or, equivalently, ``ccpp_constant_one:horizontal_loop_extent`` stands for a subset of grid columns that are passed to the physics during the time integration, i.e. in the *run* phase.

* Note that ``horizontal_loop_extent`` is identical to ``horizontal_dimension`` for host models that pass all columns to the physics during the time integration.

Since physics developers cannot know whether a host model passes all columns or only a subset of them to the physics during the time integration, the following rules apply to all schemes (a metadata sketch follows the list below):

* Variables that depend on the horizontal decomposition must use

-  * ``horizontal_dimension`` in the metadata tables for the following phases: ``init``, ``timestep_init``, ``timestep_final``, ``final``.
+  * ``horizontal_dimension`` in the metadata tables for the following phases: *init*, *timestep_init*, *timestep_finalize*, *finalize*.

-  * ``horizontal_loop_extent`` or ``ccpp_constant_one:horizontal_loop_extent`` in the ``run`` phase.
+  * ``horizontal_loop_extent`` or ``ccpp_constant_one:horizontal_loop_extent`` in the *run* phase.
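
As a sketch of these rules (the scheme name ``myscheme`` and variable ``tsfc`` are hypothetical), the same field is declared with different dimensions in the *run* phase than in the other phases:

.. code-block:: console

   [ccpp-arg-table]
     name = myscheme_run
     type = scheme
   [tsfc]
     standard_name = surface_skin_temperature
     long_name = surface skin temperature
     units = K
     dimensions = (horizontal_loop_extent)
     type = real
     kind = kind_phys
     intent = in

   [ccpp-arg-table]
     name = myscheme_init
     type = scheme
   [tsfc]
     standard_name = surface_skin_temperature
     long_name = surface skin temperature
     units = K
     dimensions = (horizontal_dimension)
     type = real
     kind = kind_phys
     intent = in
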
.. _StandardNames:

Standard names
==============
@@ -597,38 +619,55 @@ Within the ``_init`` subroutine body, the constants in the ``my_scheme_common``
end subroutine my_scheme_finalize
end module my_scheme
After this point, physical constants can be imported from ``my_scheme_common`` wherever they are
needed. Although there may be some duplication in memory, constants within the scheme will be
guaranteed to be consistent with the rest of physics and will only be set/derived once during the
initialization phase. Of course, this will require that any constants in ``my_scheme_common`` that
are coming from the host model cannot use the Fortran ``parameter`` keyword. To guard against
inadvertently using constants in ``my_scheme_common`` without setting them from the host, they
should be initially set to some invalid value. The above example also demonstrates the use of
``is_initialized`` to guarantee idempotence of the ``_init`` routine. To clean up during the
finalize phase of the scheme, the ``is_initialized`` flag can be set back to false and the
constants can be set back to an invalid value.

In summary, there are two ways to pass constants to a physics scheme. The first is to directly pass constants via the subroutine interface and continue passing them down to all subroutines as needed. The second is to have a user-specified scheme constants module within the scheme and to sync it once with the physical constants from the host model at initialization time. The approach to use is somewhat up to the developer.
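
A brief sketch of the first approach (all names are hypothetical; ``kind_phys`` is assumed to come from the ``machine`` module): the constants travel through the argument list, are described in the scheme's metadata like any other variables, and are passed on to worker routines as needed.

.. code-block:: fortran

   subroutine myscheme_run(ncol, con_g, con_cp, errmsg, errflg)
      use machine, only: kind_phys
      integer,              intent(in)  :: ncol    ! number of columns
      real(kind=kind_phys), intent(in)  :: con_g   ! gravitational acceleration
      real(kind=kind_phys), intent(in)  :: con_cp  ! specific heat of dry air at constant pressure
      character(len=*),     intent(out) :: errmsg
      integer,              intent(out) :: errflg
      errmsg = ''
      errflg = 0
      ! ... use con_g and con_cp here, and pass them on to any worker routines ...
   end subroutine myscheme_run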

.. note::

Use of the *physcons* module (``ccpp-physics/physics/physcons.F90``) is **not recommended**, since it is specific to FV3 and will be removed in the future.

.. _ParallelProgramming:

Parallel Programming Rules
==========================

Most often, shared memory (OpenMP: Open Multi-Processing) and distributed memory (MPI: Message Passing Interface)
communication is done outside the physics, in which case the loops and arrays already
take into account the sizes of the threaded tasks through their input indices and array
-dimensions. The following rules should be observed when including OpenMP or MPI communication
-in a physics scheme:
+dimensions.
+
+The following rules should be observed when including OpenMP or MPI communication in a physics scheme:

* CCPP standards require that in every phase but the *run* phase, blocked data structures must be combined so that
  their entire contents are available to a given MPI task (i.e. the data structures cannot be further subdivided, or
  "chunked", within those phases). The *run* phase may be called by multiple threads in parallel, so data structures
  may be divided into blocks for that phase.

* Shared-memory (OpenMP) parallelization inside a scheme is allowed with the restriction
that the number of OpenMP threads to use is obtained from the host model as an ``intent(in)``
argument in the argument list (:ref:`Listing 6.2 <MandatoryVariables>`).

-* MPI communication is allowed in the ``_timestep_init``, ``_init``, ``_finalize``,
-  and ``_timestep_finalize`` phases for the purpose of computing, reading or writing
+* MPI communication is allowed in the *init*, *timestep_init*, *timestep_finalize*, and *finalize*
+  phases for the purpose of computing, reading or writing
  scheme-specific data that is independent of the host model’s data decomposition.

* If MPI is used, it is restricted to global communications: barrier, broadcast,
gather, scatter, reduction. Point-to-point communication is not allowed. The
MPI communicator must be passed to the physics scheme by the host model; the
use of ``MPI_COMM_WORLD`` is not allowed (:ref:`see list of mandatory variables <MandatoryVariables>`).

* An example of a valid use of MPI is the initial read of a lookup table of aerosol
  properties by one or more MPI processes, and its subsequent broadcast to all processes;
  a sketch of this pattern follows the list below.

* The implementation of reading and writing of data must be scalable to perform
efficiently from a few to thousands of tasks.
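
As a sketch of the lookup-table pattern above (all names are hypothetical; the communicator, rank, and root are among the mandatory variables supplied by the host model):

.. code-block:: fortran

   ! A sketch, not an actual CCPP-Physics scheme: the root rank fills a
   ! lookup table during the init phase and broadcasts it to all ranks
   ! over the communicator supplied by the host model.
   subroutine myscheme_init(mpicomm, mpirank, mpiroot, errmsg, errflg)
      use mpi
      integer,          intent(in)  :: mpicomm, mpirank, mpiroot
      character(len=*), intent(out) :: errmsg
      integer,          intent(out) :: errflg
      real    :: lut(256)   ! hypothetical lookup table
      integer :: ierr
      errmsg = ''
      errflg = 0
      if (mpirank == mpiroot) then
         lut = 0.0          ! stand-in for reading the table from disk
      end if
      ! global (collective) communication only; no point-to-point calls
      call MPI_Bcast(lut, size(lut), MPI_REAL, mpiroot, mpicomm, ierr)
      if (ierr /= MPI_SUCCESS) then
         errflg = 1
         errmsg = 'MPI_Bcast failed in myscheme_init'
      end if
   end subroutine myscheme_init
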
2 changes: 1 addition & 1 deletion CCPPtechnical/source/ConstructingSuite.rst
@@ -25,7 +25,7 @@ The concept of grouping physics in the :term:`SDF` (reflected in the ``<group na
-----------------
Subcycling
-----------------
-The :term:`SDF` allows subcycling of schemes, or calling a subset of schemes at a smaller time step than others. The ``<subcycle loop = n>`` element in the :term:`SDF` controls this function. All schemes within such an element are called ``n`` times during one ``ccpp_physics_run`` call. An example of this is found in the ``FV3_GFS_v15.xml`` :term:`SDF`, where the surface schemes are executed twice for each timestep (implementing a predictor/corrector paradigm):
+The :term:`SDF` allows subcycling of schemes, or calling a subset of schemes at a smaller time step than others. The ``<subcycle loop = n>`` element in the :term:`SDF` controls this function. All schemes within such an element are called ``n`` times during one ``ccpp_physics_run`` call. An example of this is found in the ``suite_FV3_GFS_v16.xml`` :term:`SDF`, where the surface schemes are executed twice for each timestep (implementing a predictor/corrector paradigm):

.. code-block:: xml
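
As a generic illustration of the ``<subcycle>`` element (a sketch only; the scheme names shown are plausible surface-loop entries, not necessarily the verbatim contents of ``suite_FV3_GFS_v16.xml``):

.. code-block:: xml

   <subcycle loop="2">
     <scheme>sfc_diff</scheme>
     <scheme>GFS_surface_loop_control_part1</scheme>
     <scheme>sfc_nst_pre</scheme>
     <scheme>sfc_nst</scheme>
     <scheme>sfc_nst_post</scheme>
     <scheme>GFS_surface_loop_control_part2</scheme>
   </subcycle>
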
8 changes: 6 additions & 2 deletions CCPPtechnical/source/HostSideCoding.rst
Expand Up @@ -10,7 +10,7 @@ This chapter describes the connection of a host model with the pool of :term:`CC
Variable Requirements on the Host Model Side
==================================================

-All variables required to communicate between the host model and the physics, as well as to communicate between physics schemes, need to be allocated by the host model. An exception is variables ``errflg``, ``errmsg``, ``loop_cnt``, ``blk_no``, and ``thrd_no``, which are allocated by the CCPP Framework, as explained in :numref:`Section %s <DataStructureTransfer>`. A list of all variables required for the current pool of physics can be found in ``ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES_XYZ.pdf`` (XYZ: SCM, FV3).
+All variables required to communicate between the host model and the physics, as well as to communicate between physics schemes, need to be allocated by the host model. The exceptions are the variables ``errflg``, ``errmsg``, ``loop_cnt``, ``loop_max``, ``blk_no``, and ``thrd_no``, which are allocated by the CCPP Framework, as explained in :numref:`Section %s <DataStructureTransfer>`. See :numref:`Section %s <StandardNames>` for information about the variables required for the current pool of CCPP physics.

At present, only two types of variable definitions are supported by the CCPP Framework:

@@ -20,7 +20,7 @@ At present, only two types of variable definitions are supported by the CCPP Fra
.. _VariableTablesHostModel:

==================================================
-Metadata for Variable in the Host Model
+Metadata for Variables in the Host Model
==================================================

To establish the link between host model variables and physics scheme variables, the host model must provide metadata information similar to those presented in :numref:`Section %s <MetadataRules>`. The host model can have multiple metadata files (``.meta``), each with the required ``[ccpp-table-properties]`` section and the related ``[ccpp-arg-table]`` sections. The host model Fortran files contain three-line snippets to indicate the location for insertion of the metadata information contained in the corresponding section in the ``.meta`` file.
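
As an illustrative sketch (the derived type ``mymodel_state_type`` is hypothetical), such a three-line snippet sits directly above the definition it documents:

.. code-block:: fortran

   !> \section arg_table_mymodel_state_type Argument Table
   !! \htmlinclude mymodel_state_type.html
   !!
   type mymodel_state_type
      real, allocatable :: tsfc(:)   ! surface skin temperature
   end type mymodel_state_type
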
@@ -722,8 +722,12 @@ The purpose of the host model *cap* is to abstract away the communication betwee
*Listing 6.7: Fortran template for a CCPP host model cap. After each call to ``ccpp_physics_*``, the host model should check the return code ``ierr`` and handle any errors (omitted for readability).*

Readers are referred to the actual implementations of the cap functions in the CCPP-SCM and the UFS for further information. For the SCM, the cap functions are implemented in:

* ``ccpp-scm/scm/src/scm.F90``
* ``ccpp-scm/scm/src/scm_type_defs.F90``
* ``ccpp-scm/scm/src/scm_setup.F90``
* ``ccpp-scm/scm/src/scm_time_integration.F90``

For the UFS, the cap functions can be found in ``ufs-weather-model/FV3/ccpp/driver/CCPP_driver.F90``.


10 changes: 5 additions & 5 deletions CCPPtechnical/source/ScientificDocRules.inc
@@ -541,11 +541,11 @@ In order to generate the doxygen-based documentation, you will need to follow fi

``doxygen ccpp_doxyfile``

Running this command may generate warnings or errors that need to be fixed
in order to produce proper output. The location and type of output
(HTML, LaTeX, etc.) are specified in the configuration file.
The generated HTML documentation can be viewed by pointing an HTML
browser to the ``index.html`` file in the ``./docs/doc/html/`` directory.

For precise instructions or other help creating the scientific documentation, contact the CCPP
Forum at https://dtcenter.org/forum/ccpp-user-support.
