Final updates for v6 (#62)
mkavulich committed Jun 29, 2022
1 parent 4a9aae6 commit f73fc3a
Showing 13 changed files with 222 additions and 128 deletions.
4 changes: 2 additions & 2 deletions CCPPtechnical/source/Acronyms.rst
@@ -92,6 +92,8 @@ Acronyms
| NUOPC | National Unified Operational Prediction |
| | Capability |
+----------------+---------------------------------------------------+
| NWP | Numerical Weather Prediction |
+----------------+---------------------------------------------------+
| OpenMP | Open Multi-Processing |
+----------------+---------------------------------------------------+
| PBL | Planetary Boundary Layer |
@@ -138,7 +140,5 @@ Acronyms
+----------------+---------------------------------------------------+
| UFS | Unified Forecast System |
+----------------+---------------------------------------------------+
| VLab | Virtual Laboratory |
+----------------+---------------------------------------------------+
| WRF | Weather Research and Forecasting |
+----------------+---------------------------------------------------+
29 changes: 8 additions & 21 deletions CCPPtechnical/source/AddingNewSchemes.rst
@@ -4,34 +4,34 @@
Tips for Adding a New Scheme
****************************************

This chapter contains a brief description of how to add a new scheme to the *CCPP Physics* pool.
This chapter contains a brief description of how to add a new :term:`scheme` to the :term:`CCPP Physics` pool.

* Identify the variables required for the new scheme and check whether they are already available for use in the CCPP by examining the metadata information in ``GFS_typedefs.meta`` or by perusing the file ``ccpp-framework/doc/DevelopersGuide/CCPP_VARIABLES_{FV3,SCM}.pdf`` generated by ``ccpp_prebuild.py``.
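
  As an illustration, each variable available from the host model has an entry in ``GFS_typedefs.meta`` similar to the sketch below (the variable shown here is only an example; the file itself is the authoritative reference):

  .. code-block:: console

     [tgrs]
       standard_name = air_temperature
       long_name = model layer mean temperature
       units = K
       dimensions = (horizontal_loop_extent,vertical_layer_dimension)
       type = real
       kind = kind_phys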

* If the variables are already available, they can be invoked in the scheme’s metadata file and one can skip the rest of this subsection. If the variable required is not available, consider whether it can be calculated from the existing variables in the CCPP. If so, an interstitial scheme (such as ``scheme_pre``; see more in :numref:`Chapter %s <CompliantPhysParams>`) can be created to calculate the variable. However, the variable must be defined but not initialized in the host model as the memory for this variable must be allocated on the host model side. Instructions for how to add variables to the host model side are described in :numref:`Chapter %s <Host-side Coding>`.
* If the variables are already available, they can be invoked in the scheme’s metadata file and one can skip the rest of this subsection. If the variable required is not available, consider whether it can be calculated from the existing variables in the CCPP. If so, an :term:`interstitial scheme` (such as ``scheme_pre``; see more in :numref:`Chapter %s <CompliantPhysParams>`) can be created to calculate the variable. However, the variable must be defined but not initialized in the :term:`host model<Host model/application>` as the memory for this variable must be allocated on the host model side. Instructions for how to add variables to the host model side are described in :numref:`Chapter %s <Host-side Coding>`.
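
  A rough sketch of such an interstitial is shown below (all names, e.g. ``new_scheme_pre`` and ``delta_p``, are placeholders, not schemes in the CCPP Physics pool); the ``_run`` phase simply computes the derived quantity from variables requested through its metadata:

  .. code-block:: console

     module new_scheme_pre
        use machine, only: kind_phys
        implicit none
        private
        public :: new_scheme_pre_run
     contains
     !> \section arg_table_new_scheme_pre_run Argument Table
     !! \htmlinclude new_scheme_pre_run.html
        subroutine new_scheme_pre_run(nlev, prsi, delta_p, errmsg, errflg)
           integer,          intent(in)  :: nlev           ! number of vertical layers
           real(kind_phys),  intent(in)  :: prsi(:,:)      ! air pressure at interfaces (Pa)
           real(kind_phys),  intent(out) :: delta_p(:,:)   ! pressure thickness of each layer (Pa)
           character(len=*), intent(out) :: errmsg
           integer,          intent(out) :: errflg
           errmsg = ''
           errflg = 0
           ! derived quantity needed by the new scheme
           delta_p(:,1:nlev) = prsi(:,1:nlev) - prsi(:,2:nlev+1)
        end subroutine new_scheme_pre_run
     end module new_scheme_pre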

.. note:: The CCPP framework is capable of performing automatic unit conversions between variables provided by the host model and variables required by the new scheme. See :numref:`Section %s <AutomaticUnitConversions>` for details.
.. note:: The :term:`CCPP framework` is capable of performing automatic unit conversions between variables provided by the host model and variables required by the new scheme. See :numref:`Section %s <AutomaticUnitConversions>` for details.
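
  As a hypothetical illustration (the variable and units below were chosen only for this example, and the conversion must be one the framework supports), a host-side entry in ``Pa`` can be paired with a scheme-side request in ``hPa``; the framework then inserts the conversion in the autogenerated cap:

  .. code-block:: console

     # Host model metadata (e.g. GFS_typedefs.meta)
     [ps]
       standard_name = surface_air_pressure
       long_name = surface pressure
       units = Pa
       dimensions = (horizontal_loop_extent)
       type = real
       kind = kind_phys

     # Scheme metadata (e.g. new_scheme.meta)
     [ps_hpa]
       standard_name = surface_air_pressure
       long_name = surface pressure
       units = hPa
       dimensions = (horizontal_loop_extent)
       type = real
       kind = kind_phys
       intent = in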

* If new namelist variables need to be added, the ``GFS_control_type`` DDT should be used. In this case, it is also important to modify the namelist file ``input.nml`` to include the new variable.
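
  A minimal sketch of this pattern, using a made-up flag ``do_new_scheme`` (the existing namelist code in ``GFS_typedefs.F90`` is the authoritative reference):

  .. code-block:: console

     ! GFS_typedefs.F90: add the flag to GFS_control_type ...
     logical :: do_new_scheme                 !< activate the new scheme
     ! ... and to the namelist from which the control type is filled
     logical :: do_new_scheme = .false.
     namelist /gfs_physics_nml/ do_new_scheme

     ! input.nml: set the new variable in the corresponding namelist group
     &gfs_physics_nml
       do_new_scheme = .true.
     /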

* It is important to note that not all data types are persistent in memory. Most variables in the interstitial data type are reset (to zero or other initial values) at the beginning of a physics group and do not persist from one set to another or from one group to another. The diagnostic data type is periodically reset because it is used to accumulate variables for given time intervals. However, there is a small subset of interstitial variables that are set at creation time and are not reset; these are typically dimensions used in other interstitial variables.
* It is important to note that not all data types are persistent in memory. Most variables in the interstitial data type are reset (to zero or other initial values) at the beginning of a physics :term:`group` and do not persist from one :term:`set` to another or from one group to another. The diagnostic data type is periodically reset because it is used to accumulate variables for given time intervals. However, there is a small subset of interstitial variables that are set at creation time and are not reset; these are typically dimensions used in other interstitial variables.

.. note:: If the value of a variable must be remembered from one call to the next, it should not be in the interstitial or diagnostic data types.

* If information from the previous timestep is needed, it is important to identify if the host model readily provides this information. For example, in the Model for Prediction Across Scales (MPAS), variables containing the values of several quantities in the preceding timesteps are available. When that is not the case, as in the UFS Atmosphere, interstitial schemes are needed to compute these variables. As an example, the reader is referred to the GF convective scheme, which makes use of interstitials to obtain the previous timestep information.

* Consider allocating the new variable only when needed (i.e. when the new scheme is used and/or when a certain control flag is set). If this is a viable option, follow the existing examples in ``GFS_typedefs.F90`` and ``GFS_typedefs.meta`` for allocating the variable and setting the ``active`` attribute in the metadata correctly.
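
  A rough sketch of the pattern (the flag, variable, and standard names below are placeholders):

  .. code-block:: console

     ! GFS_typedefs.F90: allocate the variable only when the scheme is active
     if (Model%do_new_scheme) then
       allocate (Interstitial%new_var(IM, Model%levs))
       Interstitial%new_var = clear_val
     endif

     # GFS_typedefs.meta: record the condition in the active attribute
     [new_var]
       standard_name = example_variable_for_new_scheme
       long_name = example variable allocated only when the new scheme is used
       units = K
       dimensions = (horizontal_loop_extent,vertical_layer_dimension)
       type = real
       kind = kind_phys
       active = (flag_for_new_scheme)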

* If an entirely new variable needs to be added, consult the CCPP standard names dictionary and the rules for creating new standard names at https://github.com/escomp/CCPPStandardNames. If in doubt, use the GitHub discussions page in the CCPP Framework repository (https://github.com/ncar/ccpp-framework) to discuss the suggested new standard name(s) with the CCPP developers.
* If an entirely new variable needs to be added, consult the CCPP :term:`standard names<standard name>` dictionary and the rules for creating new standard names at https://github.com/escomp/CCPPStandardNames. If in doubt, use the GitHub discussions page in the CCPP Framework repository (https://github.com/ncar/ccpp-framework) to discuss the suggested new standard name(s) with the CCPP developers.
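
  For example, a new tendency variable would typically be named following the existing patterns in the dictionary (the name below is illustrative only and may not exist in the dictionary):

  .. code-block:: console

     standard_name: tendency_of_air_temperature_due_to_new_scheme
     long_name: air temperature tendency due to the new scheme
     units: K s-1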

* Examine scheme-specific and suite interstitials to see what needs to be replaced/changed; then check existing scheme interstitials and determine what needs to be replicated. Identify whether your new scheme requires additional interstitial code that must be run before or after the scheme and that cannot be part of the scheme itself, for example because of dependencies on other schemes and/or the order the scheme is run in the SDF.
* Examine scheme-specific and suite interstitials to see what needs to be replaced/changed; then check existing scheme interstitials and determine what needs to be replicated. Identify whether your new scheme requires additional interstitial code that must be run before or after the scheme and that cannot be part of the scheme itself, for example because of dependencies on other schemes and/or the order the scheme is run in the :term:`SDF`.
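
  If such interstitial code is needed, the typical arrangement in the SDF is sketched below (scheme names are placeholders):

  .. code-block:: console

     <group name="physics">
       <subcycle loop="1">
         <scheme>new_scheme_pre</scheme>
         <scheme>new_scheme</scheme>
         <scheme>new_scheme_post</scheme>
       </subcycle>
     </group>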

* Follow the guidelines outlined in :numref:`Chapter %s <CompliantPhysParams>` to make your scheme CCPP-compliant. Make sure to use an uppercase suffix ``.F90`` to enable C preprocessing.
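
  In addition to the Fortran source (e.g. ``new_scheme.F90``), a CCPP-compliant scheme requires a companion metadata file describing every argument of each entry point. A minimal sketch for a placeholder scheme ``new_scheme`` might look as follows (the error-handling arguments ``errmsg``/``errflg`` are part of every scheme entry point):

  .. code-block:: console

     [ccpp-table-properties]
       name = new_scheme
       type = scheme
       dependencies =

     [ccpp-arg-table]
       name = new_scheme_run
       type = scheme
     [errmsg]
       standard_name = ccpp_error_message
       long_name = error message for error handling in CCPP
       units = none
       dimensions = ()
       type = character
       kind = len=*
       intent = out
     [errflg]
       standard_name = ccpp_error_code
       long_name = error code for error handling in CCPP
       units = 1
       dimensions = ()
       type = integer
       intent = out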

* Locate the CCPP *prebuild* configuration files for the target host model, for example:

* ``ufs-weather-model/FV3/ccpp/config/ccpp_prebuild_config.py`` for the UFS Atmosphere
* ``ccpp-scm/ccpp/config/ccpp_prebuild_config.py`` for the SCM
* ``ufs-weather-model/FV3/ccpp/config/ccpp_prebuild_config.py`` for the :term:`UFS Atmosphere`
* ``ccpp-scm/ccpp/config/ccpp_prebuild_config.py`` for the :term:`SCM`

* Add the new scheme to the Python dictionary in ``ccpp_prebuild_config.py`` using the same path
as the existing schemes:
@@ -43,19 +43,6 @@ This chapter contains a brief description on how to add a new scheme to the *CCP
'../some_relative_path/new_scheme.F90',
...]
* If the new scheme uses optional arguments, add information on which ones to use further down in the configuration file. See existing entries and documentation in the configuration file for the possible options:

.. code-block:: console

   OPTIONAL_ARGUMENTS = {
       'SCHEME_NAME' : {
           'SCHEME_NAME_run' : [
               # list of all optional arguments in use for this
               # model, by standard_name ],
               # instead of list [...], can also say 'all' or 'none'
           },
       }
* Place new scheme in the same location as existing schemes in the CCPP directory structure, e.g., ``../some_relative_path/new_scheme.F90``.

* Edit the SDF and add the new scheme at the place it should be run. SDFs are located in
7 changes: 4 additions & 3 deletions CCPPtechnical/source/AutoGenPhysCaps.rst
Original file line number Diff line number Diff line change
@@ -4,11 +4,11 @@
Suite and Group *Caps*
****************************************

The connection between the host model and the physics schemes through the CCPP Framework
is realized with *caps* on both sides as illustrated in :numref:`Figure %s <ccpp_arch_host>`.
The connection between the :term:`host model<Host model/application>` and the physics :term:`schemes<scheme>` through the :term:`CCPP Framework`
is realized with :term:`caps<Physics cap>` on both sides as illustrated in :numref:`Figure %s <ccpp_arch_host>`.
The CCPP *prebuild* script discussed in :numref:`Chapter %s <ConfigBuildOptions>`
generates the *caps* that connect the physics schemes to the CCPP Framework.
This chapter describes the suite and group *caps*,
This chapter describes the :term:`suite<Physics Suite cap>` and :term:`group caps<Group cap>`,
while the host model *caps* are described in :numref:`Chapter %s <Host-side Coding>`.
These *caps*, autogenerated by ``ccpp_prebuild.py``, reside in the directory
defined by the ``CAPS_DIR`` variable (see example in :ref:`Listing 8.1 <ccpp_prebuild_example>`).
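
For reference, the script is typically run from the host model's top-level directory with a command along these lines (paths and suite names are illustrative and depend on the host model):

.. code-block:: console

   ./ccpp/framework/scripts/ccpp_prebuild.py \
       --config=./ccpp/config/ccpp_prebuild_config.py \
       --suites=FV3_GFS_v16 \
       --builddir=./build
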
@@ -33,6 +33,7 @@ The CCPP *prebuild* step performs the tasks below.
* Populate makefiles with schemes and *caps*.

The *prebuild* step will produce the following files for any host model. Note that the location of these files varies between host models and depends on whether an in-source or out-of-source build is used.

* List of variables provided by host model and required by physics:

.. code-block:: console
6 changes: 3 additions & 3 deletions CCPPtechnical/source/CCPPDebug.rst
@@ -22,15 +22,15 @@ Two categories of debugging with CCPP
Debugging the actual physical parameterizations is identical in CCPP and in physics-driver based models. The parameterizations have access to the same data and debug print statements can be added in exactly the same way.

* Debugging on a suite level
Debugging on a suite level, i.e. outside physical parameterizations, corresponds to debugging on the physics-driver level in traditional, physics-driver based models. In the CCPP, this can be achieved by using dedicated CCPP-compliant debugging schemes, which have access to all the data by requesting them via the metadata files. These schemes can then be called in any place in a SDF, except the ``fast_physics`` group, to produce the desired debugging output. The advantage of this approach is that debugging schemes can be moved from one place to another or duplicated by simply moving/copying a single line in the SDF before recompiling the code. The disadvantage is that different debugging schemes may be needed, depending on the host model and its data structures. For example, the UFS models use blocked data structures. The blocked data structures, commonly known as “GFS types”, are defined in ``GFS_typedefs.F90`` and exposed to the CCPP in ``GFS_typedefs.meta``. The rationale for this storage model is better cache reuse, achieved by breaking up contiguous horizontal grid columns into N blocks with a predefined block size and allocating each of the GFS types N times. For example, the 3-dimensional air temperature is stored as
Debugging on a suite level, i.e. outside physical parameterizations, corresponds to debugging on the physics-driver level in traditional, physics-driver based models. In the CCPP, this can be achieved by using dedicated CCPP-compliant debugging schemes, which have access to all the data by requesting them via the metadata files. These schemes can then be called in any place in an SDF, except the ``fast_physics`` group, to produce the desired debugging output. The advantage of this approach is that debugging schemes can be moved from one place to another or duplicated by simply moving/copying a single line in the SDF before recompiling the code. The disadvantage is that different debugging schemes may be needed, depending on the host model and its data structures. For example, the UFS models use blocked data structures. The blocked data structures, commonly known as “GFS types”, are defined in ``GFS_typedefs.F90`` and exposed to the CCPP in ``GFS_typedefs.meta``. The rationale for this storage model is better cache reuse, achieved by breaking up contiguous horizontal grid columns into N blocks with a predefined block size and allocating each of the GFS types N times. For example, the 3-dimensional air temperature is stored as

.. code-block:: console

   GFS_data(nb)%Statein%tgrs(1:IM,1:LM) with blocks nb=1,...,N
.. _codeblockends:

Further, the UFS models run a subset of physics inside the dynamical core (“fast physics”), for which the host model data is stored inside the dynamical core and cannot be shared with the traditional (“slow”) physics. As such, different debugging schemes are required for the ``fast_physics`` group.
Further, the UFS models run a subset of physics inside the dynamical core (“:term:`fast physics`”), for which the host model data is stored inside the dynamical core and cannot be shared with the traditional (“:term:`slow<slow physics>`”) physics. As such, different debugging schemes are required for the ``fast_physics`` group.
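
As noted above, a debugging scheme can be inserted at almost any point in an SDF. A sketch for the slow physics, assuming the debugging schemes ``GFS_diagtoscreen`` and ``GFS_interstitialtoscreen`` provided in ``GFS_debug.F90`` (the surrounding scheme name is a placeholder):

.. code-block:: console

   <group name="physics">
     <subcycle loop="1">
       <scheme>some_physics_scheme</scheme>
       <scheme>GFS_diagtoscreen</scheme>
       <scheme>GFS_interstitialtoscreen</scheme>
     </subcycle>
   </group>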


============================================
@@ -200,7 +200,7 @@ Below is an example for an SDF that prints debugging output from the standard/pe
How to customize the debugging schemes and the output for arrays in the UFS
---------------------------------------------------------------------------

At the top of ``GFS_debug.F90``, there are customization options in the form of preprocessor directives (CPP ``#ifdef`` etc. statements) and brief documentation. Users not familiar with preprocessor directives are referred to available documentation such as `Using fpp Preprocessor Directives <https://software.intel.com/content/www/us/en/develop/documentation/fortran-compiler-developer-guide-and-reference/top/optimization-and-programming-guide/fpp-preprocessing/using-fpp-preprocessor-directives.html>`_.
At the top of ``GFS_debug.F90``, there are customization options in the form of preprocessor directives (CPP ``#ifdef`` etc. statements) and brief documentation. Users not familiar with preprocessor directives are referred to available documentation such as `Using fpp Preprocessor Directives <https://www.intel.com/content/www/us/en/develop/documentation/fortran-compiler-oneapi-dev-guide-and-reference/top/optimization-and-programming/fpp-preprocessing/using-fpp-preprocessor-directives.html>`_.
At this point, three options exist: (1) full output of every element of each array if none of the #define preprocessor statements is used, (2) minimum, maximum, and mean value of arrays (default for GNU compiler), and (3) minimum, maximum, and 32-bit Adler checksum of arrays (default for Intel compiler). Note that Option (3), the Adler checksum calculation, cannot be used with gfortran (segmentation fault, bug in malloc?).

.. code-block:: console
