Commit 704f8c8 (Ref #76)
Added README files to MinMon, updated OznMon.
EdwardSafford-NOAA committed May 4, 2023
1 parent ace771f commit 704f8c8
Showing 4 changed files with 150 additions and 84 deletions.
4 changes: 2 additions & 2 deletions README.md

GSI Monitoring Tools

These tools monitor the Gridpoint Statistical Interpolation (GSI) package's data assimilation, detecting
and reporting missing data sources, low observational counts, and high penalty values. These machines
are supported: wcoss2, hera, orion, jet, cheyenne, s4.

Suite includes:
To use any of the monitors first build the executables. Navigate to GSI-monitor
Then see the README file in the monitor(s) of interest in the GSI-monitor/src directory.

Note that the higher level data extraction components for the MinMon, OznMon, and RadMon have been
relocated to the global-workflow repository and must be run as part of the vrfy job step. To run the
data extraction within an experimental run set these switches to "YES" in your
expdir/*/config.vrfy file:
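For reference, the three switches named in the monitor READMEs below (VRFYMIN, VRFYOZN, VRFYRAD) can be enabled with a fragment like the following. This is a sketch only; your config.vrfy contains many other settings that must be left intact:

```shell
# Sketch of a config.vrfy fragment enabling data extraction for all
# three monitors in the vrfy job (all other settings omitted).
export VRFYMIN="YES"   # Minimization Monitor data extraction
export VRFYOZN="YES"   # Ozone Monitor data extraction
export VRFYRAD="YES"   # Radiance Monitor data extraction
```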

93 changes: 54 additions & 39 deletions src/Minimization_Monitor/README
README MinMon package

The Minimization Monitor (MinMon) provides a means to visualize and report
on the performance of the GSI minimization function. The package
is supported on wcoss2, hera, orion, cheyenne, jet, and s4 machines.

The package is organized in two main processes: data_extract and image_gen
(image generation). The data extract piece is now located in the
global-workflow repository and must be run as part of the vrfy job. To
run it ensure your expdir/*/config.vrfy file includes this line:

export VRFYMIN="YES"


To use the package:

1. Run GSI-monitor/ush/build.sh. This builds all necessary executables.

2. The GSI-monitor sets default values for necessary storage, work, and log file
locations in GSI-monitor/parm/Mon_config. If you want to override the defaults
the important settings are:

tankdir -- the location for extracted data storage
ptmp -- log file location
stmp -- work space

Note that these locations are set for each machine.
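As an illustration, overriding those three locations might look like this (the paths below are placeholders, not the shipped per-machine defaults in GSI-monitor/parm/Mon_config):

```shell
# Hypothetical overrides of the Mon_config locations; replace the
# placeholder paths with storage space you own on your machine.
tankdir=/noscrub/${USER:-me}/nbns     # extracted data storage
ptmp=/ptmp/${USER:-me}/monitor_logs   # log file location
stmp=/stmp/${USER:-me}/monitor_work   # work space
```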

3. Once you've run an experiment using global-workflow, extracted data should be in
your comrot/$PSLOT directory. You can leave the data there or move it to your
$TANKDIR. If you leave it in place you will have to specify the location for the
image generation and web site create scripts below. If you would like to move the
data to your $TANKDIR location use this script:

GSI-monitor/src/Minimization_Monitor/data_extract/ush/MinMon_CP.sh

4. There is no automatic web site generation script available (yet). The necessary files
to construct a web site are located in GSI-monitor/src/Minimization_Monitor/image_gen/html.
If you have any questions about this please contact me (edward.safford@noaa.gov).

5. Run the image generation. Navigate to GSI-monitor/src/Minimization_Monitor/image_gen/ush
and run:

./MinMon_Plt.sh suffix -p|--pdate -r|--run -n|--ncyc -t|--tank

suffix $NET value or the name of your parallel.
-p|--pdate Cycle time for which you wish to generate images.
It must be in YYYYMMDDHH format.
-r|--run $RUN value -- gdas (default) or gfs.
-n|--ncyc Number of cycles to be used in time-series plots. If not
specified the default value of 120 cycles is used.
-t|--tank Location of the extracted data files. This is likely to be your
comrot/$PSLOT directory. This is only needed if your extraction
was via global-workflow and data has NOT been copied to $TANKDIR.
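Since a malformed --pdate is an easy mistake, the format can be checked before invoking the script. This helper is illustrative only (valid_pdate is our name, not part of the package):

```shell
# Hypothetical helper: verify a cycle time is a plausible YYYYMMDDHH
# string (e.g. 2023050400) before passing it to MinMon_Plt.sh --pdate.
valid_pdate() {
  case "$1" in
    [0-9][0-9][0-9][0-9][0-1][0-9][0-3][0-9][0-2][0-9]) return 0 ;;
    *) return 1 ;;
  esac
}

valid_pdate 2023050400 && echo "pdate ok"
valid_pdate 2023-05-04 || echo "pdate rejected"
```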

6. If you're running on wcoss2 MinMon_Plt.sh will move the generated image files to the web
server (emcrzdm), provided you have password free access set up for your user account.

On all other machines you will have to manually move files from your $TANKDIR/imgn/$NET/$RUN/minmon
directory to the server.



If you encounter problems please send me an email and I'll be glad to help:
edward.safford@noaa.gov


68 changes: 68 additions & 0 deletions src/Ozone_Monitor/README
README OznMon package

The OznMon (ozone monitoring) package can be used to extract information
from ozone diagnostic files and generate image plots to visualize the results.
The package also may optionally perform data validation and error checking.
The package is supported on wcoss2, hera, orion, cheyenne, jet, and s4
machines.

The package is organized in two processes, the data_extract and image_gen
(image generation). There is also an nwprod directory, which contains the lower
level components of the data_extract portion. The J-job, scripts, and ush scripts
which used to be in the nwprod directory have been moved to the global-workflow
repository and must be run as part of the vrfy job in global-workflow.


To use the package:

1. Run GSI-monitor/ush/build.sh. This builds all necessary executables.

2. The GSI-monitor sets default values for necessary storage, work, and log file
locations in GSI-monitor/parm/Mon_config. If you want to override the defaults
the important settings are:

tankdir -- the location for extracted data storage
ptmp -- log file location
stmp -- work space

Note that these locations are set for each machine.

3. To perform OznMon data extraction as part of the vrfy job in the global-workflow
make sure your expdir/*/config.vrfy file contains this line:

export VRFYOZN="YES"

The extracted data should be in your comrot/$PSLOT directory.

4. There is no automatic web site generation script available (yet). The necessary files
to construct a web site are located in GSI-monitor/src/Ozone_Monitor/image_gen/html.
If you have any questions about this please contact me (edward.safford@noaa.gov).

5. Run the image generation. Navigate to GSI-monitor/src/Ozone_Monitor/image_gen/ush
and run:

./OznMon_Plt.sh suffix -p|--pdate -r|--run -n|--ncyc -t|--tank

suffix $NET value or the name of your parallel.
-p|--pdate Cycle time for which you wish to generate images.
It must be in YYYYMMDDHH format.
-r|--run $RUN value -- gdas (default) or gfs.
-n|--ncyc Number of cycles to be used in time-series plots. If not
specified the default value of 120 cycles is used.
-t|--tank Location of the extracted data files. This is likely to be your
comrot/$PSLOT directory. This is only needed if your extraction
was via global-workflow and NOT copied to $TANKDIR.
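The -n|--ncyc window ends at --pdate and reaches back ncyc-1 cycles, with gdas/gfs cycles six hours apart. That arithmetic can be sketched as follows (assumes bash and GNU date; first_cycle is our name, not a package function):

```shell
# Print the earliest cycle included in an ncyc-cycle time-series
# window ending at the given YYYYMMDDHH cycle (6-hourly cycles).
first_cycle() {
  pdate=$1
  ncyc=$2
  hours=$(( (ncyc - 1) * 6 ))
  date -ud "${pdate:0:8} ${pdate:8:2}:00 ${hours} hours ago" +%Y%m%d%H
}

first_cycle 2023050400 120   # default 120-cycle window
```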


6. Move the data and html files to the web server (emcrzdm). If you're on wcoss2 and have
password free access to the web server set up for your user account, run OznMon_Transfer.sh to
push the data files to the server.

On all other machines you will have to manually move files from your $TANKDIR/imgn/$NET/$RUN/oznmon
directory to the server.



If you encounter problems please send me an email and I'll be glad to help:
edward.safford@noaa.gov

69 changes: 26 additions & 43 deletions src/Radiance_Monitor/README
README RadMon package

The RadMon (radiance monitoring) package can be used to extract data from
radiance diagnostic files and visualize the results by plotting the data using
javascript or optionally GrADS. The package also may optionally perform data
validation and error checking. The package is supported on wcoss2, hera,
orion, cheyenne, jet, and s4 machines.

The package is organized in two main processes, data_extract and image_gen
(image generation). There is also an nwprod directory, which contains the lower
level components of the data_extract portion. The J-Jobs, scripts, and ush scripts
which used to be in the nwprod directory have been moved to the global-workflow
repository and must be run as part of the vrfy job in global-workflow.

The parm directory contains configuration files used by the entire package.
The data_extract and image_gen directories also have parm directories,
containing configuration files specific to the data extraction and image generation
functions.

To use the package:

1. Run GSI-monitor/ush/build.sh. This builds all necessary executables.

tankdir -- the location for extracted data storage
ptmp -- log file location
stmp -- work space

Note that these locations are set for each machine.

3. RadMon data extraction runs as part of the vrfy job in global-workflow.
Make sure your expdir/*/config.vrfy file contains this line:

export VRFYRAD="YES"

The extracted data should be in your comrot/$PSLOT directory. You can leave the
data there or move it to your $TANKDIR. If you leave it in place you will have
to specify the location for the image generation and web site generation scripts
below. If you would like to move the data to your $TANKDIR location run:

./GSI-monitor/src/Radiance_Monitor/data_extract/ush/RadMon_CP_glb.sh

4. Navigate to GSI-monitor/src/Radiance_Monitor/image_gen/html and run Install_html.sh.
This will build and customize the files for a web site using the available
satellite/instrument sources. If you didn't move data to your $TANKDIR then use the
-t|--tank argument to specify the data location in comrot/$PSLOT instead of the default
$TANKDIR.

5. Run the image generation. Navigate to GSI-monitor/src/Radiance_Monitor/image_gen/ush
and run:

./RadMon_IG_glb.sh suffix -p|--pdate -r|--run -n|--ncyc -t|--tank

suffix $NET value or the name of your parallel.
-p|--pdate Cycle time for which you wish to generate images.
It must be in YYYYMMDDHH format.
-r|--run $RUN value -- gdas (default) or gfs.
-n|--ncyc Number of cycles to be used in time-series plots. If not
specified the default value of 120 cycles is used.
-t|--tank Location of the extracted data files. This is likely to be your
comrot/$PSLOT directory. This is only needed if your extraction
was via global-workflow and NOT copied to $TANKDIR.
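The short|long flag pairs above can be handled with an ordinary while/case loop. The following is a minimal sketch of that pattern (parse_args and its variable names are ours, not RadMon_IG_glb.sh internals):

```shell
# Illustrative parser for the flag style used by the monitor scripts.
parse_args() {
  suffix=$1; shift             # first positional arg: $NET or parallel name
  run=gdas; ncyc=120           # documented defaults
  pdate=""; tank=""
  while [ $# -gt 0 ]; do
    case "$1" in
      -p|--pdate) pdate=$2; shift 2 ;;
      -r|--run)   run=$2;   shift 2 ;;
      -n|--ncyc)  ncyc=$2;  shift 2 ;;
      -t|--tank)  tank=$2;  shift 2 ;;
      *) echo "unknown option: $1" >&2; shift ;;
    esac
  done
}

parse_args myexp --pdate 2023050400 -n 30
echo "$suffix $pdate $run $ncyc"   # prints: myexp 2023050400 gdas 30
```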

6. Move the data and html files to the web server (emcrzdm). If you're on wcoss2 and
have set up password free access to the web server for your account, RadMon_IG_glb.sh will
queue the transfer script and move the files to the server. Alternately the transfer
script, RunTransfer.sh, can be run from the command line.

