diff --git a/docs/source/clone.rst b/docs/source/clone.rst
index e261cea00d..b9edc9ef3e 100644
--- a/docs/source/clone.rst
+++ b/docs/source/clone.rst
@@ -6,9 +6,11 @@ Clone and build Global Workflow
 Quick Instructions
 ^^^^^^^^^^^^^^^^^^

-Quick clone/build/link instructions (more detailed instructions below). Note, Here we are making the assumption that you are using the workflow to run an experiment and so are working from the authoritative repository. If you are going to be a developer then follow the instructions in :doc: `development.rst`. Once you do that you can follow the instructions here with the only thing different will be the repository you are cloning from.
+Quick clone/build/link instructions (more detailed instructions below). Note: here we assume that you are using the workflow to run an experiment and so are working from the authoritative repository. If you are using a development branch, follow the instructions in :doc:`development.rst` first; you can then follow the instructions here, the only difference being the repository/fork you are cloning from.

-For forecast-only (coupled or uncoupled)::
+For forecast-only (coupled or uncoupled):
+
+::

    git clone https://github.com/NOAA-EMC/global-workflow.git
    cd global-workflow/sorc
@@ -16,7 +18,9 @@ For forecast-only (coupled or uncoupled)::
    ./build_all.sh
    ./link_workflow.sh

-For cycled (GSI)::
+For cycled (w/ data assimilation):
+
+::

    git clone https://github.com/NOAA-EMC/global-workflow.git
    cd global-workflow/sorc
@@ -32,31 +36,43 @@ Clone workflow and component repositories
Workflow
********

-https method::
+https method:
+
+::

    git clone https://github.com/NOAA-EMC/global-workflow.git

-ssh method (using a password protected SSH key)::
+ssh method (using a password protected SSH key):
+
+::

    git clone git@github.com:NOAA-EMC/global-workflow.git

Note: when using ssh methods you need to make sure that your GitHub account is configured for the computer from which you are accessing the repository (See `this link `_)

-Check what you just cloned (by default you will have only the develop branch)::
+Check what you just cloned (by default you will have only the develop branch):
+
+::

    cd global-workflow
    git branch
    * develop

-You now have a cloned copy of the global-workflow git repository. To checkout a branch or tag in your clone::
+You now have a cloned copy of the global-workflow git repository. To check out a branch or tag in your clone:
+
+::

    git checkout BRANCH_NAME

-Note: Branch must already exist. If it does not you need to make a new branch using the ``-b`` flag::
+Note: The branch must already exist. If it does not, you need to create a new branch using the ``-b`` flag:
+
+::

    git checkout -b BRANCH_NAME

-The ``checkout`` command will checkout BRANCH_NAME and switch your clone to that branch. Example::
+The ``checkout`` command will check out BRANCH_NAME and switch your clone to that branch. Example:
+
+::

    git checkout my_branch
    git branch
@@ -67,27 +83,35 @@ The ``checkout`` command will checkout BRANCH_NAME and switch your clone to that

Components
**********

-Once you have cloned the workflow repository it's time to checkout/clone its components. The components will be checked out under the /sorc folder via a script called checkout.sh. Run the script with no arguments for forecast-only::
+Once you have cloned the workflow repository it's time to check out/clone its components. The components will be checked out under the ``/sorc`` folder via a script called checkout.sh. Run the script with no arguments for forecast-only:
+
+::

    cd sorc
    ./checkout.sh

-Or with the `-g` switch to include GSI for cycling::
+Or with the ``-g`` switch to include data assimilation (GSI) for cycling:
+
+::

    cd sorc
    ./checkout.sh -g

-If wishing to run with the operational GTG UPP and WAFS (only for select users) provide the -o flag with checkout.sh::
+If you wish to run with the operational GTG UPP and WAFS (only for select users), provide the ``-o`` flag to checkout.sh:
+
+::

    ./checkout.sh -o

-Each component cloned via checkout.sh will have a log (checkout-COMPONENT.log). Check the screen output and logs for clone errors.
+Each component cloned via checkout.sh will have a log (``/sorc/logs/checkout-COMPONENT.log``). Check the screen output and logs for clone errors.

^^^^^^^^^^^^^^^^
Build components
^^^^^^^^^^^^^^^^

-Under the /sorc folder is a script to build all components called ``build_all.sh``. After running checkout.sh run this script to build all components codes::
+Under the ``/sorc`` folder is a script to build all components called ``build_all.sh``. After running checkout.sh, run this script to build all component codes:
+
+::

    ./build_all.sh [-a UFS_app][-c build_config][-h][-v]
    -a UFS_app:
@@ -103,17 +127,20 @@
 A partial build option is also available via two methods:

    a) modify gfs_build.cfg config file to disable/enable particular builds and then rerun build_all.sh

-   b) run individual build scripts also available in /sorc folder for each component or group of codes
+   b) run individual build scripts also available in the ``/sorc`` folder for each component or group of codes

^^^^^^^^^^^^^^^
Link components
^^^^^^^^^^^^^^^

-At runtime the global-workflow needs all pieces in place within the main superstructure. To establish this a link script is run to create symlinks from the top level folders down to component files checked out in /sorc folders.
+At runtime the global-workflow needs all pieces in place within the main superstructure. To establish this, a link script is run to create symlinks from the top level folders down to component files checked out in ``/sorc`` folders.
+
+After running the checkout and build scripts, run the link script:

-After running the checkout and build scripts run the link script::
+::

    ./link_workflow.sh [-o]
-   where:
-   -o: Run in operations (NCO) mode. This creates copies instead of using symlinks and is generally only used by NCO during installation into production.
+
+Where:
+   ``-o``: Run in operations (NCO) mode. This creates copies instead of using symlinks and is generally only used by NCO during installation into production.

diff --git a/docs/source/components.rst b/docs/source/components.rst
index 0f392fc9b5..8ae6d2e774 100644
--- a/docs/source/components.rst
+++ b/docs/source/components.rst
@@ -13,7 +13,7 @@ The major components of the system are:
 * Post-processing
 * Verification

-The Global Workflow repository contains the workflow layer and, after running the checkout script, the code and scripts for the analysis, forecast, and post-processing components. Any non-workflow component is known as a sub-module. All of the sub-modules of the system reside in their respective repositories on GitHub. The global-workflow sub-modules are obtained by running the checkout script found under the /sorc folder.
+The Global Workflow repository contains the workflow and script layers. After running the checkout script, the code and additional offline scripts for the analysis, forecast, and post-processing components will be present. Any non-workflow component is known as a sub-module. All of the sub-modules of the system reside in their respective repositories on GitHub. The global-workflow sub-modules are obtained by running the checkout script found under the ``/sorc`` folder.
======================
Component repositories
======================

@@ -21,18 +21,18 @@ Component repositories

 Components checked out via sorc/checkout.sh:

+* **GFS UTILS** (https://github.com/ufs-community/gfs_utils): Utility codes needed by Global Workflow to run the GFS configuration
 * **UFS-Weather-Model** (https://github.com/ufs-community/ufs-weather-model): This is the core model used by the Global-Workflow to provide forecasts. The UFS-weather-model repository is an umbrella repository consisting of coupled earth system components that are all checked out when we check out the code at the top level of the repository.
 * **GSI** (https://github.com/NOAA-EMC/GSI): This is the core code base for atmospheric Data Assimilation
-* **GSI UTILS** (https://github.com/NOAA-EMC/GSI-UTILS): Utility codes needed by GSI to create analysis
+* **GSI UTILS** (https://github.com/NOAA-EMC/GSI-Utils): Utility codes needed by GSI to create the analysis
 * **GSI Monitor** (https://github.com/NOAA-EMC/GSI-Monitor): These tools monitor the GSI package's data assimilation, detecting and reporting missing data sources, low observation counts, and high penalty values
 * **GLDAS** (https://github.com/NOAA-EMC/GLDAS): Code base for Land Data Assimilation
 * **GDAS** (https://github.com/NOAA-EMC/GDASApp): JEDI-based Data Assimilation system. This system is currently being developed for marine Data Assimilation and in time will replace GSI for atmospheric data assimilation as well
 * **UFS UTILS** (https://github.com/ufs-community/UFS_UTILS): Utility codes needed for UFS-weather-model
-* **GFS UTILS** (https://github.com/ufs-community/gfs_utils): Utility codes needed by Global Workflow to run the GFS configuration
 * **Verif global** (https://github.com/NOAA-EMC/EMC_verif-global): Verification package to evaluate GFS parallels. It uses MET and METplus. At this moment the verification package is limited to providing atmospheric metrics only
 * **GFS WAFS** (https://github.com/NOAA-EMC/EMC_gfs_wafs): Additional post processing products for aircraft

-Note, when running the system in forecast mode only the Data Assimilation conmponents are not needed and are hence not checked out.
+Note: when running the system in forecast-only mode, the Data Assimilation components are not needed and are hence not checked out.

=====================
External dependencies
=====================

@@ -55,7 +55,7 @@ Observation data (OBSPROC/prep)
Data
****

-Observation data, also known as dump data, is prepared in production and then archived in a global dump archive (GDA) for use by users when running cycled experiment.
+Observation data, also known as dump data, is prepared in production and then archived in a global dump archive (GDA) for use by users when running cycled experiments. The GDA (identified as ``$DMPDIR`` in the workflow) is available on supported platforms and the workflow system knows where to find the data.

 * Hera: /scratch1/NCEPDEV/global/glopara/dump
 * Orion: /work/noaa/rstprod/dump

@@ -68,23 +68,25 @@ Global Dump Archive Structure

 The global dump archive (GDA) mimics the structure of its production source: ``DMPDIR/CDUMP.PDY/[CC/atmos/]FILES``

-The ``CDUMP`` is either gdas, gfs, or rtofs. All three contain production output for each day (``PDY``). The gdas and gfs folders are further broken into cycle (``CC``) and component (atmos).
+The ``CDUMP`` is either gdas, gfs, or rtofs. All three contain production output for each day (``PDY``). The gdas and gfs folders are further broken into cycle (``CC``) and component (``atmos``).

 The GDA also contains special versions of some datasets and experimental data that is being evaluated ahead of implementation into production. The following subfolder suffixes exist:

+--------+------------------------------------------------------------------------------------------------------+
-| Suffix | What                                                                                                 |
+| SUFFIX | WHAT                                                                                                 |
+--------+------------------------------------------------------------------------------------------------------+
-| nr     | Non-restricted versions of restricted files in production.                                          |
+| nr     | Non-restricted versions of restricted files in production. Produced in production. Restricted data   |
+|        | is fully stripped from files. These files remain as is.                                              |
+--------+------------------------------------------------------------------------------------------------------+
 | ur     | Un-restricted versions of restricted files in production. Produced and archived on a 48-hour delay.  |
+|        | Some restricted datasets are unrestricted. Data amounts: restricted > un-restricted > non-restricted |
+--------+------------------------------------------------------------------------------------------------------+
 | x      | Experimental global datasets being evaluated for production. Dates and types vary depending on       |
 |        | upcoming global upgrades.                                                                            |
+--------+------------------------------------------------------------------------------------------------------+
-| y      | Similar to "x" but only used when there is a duplicate experimental file that is in the x subfolder  |
-|        | with the same name. These files will be different from both the production versions                  |
-|        | (if that exists already) and the x versions. This suffix is rarely used.                             |
+| y      | Similar to "x" but only used when there is a duplicate experimental file in the x subfolder with the |
+|        | same name. These files will be different from both the production versions (if that exists already)  |
+|        | and the x versions. This suffix is rarely used.                                                      |
+--------+------------------------------------------------------------------------------------------------------+
 | p      | Pre-production copy of full dump dataset, as produced by NCO during final 30-day parallel ahead of   |
 |        | implementation. Not always archived.                                                                 |
+--------+------------------------------------------------------------------------------------------------------+

@@ -94,9 +96,9 @@
 Data processing
 ***************

-Upstream of the global-workflow is the collection, quality control, and packaging of observed weather. The handling of that data is done by the OBSPROC group codes and scripts. The global-workflow uses two packages from OBSPROC to run its prep step to prepare observation data for use by the analysis system:
+Upstream of the global-workflow is the collection, quality control, and packaging of observed weather data. The handling of that data is done by the OBSPROC group codes and scripts. The global-workflow uses two packages from OBSPROC to run its prep step to prepare observation (dump) data for use by the analysis system:

 1. https://github.com/NOAA-EMC/obsproc
 2. https://github.com/NOAA-EMC/prepobs
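For example, to see which obsproc and prepobs versions a particular workflow clone is pinned to, a rough sketch (illustrative only; the exact files holding the version pins vary between workflow versions):

::

    # Illustrative sketch: search a clone for obsproc/prepobs version pins.
    # Depending on the workflow version these may live in the checkout
    # script, the version files, or the configs.
    cd global-workflow
    grep -ri "obsproc\|prepobs" sorc/checkout.sh parm/config 2>/dev/null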
-Both package versions and locations on support platforms are set in the global-workflow system configs.
+Package versions and locations on supported platforms are set in the global-workflow system configs, modulefiles, and version files.

diff --git a/docs/source/development.rst b/docs/source/development.rst
index 6c2c0ce4fb..f54c131447 100644
--- a/docs/source/development.rst
+++ b/docs/source/development.rst
@@ -21,7 +21,7 @@ Where to do development?

 * In authoritative (main) repository:

-   - Work for the upcoming implementation (who: members of global-workflow-developers team)
+   - Work for upcoming implementation (who: members of global-workflow-developers team)
    - Major new features or port work (who: generally code managers and/or members of global-workflow-developers team)

 * In a fork:

@@ -38,7 +38,7 @@ Protected branches

 The following global-workflow branches are protected by the code management team:

 * develop (HEAD)
-* operations (kept aligned with current production)
+* dev/gfs.v16 (kept aligned with current production; also ingests bug fixes and updates between release branches)

 These protected branches require the following to accept changes:

@@ -58,8 +58,8 @@ The following steps should be followed in order to make changes to the develop b

 #. Issue - Open an issue to document changes. Reference this issue in commits to your branches (e.g. ``git commit -m "Issue #23 - blah changes for what-not code"``) Click `here `__ to open a new global-workflow issue.
 #. GitFlow - Follow `GitFlow `_ procedures for development (branch names, forking vs branching, etc.). Read more `here `__ about GitFlow at EMC.
 #. To fork or not to fork? - If not working within authoritative repository create a fork of the authoritative repository. Read more `here `__ about forking in GitHub.
-   #. Branch - Create branch in either authoritative repository or fork of authoritative repository. See the `Where to do development? `_ section for how to determine where. Follow Gitflow conventions when creating branch.
-   #. Development - Perform and test changes in branch. Document work in issue and mention issue number in commit messages to link your work to the issue See `Commit Messages `_ section below. Depending on changes the code manager may request or perform additional pre-commit tests.
+   #. Branch - Create branch in either authoritative repository or fork of authoritative repository. See the `Where to do development? `_ section for how to determine where. Follow GitFlow conventions when creating branch.
+   #. Development - Perform and test changes in branch. Document work in issue and mention issue number in commit messages to link your work to the issue. See `Commit Messages `_ section below. Depending on the changes, the code manager may request or perform additional pre-commit tests.
 #. Pull request - When ready to merge changes back to develop branch, the lead developer should initiate a pull request (PR) of your branch (either fork or not) into the develop branch. Read `here `__ about pull requests in GitHub. Provide some information about the PR in the proper field, add at least one reviewer to the PR and assign the PR to a code manager.
 #. Complete - When review and testing is complete the code manager will complete the pull request and subsequent merge/commit.
 #. Cleanup - When complete the lead developer should delete the branch and close the issue. "Closing keywords" can be used in the PR to automatically close associated issues.

@@ -94,6 +94,8 @@ Commit message standards

 * The final line of the commit message should include tags to relevant issues (e.g. ``Refs: #217, #300``)
Here is the example commit message from the article linked above; it includes descriptions of what would be in each part of the commit message for guidance:
+
+::

    Summarize changes in around 50 characters or less
@@ -118,12 +120,12 @@ Here is the example commit message from the article linked above; it includes de
    vary here

    If you use an issue tracker, put references to them at the bottom,
-   like this::
+   like this:

        Resolves: #123
        See also: #456, #789

-A detailed commit message is very useful for documenting changes
+A detailed commit message is very useful for documenting changes.

 .. _sync:

How to sync fork with the authoritative repository
==================================================

-As development in the main authoritative repository moves forward you will need to sync your fork's branches to stay up-to-date. Below is an example of how to sync your fork's copy of a branch with the authoritative repository copy. The branch name for the example will be "feature/new_thing". Click `here `__ for documentation on syncing forks.
+As development in the main authoritative repository moves forward you will need to sync your fork branches to stay up-to-date. Below is an example of how to sync your fork copy of a branch with the authoritative repository copy. The branch name for the example will be "feature/new_thing". Click `here `__ for documentation on syncing forks.

-1. Clone your fork and checkout branch that needs syncing
+1. Clone your fork and check out the branch that needs syncing:

 ::

@@ -141,25 +143,25 @@ As development in the main authoritative repository moves forward you will need
    cd fork
    git checkout feature/my_new_thing

-2. Add upstream info to your clone so it knows where to merge from. The term "upstream" refers to the authoritative repository from which the fork was create
+2. Add upstream info to your clone so it knows where to merge from. The term "upstream" refers to the authoritative repository from which the fork was created.

 ::

    git remote add upstream https://github.com/NOAA-EMC/global-workflow.git

-3. Fetch upstream information into clone
+3. Fetch upstream information into clone:

 ::

    git fetch upstream

-Later on you can update your fork's remote information by doing the following command
+Later on you can update your fork remote information by running the following command:

 ::

    git remote update

-4. Merge upstream feature/other_new_thing into your branch
+4. Merge upstream ``feature/other_new_thing`` into your branch:

 ::

@@ -167,7 +169,7 @@ Later on you can update your fork remote information by doing the following co

 5. Resolve any conflicts and perform any needed "add"s or "commit"s for conflict resolution.

-6. Push the merged copy back up to your fork (origin)
+6. Push the merged copy back up to your fork (origin):

 ::

@@ -177,6 +179,8 @@ Later on you can update your fork remote information by doing the following co

 Done!

-Moving forward you'll want to perform the "remote update" command regularly to update the metadata for the remote/upstream repository in your fork (e.g. pull in metadata for branches made in auth repo after you forked it)::
+Moving forward you'll want to perform the "remote update" command regularly to update the metadata for the remote/upstream repository in your fork (e.g. pull in metadata for branches made in auth repo after you forked it).
+
+::

    git remote update

diff --git a/docs/source/hpc.rst b/docs/source/hpc.rst
index a92bc79e1a..7161e2b742 100644
--- a/docs/source/hpc.rst
+++ b/docs/source/hpc.rst
@@ -33,7 +33,7 @@ NOTE: Only non-restricted data is available on S4.

 To request rstprod access, do either a and/or b below:

-a) If you need restricted data access on WCOSS, fill out form here:
+a) If you need restricted data access on WCOSS2, read details about restricted data and fill out form here:

 https://www.nco.ncep.noaa.gov/sib/restricted_data/restricted_data_sib/

@@ -67,7 +67,7 @@ For any merge with multiple commits, a short synopsis of the merge should appear
Version
^^^^^^^

-It is advised to use Git v2+ when available. At the time of writing this documentation the swfault Git clients on the different machines were as noted in the table below. It is recommended that you check the default modules before loading recommneded ones:
+It is advised to use Git v2+ when available. At the time of writing this documentation the default Git clients on the different machines were as noted in the table below. It is recommended that you check the default modules before loading recommended ones:

+---------+----------+---------------------------------------+
| Machine | Default  | Recommended                           |
+---------+----------+---------------------------------------+
@@ -85,7 +85,9 @@ It is advised to use Git v2+ when available. At the time of writing this documen
Output format
^^^^^^^^^^^^^

-For proper display of Git command output (e.g. git branch and git diff) type the following once on both machines::
+For proper display of Git command output (e.g. git branch and git diff) type the following once per machine:
+
+::

    git config --global core.pager 'less -FRX'

diff --git a/docs/source/index.rst b/docs/source/index.rst
index 7a73e88c26..251077ad71 100644
--- a/docs/source/index.rst
+++ b/docs/source/index.rst
@@ -3,30 +3,30 @@ Global Workflow
 ###############

-**Global-workflow** is the end-to-end workflow designed to run global configurations of medium range weather forecasting for the UFS weather model. It supports both development and operational implementations. In its current format it supports the Global Forecast System (GFS) and the Gobal Ensemble Forecast System (GEFS) configurations
+**Global-workflow** is the end-to-end workflow designed to run global configurations of medium-range weather forecasting for the UFS weather model. It supports both development and operational implementations. In its current format it supports the Global Forecast System (GFS) and the Global Ensemble Forecast System (GEFS) configurations.

======
Status
======

-* State of develop (HEAD) branch: early GFSv17+ development
+* State of develop (HEAD) branch: GFSv17+ development

-* State of operations branch: GFS v16.3.5 `tag: [gfs.v16.3.5] `_
+* State of operations: GFS v16.3.5 `tag: [gfs.v16.3.5] `_

-==============
-Code managers:
-==============
+=============
+Code managers
+=============

* Kate Friedman - @KateFriedman-NOAA / kate.friedman@noaa.gov
* Walter Kolczynski - @WalterKolczynski-NOAA / walter.kolczynski@noaa.gov

-==============
-Announcements:
-==============
+=============
+Announcements
+=============

General updates: NOAA employees and affiliates can join the gfs-announce distribution list to get updates on the GFS and global-workflow. Contact Kate Friedman (kate.friedman@noaa.gov) and Walter Kolczynski (walter.kolczynski@noaa.gov) to get added to the list or removed from it.

-GitHub updates: Users should adjust their "Watch" settings for this repo so they receive notifications as they'd like to. Find the "Watch" or "Unwatch" button towards the top right of this page and click it to adjust how you watch this repo.
+GitHub updates: Users should adjust their "Watch" settings for this repo so they receive notifications as they'd like to. Find the "Watch" or "Unwatch" button towards the top right of the `authoritative global-workflow repository page `_ and click it to adjust how you watch the repo.
 .. toctree::
    :numbered:

diff --git a/docs/source/init.rst b/docs/source/init.rst
index e674260319..3b9811b500 100644
--- a/docs/source/init.rst
+++ b/docs/source/init.rst
@@ -29,13 +29,13 @@ Automated Generation
 Cycled mode
 ***********

-Not yet supported. See Manual Generation section below for how to create your ICs yourself (outside of workflow).
+Not yet supported. See :ref:`Manual Generation` section below for how to create your ICs yourself (outside of workflow).

*****************************
-Free-forecast mode (atm-only)
+Forecast-only mode (atm-only)
*****************************

-Free-forecast mode in global workflow includes ``getic`` and ``init`` jobs for the gfs suite. The ``getic`` job pulls inputs for ``chgres_cube`` (init job) or warm start ICs into your ``ROTDIR/COMROT``. The ``init`` job then ingests those files to produce initial conditions for your experiment.
+Forecast-only mode in global workflow includes ``getic`` and ``init`` jobs for the gfs suite. The ``getic`` job pulls inputs for ``chgres_cube`` (init job) or warm start ICs into your ``ROTDIR/COMROT``. The ``init`` job then ingests those files to produce initial conditions for your experiment.

 Users on machines without HPSS access (e.g. Orion) need to perform the ``getic`` step manually and stage inputs for the ``init`` job. The table below lists the needed files for ``init`` and where to place them in your ``ROTDIR``.

@@ -78,7 +78,7 @@ Operations/production output location on HPSS: /NCEPPROD/hpssprod/runhistory/rh

 For HPSS path, see retrospective table in :ref:`pre-production parallel section` below

*********************
-Free-forecast coupled
+Forecast-only coupled
*********************

 Coupled initial conditions are currently only generated offline and copied prior to the forecast run. Prototype initial conditions will automatically be used when setting up an experiment as an S2SW app; there is no need to do anything additional. Copies of initial conditions from the prototype runs are currently maintained on Hera, Orion, and WCOSS2. The locations used are determined by ``parm/config/config.coupled_ic``. If you need prototype ICs on another machine, please contact Walter (Walter.Kolczynski@noaa.gov).

@@ -95,20 +95,34 @@ Cold starts

 The following information is for users needing to generate initial conditions for a cycled experiment that will run at a different resolution or layer amount than the operational GFS (C768C384L127).

-The ``chgres_cube`` code is available from the `UFS_UTILS repository `_ on GitHub and can be used to convert GFS ICs to a different resolution or number of layers. Users may clone the develop/HEAD branch or the same version used by global-workflow develop (found in sorc/checkout.sh). The ``chgres_cube`` code/scripts currently support the following GFS inputs:
+The ``chgres_cube`` code is available from the `UFS_UTILS repository `_ on GitHub and can be used to convert GFS ICs to a different resolution or number of layers. Users may clone the develop/HEAD branch or the same version used by global-workflow develop (found in ``sorc/checkout.sh``). The ``chgres_cube`` code/scripts currently support the following GFS inputs:
 * pre-GFSv14
 * GFSv14
 * GFSv15
 * GFSv16

-Clone UFS_UTILS::
+Users can either use the copy of UFS_UTILS that is already cloned and built within their global-workflow clone or clone/build it separately:
+
+Within a build/linked global-workflow clone:
+
+::
+
+   cd sorc/ufs_utils.fd/util/gdas_init
+
+Clone and build separately:
+
+Clone UFS_UTILS:
+
+::

    git clone --recursive https://github.com/NOAA-EMC/UFS_UTILS.git

 Then switch to a different tag or use the default branch (develop).

-Build UFS_UTILS::
+Build UFS_UTILS:
+
+::

    sh build_all.sh
    cd fix
@@ -116,29 +130,33 @@

 where ``$MACHINE`` is ``wcoss2``, ``hera``, ``jet``, or ``orion``. Note: UFS-UTILS builds on Orion but due to the lack of HPSS access on Orion the ``gdas_init`` utility is not supported there.

-Configure your conversion::
+Configure your conversion:
+
+::

    cd util/gdas_init
    vi config

 Read the doc block at the top of the config and adjust the variables to meet your needs (e.g. ``yy, mm, dd, hh`` for ``SDATE``).

-Submit conversion script::`
+Submit conversion script:
+
+::

    ./driver.$MACHINE.sh

 where ``$MACHINE`` is currently ``wcoss2``, ``hera`` or ``jet``. Additional options will be available as support for other machines expands. Note: UFS-UTILS builds on Orion but due to lack of HPSS access there is no ``gdas_init`` driver for Orion nor support to pull initial conditions from HPSS for the ``gdas_init`` utility.

-3 small jobs will be submitted:
+Several small jobs will be submitted:

    - 1 job to pull inputs off HPSS
-   - 2 jobs to run ``chgres_cube`` (1 for deterministic/hires and 1 for each EnKF ensemble member)
+   - 1 or 2 jobs to run ``chgres_cube`` (1 for deterministic/hires and 1 for each EnKF ensemble member)

 The chgres jobs will have a dependency on the data-pull jobs and will wait to run until all data-pull jobs have completed.

 Check output:

-In the config you will have defined an output folder called ``$OUTDIR``. The converted output will be found there, including the needed abias and radstat initial condition files. The files will be in the needed directory structure for the global-workflow system, therefore a user can move the contents of their ``$OUTDIR`` directly into their ``$ROTDIR/$COMROT``.
+In the config you will have defined an output folder called ``$OUTDIR``. The converted output will be found there, including the needed abias and radstat initial condition files (if CDUMP=gdas). The files will be in the needed directory structure for the global-workflow system, so a user can move the contents of their ``$OUTDIR`` directly into their ``$ROTDIR/$COMROT``.

 Please report bugs to George Gayno (george.gayno@noaa.gov) and Kate Friedman (kate.friedman@noaa.gov).

@@ -231,9 +249,9 @@ And then on all platforms::

What files should you pull for starting a new experiment with warm starts from production?
------------------------------------------------------------------------------------------

-That depends on what mode you want to run -- free-forecast or cycled. Whichever mode navigate to the top of your ``COMROT`` and pull the entirety of the tarball(s) listed below for your mode. The files within the tarball are already in the ``$CDUMP.$PDY/$CYC`` folder format expected by the system.
+That depends on what mode you want to run -- forecast-only or cycled. Whichever mode, navigate to the top of your ``COMROT`` and pull the entirety of the tarball(s) listed below for your mode. The files within the tarball are already in the ``$CDUMP.$PDY/$CYC`` folder format expected by the system.
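For example, to extract one of the production tarballs listed below into the top of your ``COMROT`` on an HPSS-connected machine (an illustrative sketch; substitute real values for the YYYY/MM/DD/CC placeholders):

::

    # Sketch only: pull a production restart tarball with htar.
    # The path pattern is the one listed below for File #1.
    cd $COMROT
    module load hpss
    htar -xvf /NCEPPROD/hpssprod/runhistory/rhYYYY/YYYYMM/YYYYMMDD/com_gfs_prod_gfs.YYYYMMDD_CC.gfs_restart.tar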
-For free-forecast there are two tar balls to pull
+For forecast-only there are two tarballs to pull:

 1. File #1 (for starting cycle SDATE)::

    /NCEPPROD/hpssprod/runhistory/rhYYYY/YYYYMM/YYYYMMDD/com_gfs_prod_gfs.YYYYMMDD_CC.gfs_restart.tar

@@ -270,7 +288,7 @@ Recent pre-implementation parallel series was for GFS v16 (implemented March 202

 * **Where are these tarballs?** See below for the location on HPSS for each v16 pre-implementation parallel.
 * **What tarballs do I need to grab for my experiment?** Tarballs from two cycles are required. The tarballs are listed below, where $CDATE is your starting cycle and $GDATE is one cycle prior.

-   - Free-forecast
+   - Forecast-only
      + ../$CDATE/gfs_restarta.tar
      + ../$GDATE/gdas_restartb.tar
   - Cycled w/EnKF

diff --git a/docs/source/jobs.rst b/docs/source/jobs.rst
index 249f305b07..ffe000e6a9 100644
--- a/docs/source/jobs.rst
+++ b/docs/source/jobs.rst
@@ -6,13 +6,13 @@ GFS Configuration

 Schematic flow chart for GFS v16 in operations

-The sequence of jobs that are run for an end-to-end (DA+forecast+post processing+verification) GFS configuration using the Global Workflow is shown above. The system utilizes a collection of scripts that perform the tasks for each step.
+The sequence of jobs that are run for an end-to-end (analysis+forecast+post-processing+verification) GFS configuration using the Global Workflow is shown above. The system utilizes a collection of scripts that perform the tasks for each step.

-For any cycle the system consists of two phases -- the gdas phase which provides the initial guess fields, and the gfs phase which creates the initial conditions and forecast of the system. As with the operational system, the gdas runs for each cycle (00, 06, 12, and 18 UTC), however, to save time and space in experiments, the gfs (right side of the diagram) is initially setup to run for only the 00 UTC cycle. (See the "run GFS this cycle?" portion of the diagram) The option to run the GFS for all four cycles is available (see gfs_cyc variable in configuration file).
+For any cycle the system consists of two suites -- the "gdas" suite which provides the initial guess fields, and the "gfs" suite which creates the initial conditions and forecast of the system. As with the operational system, the gdas runs for each cycle (00, 06, 12, and 18 UTC); however, to save time and space in experiments, the gfs (right side of the diagram) is initially set up to run for only the 00 UTC cycle (see the "run GFS this cycle?" portion of the diagram). The option to run the GFS for all four cycles is available (see the ``gfs_cyc`` variable in the configuration file).

 An experimental run is different from operations in the following ways:

-* Workflow manager: operations utilizes `ecFlow `__, while development currently utilizes `ROCOTO `__. Note, experiments can also be run using ecFlow.
+* Workflow manager: operations utilizes `ecFlow `__, while development currently utilizes `ROCOTO `__. Note, experiments can also be run using ecFlow on platforms with ecFlow servers established.

 * Dump step is not run as it has already been completed during the real-time production runs and dump data is available in the global dump archive on supported machines.

@@ -28,7 +28,7 @@ Downstream jobs (e.g. awips, gempak, etc.) are not included in the diagram. Thos
Jobs in the GFS Configuration
=============================

+-------------------+-----------------------------------------------------------------------------------------------------------------------+
-| Job Name          | Purpose                                                                                                               |
+| JOB NAME          | PURPOSE                                                                                                               |
+-------------------+-----------------------------------------------------------------------------------------------------------------------+
 | anal              | Runs the analysis. 1) Runs the atmospheric analysis (global_gsi) to produce analysis increments; 2) Updates the       |
 |                   | surface guess file via global_cycle to create the surface analysis on tiles.                                         |
+-------------------+-----------------------------------------------------------------------------------------------------------------------+

diff --git a/docs/source/monitor_rocoto.rst b/docs/source/monitor_rocoto.rst
index d7126790b3..f6c820f832 100644
--- a/docs/source/monitor_rocoto.rst
+++ b/docs/source/monitor_rocoto.rst
@@ -11,39 +11,55 @@ Using command line

 You can use Rocoto commands with arguments to check the status of your experiment.

-Start or continue a run::
+Start or continue a run:
+
+::

    rocotorun -d /path/to/workflow/database/file -w /path/to/workflow/xml/file

-Check the status of the workflow::
+Check the status of the workflow:
+
+::

    rocotostat -d /path/to/workflow/database/file -w /path/to/workflow/xml/file [-c YYYYMMDDCCmm,[YYYYMMDDCCmm,...]] [-t taskname,[taskname,...]] [-s] [-T]

 Note: YYYYMMDDCCmm = YearMonthDayCycleMinute ...where mm/Minute is '00' for all cycles currently.

-Check the status of a job::
+Check the status of a job:
+
+::

    rocotocheck -d /path/to/workflow/database/file -w /path/to/workflow/xml/file -c YYYYMMDDCCmm -t taskname

-Force a task to run (ignores dependencies - USE CAREFULLY!)::
+Force a task to run (ignores dependencies - USE CAREFULLY!):
+
+::

    rocotoboot -d /path/to/workflow/database/file -w /path/to/workflow/xml/file -c YYYYMMDDCCmm -t taskname

-Rerun task(s)::
+Rerun task(s):
+
+::

    rocotorewind -d /path/to/workflow/database/file -w /path/to/workflow/xml/file -c YYYYMMDDCCmm -t taskname

+   (If the job is currently queued or running, Rocoto will kill it. Run rocotorun afterwards to fire off the rewound task.)
+
+Set a task to complete (overwrites current state):
+
-Set a task to complete (overwrites current state)::
+::

    rocotocomplete -d /path/to/workflow/database/file -w /path/to/workflow/xml/file -c YYYYMMDDCCmm -t taskname

+(Will not kill a queued or running job, only update its status.)
+
 Several dates and task names may be specified in the same command by adding more -c and -t options. However, lists are not allowed.

^^^^^^^^^^^^^^^^^
Use ROCOTO viewer
^^^^^^^^^^^^^^^^^

-An alternative approach is to use A GUI that was designed to assist with monitoring global workflow experiments that use ROCOTO. It can be found under the ush/rocoto folder in global-workflow.
+An alternative approach is to use a GUI that was designed to assist with monitoring global workflow experiments that use ROCOTO. It can be found under the ``workflow`` folder in global-workflow.

*****
Usage
*****

diff --git a/docs/source/run.rst b/docs/source/run.rst
index 9c5100dcaf..56728d3282 100644
--- a/docs/source/run.rst
+++ b/docs/source/run.rst
@@ -2,10 +2,9 @@ Run Global Workflow
 ###################

-Here we will show how you can run an experiment using the Global Workflow. The Global workflow is regularly evolving and the underlying UFS-weather-model that it drives can run many different configurations. So this part of the document will be regularly updated. The workflow as it is configured today can be run as forecast only or cycled (forecast+Data Assimilation). Since cycled mode requires a number of Data Assimilation supporting repositories to be checked out, the instructions for the two modes from initial checkout stage will be slightly different. Apart from this there is a third mode that is rarely used in development mode and is primarily for operational use. This mode switches on specialized post processing needed by the avaiation industry. Since the files associated with this mode are restricted, only few users will have need and/or ability to run in this mode.
+Here we will show how you can run an experiment using the Global Workflow. The Global Workflow is regularly evolving and the underlying UFS-weather-model that it drives can run many different configurations, so this part of the document will be updated regularly. The workflow as it is configured today can be run as forecast-only or cycled (forecast+Data Assimilation). Since cycled mode requires a number of Data Assimilation supporting repositories to be checked out, the instructions for the two modes differ slightly from the initial checkout stage onward. Apart from this, there is a third mode that is rarely used in development and is primarily for operational use. This mode switches on specialized post-processing needed by the aviation industry. Since the files associated with this mode are restricted, only select users will have the need and/or ability to run in this mode.

 .. toctree::
-   :hidden:

    clone.rst
    init.rst

diff --git a/docs/source/setup.rst b/docs/source/setup.rst
index 28ed697b0d..823ba4bb70 100644
--- a/docs/source/setup.rst
+++ b/docs/source/setup.rst
@@ -4,57 +4,51 @@ Experiment Setup

 Global workflow uses a set of scripts to help configure and set up the drivers (also referred to as Workflow Manager) that run the end-to-end system. While currently we use a `ROCOTO `__ based system and that is documented here, an `ecFlow `__ based system is also under development and will be introduced to the Global Workflow when it is mature. To run the setup scripts, you need to make sure to have a copy of ``python3`` with ``numpy`` available.
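A quick way to confirm a suitable python is present (a minimal sketch; any ``python3`` that provides ``numpy`` will work):

::

    # Should print a numpy version number rather than raise ImportError
    python3 -c "import numpy; print(numpy.__version__)"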
The easiest way to guarantee this is to load python from the `official hpc-stack installation `_ for the machine you are on:

-Hera::
-
-   module use -a /contrib/anaconda/modulefiles
-   module load anaconda/anaconda3-5.3.1
-
-Orion::
-
-   module load python/3.7.5
-
-WCOSS2::
-
-   module load python/3.8.6
-
-S4::
-
-   module load miniconda/3.8-s4
++------------+----------------------------------------------------------+
+| MACHINE    | PYTHON MODULE LOAD COMMAND(S)                            |
++------------+----------------------------------------------------------+
+| Hera       | ``module use -a /contrib/anaconda/modulefiles``          |
+|            | ``module load anaconda/anaconda3-5.3.1``                 |
++------------+----------------------------------------------------------+
+| Orion      | ``module load python/3.7.5``                             |
++------------+----------------------------------------------------------+
+| WCOSS2     | ``module load python/3.8.6``                             |
++------------+----------------------------------------------------------+
+| S4         | ``module load miniconda/3.8-s4``                         |
++------------+----------------------------------------------------------+

 If running with Rocoto make sure to have a Rocoto module loaded before running setup scripts:

-Hera::
-
-   module load rocoto/1.3.3
-
-Orion::
-
-   module load contrib
-   module load rocoto/1.3.3
-
-WCOSS2::
-
-   module use /apps/ops/test/nco/modulefiles/
-   module load core/rocoto/1.3.5
-
-S4::
-
-   module load rocoto/1.3.4
++------------+----------------------------------------------------------+
+| MACHINE    | ROCOTO MODULE LOAD COMMAND(S)                            |
++------------+----------------------------------------------------------+
+| Hera       | ``module load rocoto/1.3.3``                             |
++------------+----------------------------------------------------------+
+| Orion      | ``module load contrib``                                  |
+|            | ``module load rocoto/1.3.3``                             |
++------------+----------------------------------------------------------+
+| WCOSS2     | ``module use /apps/ops/test/nco/modulefiles/``           |
+|            | ``module load core/rocoto/1.3.5``                        |
++------------+----------------------------------------------------------+
+| S4         | ``module load rocoto/1.3.4``                             |
++------------+----------------------------------------------------------+

^^^^^^^^^^^^^^^^^^^^^^^^
-Free-forecast experiment
+Forecast-only experiment
^^^^^^^^^^^^^^^^^^^^^^^^

 Scripts that will be used:

-   * workflow/setup_expt.py
-   * workflow/setup_xml.py
+   * ``workflow/setup_expt.py``
+   * ``workflow/setup_xml.py``

***************************************
Step 1: Run experiment generator script
***************************************

-The following command examples include variables for reference but users should not use environmental variables but explicit values to submit the commands. Exporting variables like EXPDIR to your environment causes an error when the python scripts run. Please explicitly include the argument inputs when running both setup scripts::
+The following command examples include variables for reference, but users should pass explicit values rather than environment variables when submitting the commands; exporting variables like EXPDIR to your environment causes an error when the python scripts run. Please explicitly include the argument inputs when running both setup scripts:
+
+::

    cd workflow
    ./setup_expt.py forecast-only --idate $IDATE --edate $EDATE [--app $APP] [--start $START] [--gfs_cyc $GFS_CYC] [--resdet $RESDET]
@@ -63,7 +57,7 @@
 where:

 * ``forecast-only`` is the first positional argument that instructs the setup script to produce an experiment directory for forecast only experiments.
-   * $APP is the target application, one of:
+   * ``$APP`` is the target application, one of:

     - ATM: atmosphere-only [default]
     - ATMW: atm-wave
     - S2S: atm-ocean-ice
     - S2SW: atm-ocean-ice-wave
     - S2SWA: atm-ocean-ice-wave-aerosols

@@ -72,33 +66,39 @@ where:
-   * $START is the start type (warm or cold [default])
-   * $IDATE is the initial start date of your run (first cycle CDATE, YYYYMMDDCC)
-   * $EDATE is the ending date of your run (YYYYMMDDCC) and is the last cycle that will complete
-   * $PSLOT is the name of your experiment [default: test]
-   * $CONFIGDIR is the path to the /config folder under the copy of the system you're using [default: $TOP_OF_CLONE/parm/config/]
-   * $RESDET is the FV3 resolution (i.e. 768 for C768) [default: 384]
-   * $GFS_CYC is the forecast frequency (0 = none, 1 = 00z only [default], 2 = 00z & 12z, 4 = all cycles)
-   * $COMROT is the path to your experiment output directory. DO NOT include PSLOT folder at end of path, it’ll be built for you. [default: $HOME]
-   * $EXPDIR is the path to your experiment directory where your configs will be placed and where you will find your workflow monitoring files (i.e. rocoto database and xml file). DO NOT include PSLOT folder at end of path, it will be built for you. [default: $HOME]
-   * $ICSDIR is the path to the initial conditions. This is handled differently depending on whether $APP is S2S or not.
+   * ``$START`` is the start type (warm or cold [default])
+   * ``$IDATE`` is the initial start date of your run (first cycle CDATE, YYYYMMDDCC)
+   * ``$EDATE`` is the ending date of your run (YYYYMMDDCC) and is the last cycle that will complete
+   * ``$PSLOT`` is the name of your experiment [default: test]
+   * ``$CONFIGDIR`` is the path to the ``/config`` folder under the copy of the system you're using [default: $TOP_OF_CLONE/parm/config/]
+   * ``$RESDET`` is the FV3 resolution (i.e. 768 for C768) [default: 384]
+   * ``$GFS_CYC`` is the forecast frequency (0 = none, 1 = 00z only [default], 2 = 00z & 12z, 4 = all cycles)
+   * ``$COMROT`` is the path to your experiment output directory. DO NOT include the PSLOT folder at the end of the path; it will be built for you. [default: $HOME (do not use the default; home directories normally have limited space, so provide a path to a larger scratch area)]
+   * ``$EXPDIR`` is the path to your experiment directory where your configs will be placed and where you will find your workflow monitoring files (i.e. rocoto database and xml file). DO NOT include the PSLOT folder at the end of the path; it will be built for you. [default: $HOME]
+   * ``$ICSDIR`` is the path to the initial conditions. This is handled differently depending on whether ``$APP`` is S2S or not.
- - If $APP is ATM or ATMW, this setting is currently ignored - - If $APP is S2S or S2SW, ICs are copied from the central location to this location and the argument is required + - If ``$APP`` is ATM or ATMW, this setting is currently ignored + - If ``$APP`` is S2S or S2SW, ICs are copied from the central location to this location and the argument is required Examples: -Atm-only:: +Atm-only: + +:: cd workflow ./setup_expt.py forecast-only --pslot test --idate 2020010100 --edate 2020010118 --resdet 384 --gfs_cyc 4 --comrot /some_large_disk_area/Joe.Schmo/comrot --expdir /some_safe_disk_area/Joe.Schmo/expdir -Coupled:: +Coupled: + +:: cd workflow ./setup_expt.py forecast-only --app S2SW --pslot coupled_test --idate 2013040100 --edate 2013040100 --resdet 384 --comrot /some_large_disk_area/Joe.Schmo/comrot --expdir /some_safe_disk_area/Joe.Schmo/expdir --icsdir /some_large_disk_area/Joe.Schmo/icsdir -Coupled with aerosols:: +Coupled with aerosols: + +:: cd workflow ./setup_expt.py forecast-only --app S2SWA --pslot coupled_test --idate 2013040100 --edate 2013040100 --resdet 384 --comrot /some_large_disk_area/Joe.Schmo/comrot --expdir /some_safe_disk_area/Joe.Schmo/expdir --icsdir /some_large_disk_area/Joe.Schmo/icsdir @@ -118,26 +118,23 @@ Go to your EXPDIR and check/change the following variables within your config.ba * HPSS_PROJECT (project on HPSS if archiving) * ATARDIR (location on HPSS if archiving) -If you are using cycling, also change these: - - * imp_physics from 8 (Thompson) to 11 (GFDL) - * CCPP_SUITE to FV3_GFS_v16 (or another suite that uses GFDL) [#]_ - -.. [#] This is a temporary measure until cycling mode works with Thompson - Some of those variables will be found within a machine-specific if-block so make sure to change the correct ones for the machine you'll be running on. -Now is also the time to change any other variables/settings you wish to change in config.base or other configs. `Do that now.` Once done making changes to the configs in your EXPDIR go back to your clone to run the second setup script. See :doc: configure.rst for more information on configuring your run. +Now is also the time to change any other variables/settings you wish to change in config.base or other configs. `Do that now.` Once done making changes to the configs in your EXPDIR go back to your clone to run the second setup script. See :doc:configure.rst for more information on configuring your run. ************************************* Step 3: Run workflow generator script ************************************* -This step sets up the files needed by the Workflow Manager/Driver. At this moment only ROCOTO configurations are generated:: +This step sets up the files needed by the Workflow Manager/Driver. At this moment only ROCOTO configurations are generated: + +:: ./setup_xml.py $EXPDIR/$PSLOT -Example:: +Example: + +:: ./setup_xml.py /some_safe_disk_area/Joe.Schmo/expdir/test @@ -153,14 +150,16 @@ Cycled experiment Scripts that will be used: - * workflow/setup_expt.py - * workflow/setup_xml.py + * ``workflow/setup_expt.py`` + * ``workflow/setup_xml.py`` *************************************** Step 1) Run experiment generator script *************************************** -The following command examples include variables for reference but users should not use environmental variables but explicit values to submit the commands. Exporting variables like EXPDIR to your environment causes an error when the python scripts run. 
+The following command examples include variables for reference, but users should pass explicit values rather than environment variables when submitting the commands; exporting variables like EXPDIR to your environment causes an error when the python scripts run. Please explicitly include the argument inputs when running both setup scripts:
+
+::

    cd workflow
    ./setup_expt.py cycled --idate $IDATE --edate $EDATE [--app $APP] [--start $START] [--gfs_cyc $GFS_CYC]
@@ -170,33 +169,37 @@ where:

 * ``cycled`` is the first positional argument that instructs the setup script to produce an experiment directory for cycled experiments.
-   * $APP is the target application, one of[#]_:
+   * ``$APP`` is the target application, one of [#]_:

     - ATM: atmosphere-only [default]
     - ATMW: atm-wave

-   * $IDATE is the initial start date of your run (first cycle CDATE, YYYYMMDDCC)
-   * $EDATE is the ending date of your run (YYYYMMDDCC) and is the last cycle that will complete
-   * $START is the start type (warm or cold [default])
-   * $GFS_CYC is the forecast frequency (0 = none, 1 = 00z only [default], 2 = 00z & 12z, 4 = all cycles)
-   * $RESDET is the FV3 resolution of the deterministic forecast [default: 384]
-   * $RESENS is the FV3 resolution of the ensemble (EnKF) forecast [default: 192]
-   * $NENS is the number of ensemble members [default: 20]
-   * $CDUMP is the starting phase [default: gdas]
-   * $PSLOT is the name of your experiment [default: test]
-   * $CONFIGDIR is the path to the /config folder under the copy of the system you're using [default: $TOP_OF_CLONE/parm/config/]
-   * $COMROT is the path to your experiment output directory. DO NOT include PSLOT folder at end of path, it’ll be built for you. [default: $HOME]
-   * $EXPDIR is the path to your experiment directory where your configs will be placed and where you will find your workflow monitoring files (i.e. rocoto database and xml file). DO NOT include PSLOT folder at end of path, it will be built for you. [default: $HOME]
-   * $ICSDIR is the path to the ICs for your run if generated separately. [default: None]
+   * ``$IDATE`` is the initial start date of your run (first cycle CDATE, YYYYMMDDCC)
+   * ``$EDATE`` is the ending date of your run (YYYYMMDDCC) and is the last cycle that will complete
+   * ``$START`` is the start type (warm or cold [default])
+   * ``$GFS_CYC`` is the forecast frequency (0 = none, 1 = 00z only [default], 2 = 00z & 12z, 4 = all cycles)
+   * ``$RESDET`` is the FV3 resolution of the deterministic forecast [default: 384]
+   * ``$RESENS`` is the FV3 resolution of the ensemble (EnKF) forecast [default: 192]
+   * ``$NENS`` is the number of ensemble members [default: 20]
+   * ``$CDUMP`` is the starting phase [default: gdas]
+   * ``$PSLOT`` is the name of your experiment [default: test]
+   * ``$CONFIGDIR`` is the path to the config folder under the copy of the system you're using [default: $TOP_OF_CLONE/parm/config/]
+   * ``$COMROT`` is the path to your experiment output directory. DO NOT include the PSLOT folder at the end of the path; it will be built for you. [default: $HOME]
+   * ``$EXPDIR`` is the path to your experiment directory where your configs will be placed and where you will find your workflow monitoring files (i.e. rocoto database and xml file). DO NOT include the PSLOT folder at the end of the path; it will be built for you. [default: $HOME]
+   * ``$ICSDIR`` is the path to the ICs for your run if generated separately. [default: None]
 .. [#] More Coupled configurations in cycled mode are currently under development and not yet available

-Example::
+Example:
+
+::

    cd workflow
    ./setup_expt.py cycled --pslot test --configdir /home/Joe.Schmo/git/global-workflow/parm/config --idate 2020010100 --edate 2020010118 --comrot /some_large_disk_area/Joe.Schmo/comrot --expdir /some_safe_disk_area/Joe.Schmo/expdir --resdet 384 --resens 192 --nens 80 --gfs_cyc 4

-Example setup_expt.py on WCOSS_C::
+Example ``setup_expt.py`` on WCOSS_C:
+
+::

    SURGE-slogin1 > ./setup_expt.py cycled --pslot fv3demo --idate 2017073118 --edate 2017080106 --comrot /gpfs/hps2/ptmp/Joe.Schmo --expdir /gpfs/hps3/emc/global/noscrub/Joe.Schmo/para_gfs

    SDATE = 2017-07-31 18:00:00
@@ -208,7 +211,9 @@ Example setup_expt.py on WCOSS_C::

 The message about the config.base.default is telling you that you are free to delete it if you wish but it's not necessary to remove it. Your resulting config.base was generated from config.base.default and the default one is there for your information.

-What happens if I run setup_expt.py again for an experiment that already exists?::
+What happens if I run ``setup_expt.py`` again for an experiment that already exists?
+
+::

    SURGE-slogin1 > ./setup_expt.py forecast-only --pslot fv3demo --idate 2017073118 --edate 2017080106 --comrot /gpfs/hps2/ptmp/Joe.Schmo --expdir /gpfs/hps3/emc/global/noscrub/Joe.Schmo/para_gfs

@@ -222,7 +227,7 @@ What happens if I run setup_expt.py again for an experiment that already exists?

    DEFAULT: /gpfs/hps3/emc/global/noscrub/Joe.Schmo/para_gfs/fv3demo/config.base.default is for reference only.
    Please verify and delete the default file before proceeding.

-Your COMROT and EXPDIR will be deleted and remade. Be careful with this!
+Your ``COMROT`` and ``EXPDIR`` will be deleted and remade. Be careful with this!

****************************************
Step 2: Set user and experiment settings
****************************************

@@ -248,11 +253,15 @@ Now is also the time to change any other variables/settings you wish to change i

*************************************
Step 3: Run workflow generator script
*************************************

-This step sets up the files needed by the Workflow Manager/Driver. At this moment only ROCOTO configurations are generated::
+This step sets up the files needed by the Workflow Manager/Driver. At this moment only ROCOTO configurations are generated:
+
+::

    ./setup_xml.py $EXPDIR/$PSLOT

-Example::
+Example:
+
+::

    ./setup_xml.py /some_safe_disk_area/Joe.Schmo/expdir/test

@@ -260,5 +269,5 @@ Example::

****************************************
Step 4: Confirm files from setup scripts
****************************************

-You will now have a rocoto xml file in your EXPDIR ($PSLOT.xml) and a crontab file generated for your use. Rocoto uses CRON as the scheduler. If you do not have a crontab file you may not have had the rocoto module loaded. To fix this load a rocoto module and then rerun setup_xml.py script again. Follow directions for setting up the rocoto cron on the platform the experiment is going to run on.
+You will now have a rocoto xml file in your EXPDIR ($PSLOT.xml) and a crontab file generated for your use. Rocoto uses CRON as the scheduler. If you do not have a crontab file you may not have had the rocoto module loaded. To fix this, load a rocoto module and rerun the ``setup_xml.py`` script. Follow directions for setting up the rocoto cron on the platform the experiment is going to run on.
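As a quick sanity check (illustrative; the file names follow the $PSLOT conventions described above), list your experiment directory to confirm the generated files are present:

::

    ls $EXPDIR/$PSLOT
    # Expect your config.* files plus $PSLOT.xml and $PSLOT.crontab.
    # The rocoto database ($PSLOT.db) appears after the first rocotorun.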
diff --git a/docs/source/start.rst b/docs/source/start.rst
index 7775d2db0c..1a59e59838 100644
--- a/docs/source/start.rst
+++ b/docs/source/start.rst
@@ -34,7 +34,7 @@ or

    crontab $PSLOT.crontab

-**WARNING: ``crontab $PSLOT.crontab`` command will overwrite existing crontab file on your login node. If running multiple crons recommend editing crontab file with ``crontab -e`` command.**
+**WARNING**: The ``crontab $PSLOT.crontab`` command will overwrite the existing crontab file on your login node. If running multiple crons, we recommend editing the crontab file with the ``crontab -e`` command.

 Check your crontab settings::

diff --git a/docs/source/view.rst b/docs/source/view.rst
index 744b4525b1..3093755e9a 100644
--- a/docs/source/view.rst
+++ b/docs/source/view.rst
@@ -5,7 +5,7 @@ View Experiment output

 The output from your run will be found in the ``COMROT/ROTDIR`` you established. This is also where you placed your initial conditions. Within your ``COMROT`` you will have the following directory structure (based on the type of experiment you run):

^^^^^^^^^^^^^
-Free Forecast
+Forecast-only
^^^^^^^^^^^^^

::

@@ -21,7 +21,7 @@ Cycled

::

-   enkfgdas.YYYYMMDD/CC/atmos/mem###/ <- contains EnKF inputs/outputs for each cycle and each member
+   enkfgdas.YYYYMMDD/CC/mem###/atmos  <- contains EnKF inputs/outputs for each cycle and each member
    gdas.YYYYMMDD/CC/atmos             <- contains deterministic gdas inputs/outputs (atmosphere)
    gdas.YYYYMMDD/CC/wave              <- contains deterministic gdas inputs/outputs (wave)
    gfs.YYYYMMDD/CC/atmos              <- contains deterministic long forecast gfs inputs/outputs (atmosphere)
@@ -29,7 +29,9 @@ Cycled
    logs/                              <- logs for each cycle in the run
    vrfyarch/                          <- contains files related to verification and archival

-Here is an example ``COMROT`` for a cycled run as it may look several cycles in (note the archival steps remove older cycle folders as the run progresses)::
+Here is an example ``COMROT`` for a cycled run as it may look several cycles in (note the archival steps remove older cycle folders as the run progresses):
+
+::

    -bash-4.2$ ll /scratch1/NCEPDEV/stmp4/Joe.Schmo/comrot/testcyc192
    total 88