
Updating and testing RAVEN

Diego Mandelli edited this page Nov 28, 2020 · 6 revisions

RAVEN frequently updates with new features and optimizations. There are several options to keep up with RAVEN developments.

  • For users who want a reliable experience with minimal day-to-day changes, we recommend checking out the latest release version of RAVEN and updating to a new tagged version when desired features are released.
  • For users who want cutting-edge features as soon as they are made available, we recommend checking out the devel branch and updating frequently.

Regardless of which update path you choose, use the following procedure when updating RAVEN.

Updating RAVEN

To update RAVEN, run the following commands from within the root raven/ directory, replacing "devel" with the desired branch or tagged version ("devel" is used as the example here):

Download latest changes

git checkout devel
git pull
git submodule update
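The tag-based update path works the same way. The sketch below simulates it in a throwaway scratch repository rather than in RAVEN itself, since the tag name `v1.0` here is hypothetical; run `git tag` in your raven/ checkout to list the real tagged versions.

```shell
# Sketch in a scratch repository: checking out a tagged release instead of devel.
# The tag name "v1.0" is hypothetical; run `git tag` in raven/ to list real ones.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name  Demo
echo "release content" > file.txt
git add file.txt && git commit -qm "release"
git tag v1.0                        # tag the release commit
echo "devel content" > file.txt
git commit -qam "later devel work"
git checkout -q v1.0                # detached HEAD at the tagged release
cat file.txt                        # prints "release content"
```

After checking out a tag you are in a detached-HEAD state, which is fine for building and running; to return to tracking development, `git checkout devel`.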

Install latest changes

scripts/establish_conda_env.sh --install
./build_raven

Note that this assumes RAVEN has already been installed with the appropriate submodules (see Installation).

Note also that this update will use the conda definitions and RAVEN library environment names defined when you first installed RAVEN. These can be changed by following the instructions on the Installation page.
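For repeated updates, the sequence above can be collected into a small helper function. This is only a sketch: both arguments are placeholders you supply, and it assumes the submodules were set up at install time as described above.

```shell
# Sketch: the update steps above wrapped in one shell function.
# Both arguments are placeholders: $1 is your raven/ checkout, $2 a branch or tag.
update_raven() {
    cd "$1" || return 1                         # root raven/ directory
    git checkout "$2" &&                        # e.g. devel, or a tagged release
    git pull &&
    git submodule update &&
    scripts/establish_conda_env.sh --install &&
    ./build_raven
}
# Usage (not run here): update_raven ~/projects/raven devel
```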

Troubleshooting

Below are some issues that can arise when updating, and ways to resolve them.

Local updates to RAVEN files

If, during git checkout or git pull, you are told that local changes would be overwritten by the action, then those local changes need to be removed before the update can be performed. If you want to save local changes, they should be placed in a different Git branch, not on the protected "devel" or tagged branches.

To remove local changes to a file, navigate to the location of the file and run git checkout on that filename. For example, if I made local changes to raven/framework/raven_qsub_command.sh that need to be reverted, I can do the following, starting from the root raven/ directory:

cd framework
git checkout raven_qsub_command.sh

After this, I can try the git checkout or git pull command again.
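The same revert can be seen end to end in a throwaway scratch repository; the file and directory names below are illustrative only, not RAVEN's actual layout.

```shell
# Sketch in a scratch repository: discarding an uncommitted local change
# with `git checkout -- <file>`. Names here are illustrative only.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name  Demo
echo "original contents" > raven_qsub_command.sh
git add . && git commit -qm "initial"
echo "local edit" > raven_qsub_command.sh    # an uncommitted local change
git checkout -- raven_qsub_command.sh        # revert to the committed version
cat raven_qsub_command.sh                    # prints "original contents"
```

With the working tree clean again, `git checkout devel` or `git pull` will no longer complain about local changes being overwritten.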

build_raven fails

Occasionally, when running build_raven, it will fail for one of a variety of reasons. Often this can be solved by running ./clean_raven from the root raven/ directory, then running ./build_raven again.

Testing

To test a RAVEN installation, navigate to the root raven/ directory and run

./run_tests -j2

replacing 2 with the desired number of tests to run in parallel. We strongly recommend against using all of your cores: some tests also run parallel sampling, and overloading the cores can greatly slow down the testing process.
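As a rough heuristic, a -j value of about half the available cores usually works well. The sketch below computes one; it assumes `nproc` (Linux) or `sysctl -n hw.ncpu` (macOS) is available.

```shell
# Sketch: pick a parallelism level of roughly half the available cores,
# since some tests run parallel sampling themselves. Assumes `nproc` (Linux)
# or `sysctl -n hw.ncpu` (macOS) is available; falls back to 2 otherwise.
cores=$(nproc 2>/dev/null || sysctl -n hw.ncpu 2>/dev/null || echo 2)
jobs=$(( cores / 2 ))
[ "$jobs" -lt 1 ] && jobs=1
echo "would run: ./run_tests -j$jobs"
```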

If you need to test RAVEN against the heavy tests, the following command runs the heavy tests exclusively:

./run_tests --heavy

Interpreting Test Results

At the end of testing, three numbers will be reported: passed, skipped, and failing. We hope to have as many tests in the "passed" column as possible.

Tests are classified as "skipped" for a couple of possible reasons. The RAVEN development team may have disabled a test due to a change in functionality or inconsistent behavior while we investigate the test. Alternatively, some tests rely on specific optional libraries, operating systems, or installed codes, and are skipped if one of those prerequisites is not met. No base-level requirement tests are allowed to be skipped, so in general skipped tests should not be a concern.

Failing tests are always a cause for concern, but there can be some common-cause failures that present themselves. For example, at the time of this writing roughly 70 tests rely on a display environment that produces plots for testing; if the display environment is not set up in a way RAVEN can use it, all of these tests may fail for that one reason. Whenever tests are failing, the best course of action is to send the full output log of the test runs, along with some details about the system you're running on, to the RAVEN user list so we can better understand the failures.
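When preparing such a report, a few standard commands (none of them RAVEN-specific) gather the basic system details worth including:

```shell
# Sketch: collect basic environment details to attach when reporting failures.
# These are standard commands, not RAVEN-specific tooling.
uname -a                                            # OS and kernel
python3 --version 2>/dev/null || python --version   # Python interpreter in use
git rev-parse --short HEAD 2>/dev/null \
    || echo "no git revision available"             # current RAVEN commit, if any
```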