Getting started with uDALES to set up your own experiments is straightforward. This guide goes through the steps required to install uDALES, and to set up and run a simple example. Results are output in netCDF format; for a quick inspection you can use GUI tools such as Panoply or ncview. To learn more about pre- and post-processing steps, see the What's next section.
If you have Singularity available on your system, you can use the provided scripts under tools/singularity to build and run uDALES cases locally or in HPC environments. See Singularity for instructions; otherwise, see the next section.
uDALES is supported on Linux, macOS and Windows Subsystem for Linux (WSL). Please ensure that suitably recent versions of the following libraries and software are available on your system (a quick way to check is sketched after the list):
- CMake >= 3.9.
- NetCDF-Fortran >= 4.
- GNU >= 9, Intel, or Cray Fortran compiler.
- A recent version of MPICH or Open-MPI.
- FFTW.
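If you are unsure what is already installed, you can query the versions from a terminal. This is a rough sketch; the exact commands (in particular nf-config and pkg-config) depend on how the packages were installed on your system:

cmake --version                  # CMake
gfortran --version               # GNU Fortran compiler
nf-config --version              # netCDF-Fortran (if its config helper is installed)
mpirun --version                 # MPICH or Open-MPI
pkg-config --modversion fftw3    # FFTW (requires pkg-config)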
To set up a project template for uDALES with a generic folder structure that you can later use to set up your own experiments, you will need:
- Git.
When you create your own experiments, you will need to set up specific input files. A pre-processing system written in MATLAB does this for you; it is described under pre-processing and is not covered in this getting-started guide. For this you will need:
- MATLAB >= R2017b.
For better organised netCDF output files, you will need:
- netCDF Operators (NCO).
On local systems, these software packages and libraries (except MATLAB) should be available from your system's package manager (e.g. APT, yum, Homebrew); examples of how to install all the required libraries on Linux/WSL and macOS are given below.
On high performance computing (HPC) clusters, this software should already be installed; please refer to your cluster's documentation for how to load the modules for the software and libraries listed above. Alternatively, you can install all the required packages after installing Linuxbrew, using the instructions for macOS below.
On Linux/WSL, install the required packages with APT:
sudo apt-get update && sudo apt-get upgrade -y
sudo apt-get install -y git cmake gfortran libomp-dev libopenmpi-dev openmpi-bin libnetcdf-dev libnetcdff-dev nco python3 python3-pip libfftw3-dev
On macOS, use Homebrew to install the required libraries. If you do not have Homebrew installed on your system, install it following the Homebrew installation page; then, to install all the required dependencies, including MPI support, run the following commands from your terminal prompt:
brew update
brew install git cmake gcc netcdf netcdf-fortran mpich nco python3 fftw
Create a top-level directory, for example called "uDALES":
mkdir uDALES
Clone the u-dales repository into the top-level directory:
cd uDALES
git clone --recurse-submodules https://github.com/uDALES/u-dales.git
Create directories for experiment set-ups and output data:
mkdir experiments outputs
such that your directory tree resembles the following:
.
├── experiments                # Configuration files grouped by experiment number.
│   └── <N>                    # Configuration files needed by uDALES to run experiment <N>, a three-digit integer (e.g. 001).
│       └── ...
├── outputs                    # Output data grouped by experiment number.
│   └── <N>                    # Output from experiment <N>.
│       └── ...
└── u-dales                    # uDALES model development repository (contains the 2decomp-fft submodule).
    ├── 2decomp-fft
    │   └── ...
    ├── src
    │   └── ...
    └── tools
        └── ...
In the next steps we will assume your current working directory is the top-level project directory.
To compile uDALES (in release mode) on common local systems (e.g. Ubuntu or macOS) using the helper script, run:
# We assume you are running the following commands from the u-dales directory
tools/build_executable.sh common release
Alternatively, you can build manually. On standard systems and configurations, you can build uDALES with the following commands:
# We assume you are running the following commands from your
# top-level project directory.
mkdir -p u-dales/build/release # in case you want to later create a build/debug
pushd u-dales/build/release
cmake -LA ../..
make
popd
You can compile in parallel by passing Make the -j flag followed by the number of CPU cores to use; for example, to compile with 2 cores run make -j2.
To compile uDALES (in release mode) on the ICL HPC cluster run:
# We assume you are running the following commands from the u-dales directory
tools/build_executable.sh icl release
To compile uDALES (in release mode) on ARCHER2, use:
# We assume you are running the following commands from the u-dales directory
tools/build_executable.sh archer release
Information for developers: if you are a high performance computing (HPC) user, you are likely using the Environment Modules package for the dynamic modification of your environment via modulefiles, and therefore you may need to hint CMake at the location of netCDF (see below).
Here we show how to compile uDALES using the HPC at ICL as an example; the specific module names and versions installed on your system may differ.
module list # list currently enabled modules -- should be empty!
module avail # list available modules
# This is an example, please check with the previous command for the exact name of the
# modules available on your system. This will load netCDF compiled with Intel Suite
# 2020.2 and add the correct version of icc and ifort to the PATH.
module load intel-suite/2020.2 mpi/intel-2019.8.254 cmake/3.18.2 git/2.14.3
Then, to build the uDALES executable, run the following commands:
# We assume you are running the following commands from your
# top-level project directory.
mkdir -p u-dales/build/release
pushd u-dales/build/release
FC=mpiifort cmake -DNETCDF_DIR=/apps/netcdf/4.4.1-c -DNETCDF_FORTRAN_DIR=/apps/netcdf/4.4.4-fortran -LA ../..
make
popd
where NETCDF_DIR and NETCDF_FORTRAN_DIR indicate the absolute paths to your netCDF-C and netCDF-Fortran installation directories. Here we pass these paths manually; alternatively, you can use the utilities nc-config and nf-config to obtain the location of netCDF and hint it to CMake. You can compile in parallel by passing Make the -j flag followed by the number of CPU cores to use; for example, to compile with 2 cores run make -j2.
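For example, if nc-config and nf-config are on your PATH after loading the modules, you could let them supply the prefixes instead of typing the paths by hand. This is a sketch under that assumption, using the same mpiifort wrapper as above:

# Run from u-dales/build/release, as in the example above.
NETCDF_C_PREFIX=$(nc-config --prefix)        # netCDF-C installation prefix
NETCDF_FORTRAN_PREFIX=$(nf-config --prefix)  # netCDF-Fortran installation prefix
FC=mpiifort cmake -DNETCDF_DIR=$NETCDF_C_PREFIX -DNETCDF_FORTRAN_DIR=$NETCDF_FORTRAN_PREFIX -LA ../..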
By default, uDALES compiles in Release mode. You can change this by specifying options (or flags) at configure time. The general syntax for specifying an option in CMake is -D<flag_name>=<flag_value>, where <flag_name> is the option/flag name and <flag_value> is the option/flag value. The following options can be specified when configuring uDALES:
Name | Options | Default | Description |
---|---|---|---|
CMAKE_BUILD_TYPE | Release, Debug | Release | Whether to optimise/build with debug flags |
NETCDF4_DIR | <path> | - | Path to netCDF-C installation directory |
NETCDF_FORTRAN_DIR | <path> | - | Path to netCDF-Fortran installation directory |
SKIP_UPDATE_EXTERNAL_PROJECTS | ON, OFF | OFF | Whether to skip updating external projects |
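For example, to configure a separate debug build alongside the release build created earlier, only CMAKE_BUILD_TYPE needs to change. The following is a sketch using the same directory layout as above:

# We assume you are running the following commands from your
# top-level project directory.
mkdir -p u-dales/build/debug
pushd u-dales/build/debug
cmake -DCMAKE_BUILD_TYPE=Debug -LA ../..
make
popd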
To set up a new simulation, the script copy_inputs.sh in u-dales/tools is used to create a new simulation set-up new_exp_id based on another simulation old_exp_id. All exp_ids are three-digit integer numbers, e.g. 001, and are stored in directories of that name. Each experiment directory must contain a config.sh file in which appropriate paths for DA_EXPDIR (experiments directory), DA_WORKDIR (outputs directory) and DA_TOOLSDIR (u-dales/tools directory) are set using export.
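For illustration, a minimal config.sh inside an experiment directory could look like the sketch below; the absolute paths are placeholders that you should replace with your own top-level project directory:

# experiments/<N>/config.sh (illustrative placeholder paths)
export DA_EXPDIR=/path/to/project/experiments      # experiment set-ups
export DA_WORKDIR=/path/to/project/outputs         # simulation outputs
export DA_TOOLSDIR=/path/to/project/u-dales/tools  # uDALES helper scripts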
Now, to set up a new experiment (here we use case 009) based on a previous example (here we use case 001), run:
# We assume you are running the following commands from your
# top-level project directory.
# General syntax: copy_inputs.sh path/to/old_exp_dir new_exp_id
u-dales/tools/copy_inputs.sh experiments/001 009
# To set up a new simulation starting from the restart files of another simulation
# ("warmstart"), use the 'w' flag. E.g.: copy_inputs.sh old_exp_id new_exp_id w
u-dales/tools/copy_inputs.sh experiments/001 009 w
The scripts local_execute.sh (for local machines), hpc_execute.sh (for the ICL cluster) and archer_execute.sh (for ARCHER2) in u-dales/tools are used as wrappers to run simulations. These scripts contain several helpers to run the simulations and to merge outputs from several CPUs into a single file (see Post-processing for more information about the individual scripts).
The scripts require several variables to be set. Below is an example set-up for copying and pasting. You can also specify these parameters in a config.sh file within the experiment directory, which is then read by the scripts.
Note that you need to choose the number of CPUs for the simulation such that the number of grid cells in the y-direction (the jtot parameter in the namoptions input file) is a multiple of the number of CPUs.
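For example, you can check the value of jtot before choosing the number of CPUs; the experiment number and file name below (009, namoptions.009) follow the example used in this guide:

grep -i jtot experiments/009/namoptions.009   # e.g. jtot = 64 allows 2, 4, 8, 16, 32 or 64 CPUs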
# We assume you are running the following commands from your
# top-level project directory.
export DA_TOOLSDIR=$(pwd)/u-dales/tools # Directory of scripts
export DA_BUILD=$(pwd)/u-dales/build/release/u-dales # Build file
export NCPU=2 # Number of CPUs to use for a simulation
export DA_WORKDIR=$(pwd)/outputs # Output top-level directory
Then, to start the simulation, run:
# We assume you are running the following commands from your
# top-level project directory.
# General syntax: local_execute.sh exp_directory
./u-dales/tools/local_execute.sh experiments/009
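Once the run has finished, the merged netCDF output should be in the corresponding outputs directory. You can take a quick look at it with ncdump; the file name below (fielddump.009.nc) is only an assumed example and may differ for your set-up:

ls outputs/009                          # list the output files of experiment 009
ncdump -h outputs/009/fielddump.009.nc  # print the header of an (assumed) output file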
On the ICL HPC cluster, an example set-up is:
export DA_TOOLSDIR=$(pwd)/u-dales/tools # Directory of scripts
export DA_BUILD=$(pwd)/u-dales/build/release/u-dales # Build file
export NCPU=24 # Number of CPUs to use for a simulation
export NNODE=1 # Number of nodes to use for a simulation
export WALLTIME="00:30:00" # Maximum runtime for simulation in hours:minutes:seconds
export MEM="128gb" # Memory request per node
For guidance on how to set the parameters on HPC, have a look at Job sizing guidance. Then, to start the simulation, run:
# We assume you are running the following commands from your
# top-level project directory.
# General syntax: hpc_execute.sh exp_directory
./u-dales/tools/hpc_execute.sh experiments/009
On ARCHER2, an example set-up is:
export DA_TOOLSDIR=$(pwd)/u-dales/tools # Directory of scripts
export DA_BUILD=$(pwd)/u-dales/build/release/u-dales # Build file
export NCPU=128 # Number of CPUs to use for a simulation
export NNODE=1 # Number of nodes to use for a simulation
export WALLTIME="24:00:00" # Maximum runtime for simulation in hours:minutes:seconds
export MEM="256gb" # Memory request per node
export QOS="standard" # Queue
For guidance on how to set the parameters on ARCHER2, have a look at the ARCHER2 documentation. In particular, take care to edit the archer_execute.sh script so that the account corresponds to one you can use.
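ARCHER2 uses the Slurm scheduler, so the account is typically set through a directive of the form shown below inside the job script; the exact line in archer_execute.sh is an assumption you should verify against your own copy and allocation:

#SBATCH --account=<budget-code>   # replace with an ARCHER2 budget code you are allowed to use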
Then, to start the simulation, run:
# We assume you are running the following commands from your
# top-level project directory.
# General syntax: archer_execute.sh exp_directory
bash ./u-dales/tools/archer_execute.sh experiments/009
If you are looking for information on how to install or use Singularity on your system, please refer to the Singularity documentation. The use of Singularity is the easiest way to build and run cases in uDALES, as all dependencies are provided and uDALES compiles out of the box. Furthermore, users wishing to achieve a reasonable level of scientific reproducibility may archive software, tools, and data, together with their Singularity image containing the OS and external libraries, to an open-access repository (e.g. Meyer et al., 2020).
First clone the uDALES repository with:
git clone https://github.com/uDALES/u-dales.git
Then, to build the Singularity image remotely and download it, use:
singularity build --remote tools/singularity/image.sif tools/singularity/image.def
Then, to build uDALES, use:
# udales_build.sh <NPROC> [Debug, Release]
./tools/singularity/udales_build.sh 2 Release
Finally, to run an example case use:
# udales_run.sh <NPROC> <BUILD_TYPE> <PATH_TO_CASE> <NAMELIST>
./tools/singularity/udales_run.sh 2 Release examples/001 namoptions.001
If you are looking to run the build and run commands on HPC, we have provided a sample script under tools/singularity/udales_pbs_submit.sh, which you can modify and run with qsub tools/singularity/udales_pbs_submit.sh.
This simple guide is meant to get you started from an existing example; we have set up several example simulations for this purpose. To learn more about the pre- and post-processing steps in uDALES, please see the pre-processing and post-processing pages.