Object-oriented Utilitarian Functionality for Large-scale Physics Simulations


A machine-specific Makefile is needed to build GenASiS programs. Several 
sample Makefiles are provided under the subdirectory Build/Machines. 
Minor modifications to the provided Makefile that most closely
approximates one's computing environment are often sufficient to get started. 
The essential information needed includes the name of the compiler wrapper
to compile MPI-based code (e.g. commonly 'mpif90' for Fortran), 
compiler-specific flags for various debugging and optimization options, and
the flags and locations to include and link with third-party libraries such
as Silo. 

Once the machine-specific Makefile is set up, the environment variable 
GENASIS_MACHINE has to be set to tell the GenASiS build system to use the 
corresponding Makefile. For example, to use the Makefile for the GCC compiler
on a Linux machine (i.e. Makefile_Linux_GCC), in a Bash shell one can set:

> export GENASIS_MACHINE=Linux_GCC

In most common computing environments with a generic MPI library, the
fluid dynamics example programs (described in the accompanying paper) can then
be built and executed (here with 8 MPI processes) with the following commands:

> cd Programs/Examples/Basics/FluidDynamics/Executables
> make PURPOSE=OPTIMIZE all
> mpirun -np 8 ./SineWaveAdvection_Linux_GCC nCells=128,128,128
> mpirun -np 8 ./SawtoothWaveAdvection_Linux_GCC nCells=128,128,128
> mpirun -np 8 ./RiemannProblem_Linux_GCC nCells=128,128,128

(To compile in a manner that is unoptimized but useful for debuggers, 
replace 'PURPOSE=OPTIMIZE' with 'PURPOSE=DEBUG'. Or omit it altogether; 
in the absence of a specification of PURPOSE, the Makefile in 
FluidDynamics/Executables sets PURPOSE=DEBUG as a default.)
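The defaulting of PURPOSE described above can be illustrated with a small
shell sketch (the variable name PURPOSE is taken from the Makefiles above;
the shell syntax here is only an analogy for the Makefile's fallback
behavior, not its actual mechanism):

```shell
# Illustrative only: mimics in shell how the Executables Makefile
# falls back to PURPOSE=DEBUG when no PURPOSE is specified.
unset PURPOSE
PURPOSE=${PURPOSE:-DEBUG}
echo "PURPOSE=$PURPOSE"
```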
Note that in these examples the optional, non-default parameter values for
nCells, nWavelengths, and FinishTime---which were used in generating the
figures in the accompanying paper---are passed to the programs via
command-line options. The 1D and 2D cases of these programs can also be
executed by specifying fewer elements for nCells, for example

> mpirun -np 2 ./RiemannProblem_Linux_GCC nCells=128 Dimensionality=1D
> mpirun -np 4 ./RiemannProblem_Linux_GCC nCells=128,128 Dimensionality=2D

where the 'Dimensionality' option here is used only as a suffix to the 
name of the output file (it should be consistent with the number of 
elements given to nCells, which the program uses to determine the 
desired dimensionality of the mesh).
By default the output files are written in the directory "Output"
that resides at the same level as the "Executables" directory, but
this can be changed with the optional 'OutputDirectory' command-line
option.
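The consistency requirement between nCells and Dimensionality can be checked
mechanically. The following shell sketch (an illustrative helper, not part of
GenASiS) derives the expected Dimensionality string from the number of
comma-separated elements in an nCells value:

```shell
# Count the comma-separated elements of an nCells value and form
# the matching Dimensionality string (illustrative helper, not GenASiS code).
ncells="128,128"
ndim=$(echo "$ncells" | awk -F, '{print NF}')
dimensionality="${ndim}D"
echo "$dimensionality"
```

For "128,128" this yields 2D, matching the Dimensionality=2D option above.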

If the VisIt visualization package is available, plots similar to the Figures 
in the accompanying paper can be generated using the supplied visualization
script called from the "Output" directory. The script takes one argument, 
which is the program name appended with the "Dimensionality" string. Assuming 
the executable "visit" is available, the visualization script can be called, 
for example, as follows:

> cd Programs/Examples/Basics/FluidDynamics/Output
> visit -cli -s ../ SineWaveAdvection_3D
> visit -cli -s ../ SawtoothWaveAdvection_2D
> visit -cli -s ../ RiemannProblem_1D

The molecular dynamics programs described in the accompanying paper can be 
built and executed in a manner similar to those in FluidDynamics. The 
directory MolecularDynamics is also found under Programs/Examples/Basics.
A blanket "make all" command in the Executables subdirectory compiles both 
ArgonEquilibrium and ClusterFormation. For both programs, all results 
presented in the accompanying paper were obtained with parameters 
nSteps=10000 and nWrite=1000. Various numbers of particles and processes 
used for different runs are mentioned in the accompanying paper. In the case 
of ClusterFormation, the number of particles is directly specified by a 
parameter nParticles. For ArgonEquilibrium a parameter nUnitCellsRoot is 
used instead; the number of particles is 4 * ( nUnitCellsRoot ** 3 ). Thus 
the values 8, 12, 16, and 20 for nUnitCellsRoot correspond to 2048, 6912,
16384, and 32000 particles respectively. Specification of the number density
and temperature parameters for different phases of argon is discussed in the
accompanying paper.
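The particle-count relation for ArgonEquilibrium can be verified with a
one-line computation; this shell sketch reproduces the numbers quoted above:

```shell
# nParticles = 4 * nUnitCellsRoot^3 for the ArgonEquilibrium setup
for n in 8 12 16 20; do
  echo "nUnitCellsRoot=$n -> nParticles=$(( 4 * n * n * n ))"
done
```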

Unit test programs exercising individual GenASiS classes can similarly be
built and executed inside the "Executables" directory of each leaf
division of the code under "Programs/UnitTests". For example, the following 
commands build and execute the unit test programs for classes in the 
"Runtime" division:

> cd Programs/UnitTests/Basics/Runtime/Executables
> make all
> mpirun -np 1 [program_name]

This blanket "make all" builds all the unit test targets in the Makefile
fragment Programs/Basics/Runtime/Makefile_Runtime. Individual targets can,
of course, also be built.

GenASiS Basics supports the use of GPUs where available via the OpenMP 
'target' directive for GPU offload, available since the OpenMP 4.5 
specification. In the included Makefiles, where support for OpenMP offload 
is known to work well (e.g. with the IBM XL and Cray CCE compilers), 
compilation for OpenMP offload is enabled by default. It can be turned off 
by setting the Makefile variable "ENABLE_OMP_OFFLOAD=0". A further 
description of the GenASiS GPU offload capability can be found in [1].

GenASiS Basics has been tested with recent versions of the following 
compilers: the GCC Fortran compiler (gfortran, part of GCC), the Cray 
Compiler Environment (CCE), and the IBM XL Fortran compiler. GenASiS Basics 
is written in full compliance with the Fortran standard to enhance 
portability. Earlier releases of this code can be found in [2, 3, 4].

Sample Output

A sample output from a 2D run of the RiemannProblem example program is 
provided. The following commands were used to generate the output on the 
OLCF Summit system:

> make PURPOSE=OPTIMIZE RiemannProblem
> jsrun -n 4 -g 1 -c 7 --bind packed:7 --smpiargs="-gpu" \
  ./RiemannProblem_POWER_XL Dimensionality=2D nCells=1024,1024 \
  Verbosity=INFO_2 nWrite=1 FinishTime=0.25 \
  OutputDirectory=../RiemannProblem_2D_SampleOutput/ \
  |& tee RiemannProblem_2D_SampleOutput.STDOUT

The file "RiemannProblem_2D_SampleOutput.STDOUT" is then copied to the 
RiemannProblem_2D_SampleOutput/ directory.

Christian Cardall
Reuben Budiardja


