This suite contains a number of kernel operations, called Parallel Research Kernels, plus a simple build system intended for a Linux-compatible environment. Most of the code relies on open standard programming models and thus can be executed on many computing systems.
These programs should not be used as benchmarks. They are operations to explore features of a hardware platform, but they do not define fixed problems that can be used to rank systems. Furthermore, they have not been optimized for the features of any particular system.
To build the codes the user needs to make certain changes by editing text files. Assuming the source tree is untarred in directory $PRK, the following file needs to be copied to $PRK/common/make.defs and edited.
$PRK/common/make.defs.in -- This file specifies the name of the C compiler (CC) and of the MPI (Message Passing Interface) compiler or compile script (MPICC). If MPI is not going to be used, the user can ignore the value of MPICC. The compilers should already be in your path. That is, if you define CC=icc, then typing `which icc` should show a valid path to where that compiler is installed.
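As a concrete illustration, a minimal make.defs might look like the following. The variable names CC and MPICC come from the description above; the compiler names shown are illustrative, so substitute whatever `which` resolves on your system.

```make
# common/make.defs -- minimal example (compiler names are illustrative)
CC=gcc        # C compiler; `which gcc` must show a valid path
MPICC=mpicc   # MPI compile script; can be ignored if MPI is not used
```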
Special instructions for building and running codes using Charm++, Grappa, OpenSHMEM, or Fine-Grain MPI are provided separately.
We provide working examples for a number of programming environments; some are tested more thoroughly than others.
If you are looking for the simplest option, try
- Cray compilers on Cray XC systems.
- GCC with the CUDA compiler (used only in the C++/CUDA implementation).
- GCC compiler tool chain, which supports essentially all implementations.
- FreeBSD (rarely tested).
- IBM Blue Gene/Q compiler tool chain (deprecated).
- IBM compilers for POWER9 and NVIDIA Volta platforms.
- Intel compiler tool chain, which supports most implementations.
- LLVM compiler tool chain, which supports most implementations.
- GCC compiler tool chain with MUSL as the C standard library, which is required to use C11 threads.
- Intel oneAPI (https://software.intel.com/oneapi/hpc-kit).
- PGI compiler tool chain (infrequently tested).
- HIP compiler tool chain (infrequently tested).
Some of the C++ implementations require you to install Boost, RAJA, Kokkos, or Parallel STL, and then modify make.defs appropriately. Please see the accompanying documentation for details. You can refer to the travis subdirectory for install scripts that can be readily modified to install any of the dependencies in your local environment.
Supported Programming Models
- MPI with one-sided communications (MPIRMA)
- MPI with direct use of shared memory inside coherency domains (MPISHM)
- MPI with OpenMP inside coherency domains (MPIOPENMP)

These extensions are not yet complete.
More recently, we have implemented many single-node programming models in modern languages.
y = yes
i = in-progress, incomplete, incorrect, or incredibly slow
f = see footnotes
- C++11 threads, async: y
By intrinsics, we mean the language's built-in features, such as Fortran colon notation or, in a few places, DO CONCURRENT.
x = externally supported (in the Chapel repo)
- Python 3 w/ Numpy: y
- Python 3 w/ mpi4py: y
Type `make help` in the top directory for the latest information.
To build all available kernels of a certain version, type in the root directory:
- builds all kernels.
- builds all serial kernels.
- builds all OpenMP kernels.
- builds all conventional two-sided MPI kernels.
- builds all MPI kernels.
- builds all Fine-Grain MPI kernels.
- builds all Adaptive MPI kernels.
- builds all hybrid MPI+OpenMP kernels.
- builds all MPI-3 kernels with one-sided communications.
- builds all kernels with MPI-3 shared memory.
- builds all OpenSHMEM kernels.
- builds all Unified Parallel C (UPC) kernels.
- builds all Charm++ kernels.
- builds all Grappa kernels.
- builds all Fortran kernels.
- builds all C99/C11 kernels.
- builds all C++11 kernels.
The global make process uses a single set of optimization flags for all
kernels. For more control, the user should consider individual makes
(see below), carefully choosing the right parameters in each Makefile.
If a single set of optimization flags different from the default is desired, the command line can be adjusted:
make all<version> default_opt_flags=<list of optimization flags>
The global make process uses some defaults for the Branch kernel
(see Makefile in that directory). These can be overridden by adjusting
the command line:
make all<version> matrix_rank=<n> number_of_functions=<m>
Note that new values for matrix_rank and number_of_functions will not take effect unless `make veryclean` has been issued.
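Combining the options above, a typical rebuild with non-default optimization flags and Branch parameters might look like the following sketch (the flag and parameter values are illustrative):

```shell
make veryclean        # required before new matrix_rank / number_of_functions take effect
make all default_opt_flags="-O2" matrix_rank=7 number_of_functions=200
```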
Descend into the desired sub-tree and cd to the kernel(s) of interest. Each kernel has its own Makefile. There are a number of parameters that determine the behavior of the kernel that need to be known at compile time. These are explained succinctly in the Makefile itself. Edit the Makefile to activate certain parameters, and/or to set their values.
Typing `make` without parameters in each leaf directory will prompt the user for the correct parameter syntax. Once the code has been built, typing the name of the executable without any parameters will prompt the user for the correct run-time parameter syntax.
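For instance, a per-kernel build and run might look like the following sketch. The directory, target, and parameter values here are hypothetical illustrations; consult the kernel's Makefile and the usage prompt for the actual syntax.

```shell
cd SERIAL/Transpose    # example kernel directory; names vary by sub-tree
make transpose         # edit the Makefile first for compile-time parameters
./transpose            # no arguments: prints the correct parameter syntax
./transpose 10 1024    # e.g. <# iterations> <matrix order>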
Running the test suite
After the desired kernels have been built, they can be tested by executing scripts in the 'scripts' subdirectory from the root of the kernels package. Currently two types of run scripts are supported:
- scripts/small: tests only very small examples that should complete in just a few seconds. This merely tests the functionality of kernels and installed runtimes.
- scripts/wide: tests examples that will take up most of the memory on a single node with 64 GB of memory.
Only a few parameters can be changed globally; for rigorous testing, the user should run each kernel individually, carefully choosing the right parameters. This may involve editing the individual Makefiles and rerunning the kernels.
Example build and runs
```shell
make all default_opt_flags="-O2" "matrix_rank=7" "number_of_functions=200"
./scripts/small/runopenmp
./scripts/small/runmpi1
./scripts/wide/runserial
./scripts/small/runcharm++
./scripts/wide/runmpiopenmp
```
To exercise all kernels, type
We have a rather massive test matrix running in Travis CI. Unfortunately, the Travis CI environment may vary with time and occasionally differs from what we are running locally, which makes debugging tricky. If the status of the project is not passing, please inspect the details, because this may not be an indication of an issue with our project, but rather something in Travis CI.
See COPYING for licensing information.
Note on nstream
Note that while our nstream operations are based on the well-known STREAM benchmark by John D. McCalpin, we modified the source code and do not follow the run rules associated with this benchmark.
Hence, according to the rules defined in the STREAM license (see clause 3b), you must never report the results of our nstream operations as official "STREAM Benchmark" results. The results must be clearly labeled whenever they are published. Examples of proper labeling include "tuned STREAM benchmark results" or "based on a variant of the STREAM benchmark code". Other comparable, clear, and reasonable labeling is acceptable.