= Welcome to the FELTOR project!
:source-highlighter: pygments
:toc: macro

image::3dpic.jpg[3dsimulation]

FELTOR (Full-F ELectromagnetic code in TORoidal geometry) is both a
numerical library and a scientific software package built on top of it.

Its main physical targets are plasma edge and scrape-off layer
(gyro-)fluid simulations. The numerical methods centre around
discontinuous Galerkin methods on structured grids. Our core level
functions are parallelized for a variety of hardware from multi-core CPU
to hybrid MPI{plus}GPU, which makes the library incredibly fast.

https://zenodo.org/badge/latestdoi/14143578[image:https://zenodo.org/badge/14143578.svg[DOI]]
link:LICENSE[image:https://img.shields.io/badge/License-MIT-yellow.svg[License:
MIT]]

toc::[]

== 1. Quick start guide

Go ahead and clone our library into any folder you like

[source,sh]
----
git clone https://www.github.com/feltor-dev/feltor
----

You also need to clone https://github.com/thrust/thrust[thrust] and
https://github.com/cusplibrary/cusplibrary[cusp], both distributed under
the Apache-2.0 license. So, again in a folder of your choice

[source,sh]
----
git clone https://www.github.com/thrust/thrust
git clone https://www.github.com/cusplibrary/cusplibrary
----

____
Our code only depends on external libraries that are themselves openly
available. We note here that we do not distribute copies of these
libraries.
____

=== Using FELTOR's code projects

In order to compile one of the many codes inside FELTOR you need to tell
the feltor configuration where the external libraries are located on
your computer. The default way to do this is to go to your `HOME`
directory, make an `include` directory and link the paths in this
directory

[source,sh]
----
cd ~
mkdir include
cd include
ln -s path/to/thrust/thrust thrust
ln -s path/to/cusplibrary/cusp cusp
----

____
If you do not like this, you can also create your own config file as
described link:config/README.md[here].
____

Now let us compile the first benchmark program.

[source,sh]
----
cd path/to/feltor/inc/dg
make blas_b device=omp # (for an OpenMP version)
# or
make blas_b device=gpu # (if you have a GPU and nvcc)
----

____
The minimum requirement
to compile and run an application is a working C{plus}{plus} compiler (g{plus}{plus} per
default) and a CPU. To simplify the compilation process we use the GNU
Make utility, a standard build automation tool that automatically builds
the executable program. We don't use new C{plus}{plus}-11 standard features to
avoid complications since some clusters are a bit behind on up-to-date
compilers. The OpenMP standard is natively supported by most recent
C{plus}{plus} compilers.
Our GPU backend uses the
https://developer.nvidia.com/cuda-zone[Nvidia-CUDA] programming
environment and in order to compile and run a program for a GPU a user
needs the nvcc CUDA compiler (available free of charge) and an NVIDIA
GPU. However, we explicitly note that due to the modular design of our
software a user does not need to possess a GPU or the nvcc compiler. The
CPU version of the backend is equally valid and provides the same
functionality.
____
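
If you want to check beforehand which of these tools are available on
your system, you can for example query their versions:

[source,sh]
----
g++ --version  # the default C++ compiler
nvcc --version # only needed for device=gpu
----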

Run the code with

[source,sh]
----
./blas_b
----

and when prompted for input vector sizes type, for example, `3 100 100 10`,
which creates a grid with 3 polynomial coefficients, 100 cells in x, 100
cells in y and 10 in z. If you compiled for OpenMP, you can set the
number of threads with e.g. `export OMP_NUM_THREADS=4`.
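
If you prefer a non-interactive run, you can pipe the input in the same
way as we do for the MPI simulation programs below (this assumes
`blas_b` reads its parameters from standard input, which is what the
prompt suggests):

[source,sh]
----
export OMP_NUM_THREADS=4      # only relevant for the device=omp build
echo 3 100 100 10 | ./blas_b  # 3 polynomial coefficients, 100x100x10 cells
----
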
____
This program benchmarks various elemental functions the library is
built on. Go ahead and vary the input parameters and see how your
hardware performs. You can compile and run any other program that ends
in `_t.cu` (test programs) or `_b.cu` (benchmark programs) in
`feltor/inc/dg` in this way.
____

Now, let us test the MPI setup.

____
You can of course skip this if you don't have MPI installed on your
computer. If you intend to use the MPI backend, an implementation of the
MPI standard is required. By default `mpic++` is used for compilation.
____

[source,sh]
----
cd path/to/feltor/inc/dg
make blas_mpib device=omp # (for MPI+OpenMP)
# or
make blas_mpib device=gpu # (for MPI+GPU)
----

Run the code with `$ mpirun -n '# of procs' ./blas_mpib`, then specify how
many processes you want to use in the x-, y- and z-direction, for
example `2 2 1` (i.e. 2 processes in x, 2 in y and 1 in z; 4 processes
in total). When prompted for input vector sizes, type for example
`3 100 100 10` (the number of cells divided by the number of processes
must be an integer). If you compiled for MPI{plus}OpenMP, you can set the
number of OpenMP threads with e.g. `export OMP_NUM_THREADS=2`.
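
A complete non-interactive run on 4 processes could, for example, look
like this (assuming, as above, that `blas_mpib` reads both the process
layout and the vector sizes from standard input):

[source,sh]
----
export OMP_NUM_THREADS=2                          # only for the MPI+OpenMP build
echo 2 2 1 3 100 100 10 | mpirun -n 4 ./blas_mpib # 2x2x1 processes, 100x100x10 cells
----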

=== Running a FELTOR simulation

Now, we want to compile and run a simulation program. First, we have to
download and install some libraries for I/O-operations.

For data output we use the
http://www.unidata.ucar.edu/software/netcdf/[NetCDF] library under an
MIT-like license. The underlying https://www.hdfgroup.org/HDF5/[HDF5]
library also uses a very permissive license. Note that for the MPI
versions of the applications you need to build HDF5 and NetCDF with the
`--enable-parallel` flag. Do NOT use the pnetcdf library, which uses the
classic NetCDF file format. Our JSON input files are parsed by
https://www.github.com/open-source-parsers/jsoncpp[JsonCpp] distributed
under the MIT license (use the 0.y.x branch to avoid C{plus}{plus}-11 support).
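
If you need the parallel builds mentioned above, a rough sketch of the
procedure could look like the following; the source paths and install
prefix are placeholders, so consult the HDF5 and NetCDF documentation
for the details of your system:

[source,sh]
----
# build a parallel HDF5 first
cd path/to/hdf5-source
./configure --prefix=$HOME/local --enable-parallel CC=mpicc
make install
# then build NetCDF against it
cd path/to/netcdf-source
./configure --prefix=$HOME/local --enable-parallel CC=mpicc \
    CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib
make install
----
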
____
Some desktop applications in FELTOR use the
https://github.com/mwiesenberger/draw[draw library] (developed by us,
also under MIT), which depends on OpenGL (see the
http://en.wikibooks.org/wiki/OpenGL_Programming[installation guide]) and
http://www.glfw.org[glfw], an OpenGL development library under a
BSD-like license. You don't need these when you are on a cluster.
____

As with thrust and cusp above, you need to create links to the JsonCpp
library include path (and optionally the draw library) in your include
folder, or provide the paths in your config file.
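
The following sketch mirrors the thrust and cusp links from the quick
start guide; the exact link names and target paths are assumptions and
depend on where you cloned or installed JsonCpp and (optionally) draw:

[source,sh]
----
cd ~/include
# adjust both targets to your actual clone locations
ln -s path/to/jsoncpp/include/json json
ln -s path/to/draw draw  # optional, only for the desktop applications
----

We are ready to compile now.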

[source,sh]
----
cd path/to/feltor/src/toefl # or any other project in the src folder
make toeflR device=gpu # (compile on gpu or omp)
./toeflR <inputfile.json> # (behold a live simulation with glfw output on screen)
# or
make toefl_hpc device=gpu # (compile on gpu or omp)
./toefl_hpc <inputfile.json> <outputfile.nc> # (a single node simulation with output stored in a file)
# or
make toefl_mpi device=omp # (compile on gpu or omp)
export OMP_NUM_THREADS=2 # (set OpenMP thread number to 1 for pure MPI)
echo 2 2 | mpirun -n 4 ./toefl_mpi <inputfile.json> <outputfile.nc>
# (a multi-node simulation with a total of 8 threads and output stored in a file)
# The mpi program will wait for you to type the number of processes in x and y direction before
# running. That is why the echo is there.
----

A default input file is located in `path/to/feltor/src/toefl/input`. All
three programs solve the same equations. The technical documentation on
what equations are discretized, input/output parameters, etc. can be
generated as a pdf with `make doc` in the `path/to/feltor/src/toefl`
directory.
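
For example, to see which input files are provided and to build the
technical documentation (this requires pdflatex):

[source,sh]
----
cd path/to/feltor/src/toefl
ls input/   # the provided default input file(s)
make doc    # generates the pdf documentation with pdflatex
----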

=== Using FELTOR as a library

It is possible to use FELTOR as a library in your own code project. Note
that the library is **header-only**, which means that you just have to
include the relevant header(s) and you're good to go. For example in the
following program we compute the square L2 norm of a
function:

.test.cpp
[source,c++]
----
#include <iostream>
//include the basic dg-library
#include "dg/algorithm.h"
//optional: include the geometries expansion
#include "geometries/geometries.h"
double function(double x, double y){return exp(x)*exp(y);}
int main()
{
//create a 2d discretization of [0,2]x[0,2] with 3 polynomial coefficients and 20x20 cells
dg::CartesianGrid2d g2d( 0, 2, 0, 2, 3, 20, 20);
//discretize a function on this grid
const dg::DVec x = dg::evaluate( function, g2d);
//create the volume element
const dg::DVec vol2d = dg::create::volume( g2d);
//compute the square L2 norm on the device
double norm = dg::blas2::dot( x, vol2d, x);
// norm is now: (exp(4)-exp(0))^2/4
std::cout << norm <<std::endl;
return 0;
}
----

To compile and run this code for a GPU use

[source,sh]
----
nvcc -x cu -Ipath/to/feltor/inc -Ipath/to/thrust/thrust -Ipath/to/cusplibrary/cusp test.cpp -o test
./test
----

Or if you want to use OpenMP and gcc instead of CUDA for the device
functions you can also use

[source,sh]
----
g++ -fopenmp -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Ipath/to/feltor/inc -Ipath/to/thrust/thrust -Ipath/to/cusplibrary/cusp test.cpp -o test
export OMP_NUM_THREADS=4
./test
----

If you want to use MPI, just include the MPI header before any other
FELTOR header and use our convenient typedefs like so:

.test_mpi.cpp
[source,c++]
----
#include <iostream>
//activate MPI in FELTOR
#include "mpi.h"
#include "dg/algorithm.h"
double function(double x, double y){return exp(x)*exp(y);}
int main(int argc, char* argv[])
{
//init MPI and create a 2d Cartesian communicator assuming 4 MPI processes
MPI_Init( &argc, &argv);
int periods[2] = {true, true}, np[2] = {2,2};
MPI_Comm comm;
MPI_Cart_create( MPI_COMM_WORLD, 2, np, periods, true, &comm);
//create a 2d discretization of [0,2]x[0,2] with 3 polynomial coefficients and 20x20 cells
dg::CartesianMPIGrid2d g2d( 0, 2, 0, 2, 3, 20, 20, comm);
//discretize a function on this grid
const dg::MDVec x = dg::evaluate( function, g2d);
//create the volume element
const dg::MDVec vol2d = dg::create::volume( g2d);
//compute the square L2 norm
double norm = dg::blas2::dot( x, vol2d, x);
//on every process norm is now: (exp(4)-exp(0))^2/4
//be a good MPI citizen and clean up
MPI_Finalize();
return 0;
}
----

Compile e.g. for a hybrid MPI {plus} OpenMP hardware platform with

[source,sh]
----
mpic++ -fopenmp -DTHRUST_DEVICE_SYSTEM=THRUST_DEVICE_SYSTEM_OMP -Ipath/to/feltor/inc -Ipath/to/thrust/thrust -Ipath/to/cusplibrary/cusp test_mpi.cpp -o test_mpi
export OMP_NUM_THREADS=2
mpirun -n 4 ./test_mpi
----

Note the striking similarity to the previous program. In particular,
the line calling the dot function did not change at all. The compiler
chooses the correct implementation for you! This is a first example of a
__container free numerical algorithm__.
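
As an illustration, consider a small helper template (hypothetical, not
part of FELTOR) written only against the `dg` interface; the same source
compiles unchanged for the shared-memory vector `dg::DVec` and the MPI
vector `dg::MDVec`:

[source,c++]
----
#include <cmath>
// for the MPI version, include "mpi.h" before this header in your program
#include "dg/algorithm.h"

// Hypothetical helper: the same call works for dg::DVec and dg::MDVec
// because dg::blas2::dot dispatches on the container type.
template<class Container>
double l2norm( const Container& x, const Container& volume)
{
    return std::sqrt( dg::blas2::dot( x, volume, x));
}
----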

== 2. Further reading

Please check out our https://github.com/feltor-dev/feltor/wiki[wiki
pages] for some general information, user-oriented documentation and
troubleshooting. Moreover, we maintain tex files in every src folder for
technical documentation, which can be compiled using pdflatex with
`make doc` in the respective src folder. The
http://feltor-dev.github.io/doc/dg/html/modules.html[developer-oriented
documentation] of the dG library was generated with
http://www.doxygen.org[Doxygen] and LaTeX. You can generate a local
version, including informative pdf write-ups on the implemented
numerical methods, directly from the source code. This depends on the
`doxygen`, `libjs-mathjax` and `graphviz` packages and LaTeX. Type
`make doc` in the folder `path/to/feltor/doc` and open `index.html`
(a symbolic link to `dg/html/modules.html`) with your favorite browser.
Finally, also note the documentations of https://thrust.github.io/doc/modules.html[thrust]
and https://cusplibrary.github.io/[cusp].

== 3. Acknowledgements

FELTOR is developed by Matthias Wiesenberger and Markus Held and
receives contributions from an increasing number of people. We
gratefully acknowledge fruitful discussions and code contribution from

- Ralph Kube
- Eduard Reiter
- Lukas Einkemmer
- Jakob Gath

We further acknowledge support on the Knights Landing architecture from
the High Level Support Team members

- Albert Gutiérrez
- Xavier Saez

and from Intel Barcelona

- Harald Servat

== 4. Terms of use

FELTOR is https://www.force11.org/fairprinciples[fair] software and
licensed under the very permissive link:LICENSE[MIT license]. The MIT
License grants you great freedom in what you do with the code as long as
you name us (Matthias Wiesenberger and Markus Held) as creators, in
particular in publications that use FELTOR to produce results. In this
case we suggest taking a snapshot of the used code and creating and
citing a DOI via e.g. http://www.zenodo.org[Zenodo], or citing one of the
existing DOIs if you did not alter the contained code in any way. We are
happy if you cite our papers, but you don't have to just because you
used our code, and we certainly do not demand to be coauthors when we do
not contribute directly to your results.

== 5. Official releases

Our latest code release has a shiny DOI badge from Zenodo

https://zenodo.org/badge/latestdoi/14143578[image:https://zenodo.org/badge/14143578.svg[DOI]]

which makes us officially citable.
29 changes: 0 additions & 29 deletions README.md

This file was deleted.

0 comments on commit 48e4698

Please sign in to comment.