PlasmaApp: A multi-architecture implicit particle-in-cell proxy
PlasmaApp is a flexible implicit, charge- and energy-conserving particle-in-cell (PIC) framework. The code aims to demonstrate the potential of using a fluid plasma model to accelerate a kinetic model through a high-order/low-order system coupling. The multiple granularities of this problem allow it to map well onto emerging heterogeneous architectures with multiple levels of parallelism. The problem also maps well onto very fine-grained parallel architectures, such as GPUs, since the vast majority of the work is encapsulated in the particle system, a trivially parallel problem. The approach is likewise applicable to very large, potentially exascale, systems due to the large amount of particle work performed per communication.
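The coupled solve can be pictured as the loop below. This is only a conceptual sketch: the FluidSolver, ParticleSystem, FieldData, and Moments types, and the fixed inner iteration count standing in for a convergence test, are illustrative and are not PlasmaApp's actual classes.

    // Conceptual sketch of one high-order/low-order coupled time step
    // (illustrative types, not PlasmaApp's actual classes).
    struct FieldData { /* E and B fields on the mesh */ };
    struct Moments   { /* charge and current moments gathered from particles */ };

    struct FluidSolver {
        // Low-order system: implicit fluid/field solve driven by the moments.
        FieldData solve(const Moments&) { return FieldData{}; }
    };

    struct ParticleSystem {
        // High-order system: sub-cycled kinetic particle push; this is the
        // trivially parallel bulk of the work.
        Moments push(const FieldData&, int min_subcycles) { return Moments{}; }
    };

    int main() {
        FluidSolver lo;
        ParticleSystem hi;
        Moments moments{};
        for (int step = 0; step < 100; ++step) {
            // Iterate the low- and high-order systems to self-consistency
            // (a real solver would test a residual against a tolerance).
            for (int k = 0; k < 5; ++k) {
                FieldData fields = lo.solve(moments);
                moments = hi.push(fields, /*min_subcycles=*/4);
            }
        }
    }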
The initial C++ implementation targets hybrid GPU + multi-core systems, but preserves the flexibility to be easily ported to other architectures. This flexibility is accomplished by separating the physics algorithms from the underlying architecture considerations through the use of C++ templates and class inheritance.
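A minimal sketch of what that separation could look like follows; the backend and function names here are illustrative, not PlasmaApp's actual class hierarchy.

    // Physics written once against a templated backend; a CUDA-capable
    // backend exposing the same interface could be swapped in to retarget it.
    #include <cstddef>

    struct CPUBackend {
        template <typename F>
        static void parallel_for(std::size_t n, F body) {
            for (std::size_t i = 0; i < n; ++i) body(i);  // serial loop on CPU
        }
    };

    template <typename Backend>
    void push_positions(float* x, const float* v, float dt, std::size_t n) {
        // The physics: advance particle positions by one (sub)step.
        Backend::parallel_for(n, [=](std::size_t i) { x[i] += v[i] * dt; });
    }

    int main() {
        float x[4] = {0, 1, 2, 3}, v[4] = {1, 1, 1, 1};
        push_positions<CPUBackend>(x, v, 0.5f, 4);
    }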
Building
$> gmake
$> gmake tests
Be sure to use the parallel build option -j N, where N is the number of threads to use.
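For example, on an 8-core machine:

$> gmake -j 8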
Note: Double precision is toggled in the file PlasmaData.h via the preprocessor define DOUBLE_PRECISION. To use single precision, simply comment out this line of code.
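For reference, the toggle looks something like the following; the realkind alias shown here is illustrative and may not match the exact contents of PlasmaData.h.

    /* In PlasmaData.h -- comment this line out to build in single precision. */
    #define DOUBLE_PRECISION

    #ifdef DOUBLE_PRECISION
    typedef double realkind;  /* assumed name of the precision typedef */
    #else
    typedef float realkind;
    #endif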
Note 2: MPI libraries may differ on your machine; you may need to edit the makefile to point to the correct one.
Make arguments
- USECUDA=1 : build with CUDA support for GPU execution
- NOHANDVEC=1 : disable the hand-vectorized code paths
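For example, to produce a CUDA-enabled parallel build:

$> gmake USECUDA=1 -j 8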
Running
There are several test problems currently implemented:
- Two Stream Instability
- Ion Acoustic Shock
To run the Two Stream Instability problem:
$> mpirun -N $NUM_NODES -n $NUM_TASKS ./bin/TwoStream_test -np $NUM_PTCLS -nx 32 -Lx 1 -dt 0.5 -s 100
To run the Ion Acoustic Shock problem:
$> mpirun -N $NUM_NODES -n $NUM_TASKS ./bin/IonAcoustic_test -np $NUM_PTCLS -nx 128 -Lx 144 -dt 0.5 -s 1000
Command Line Arguments
- -nx #, -ny #, -nz # : number of grid cells in the x, y, and z dimensions
- -Lx #, -Ly #, -Lz # : domain length in the x, y, and z dimensions
- -x0 #, -y0 #, -z0 # : domain origin in the x, y, and z dimensions
- --vec_length # : vector length used for the particle lists
- -dt # : time-step size
- -s # : number of time steps (as used in the example runs above)
- -np # : number of particles
- -ns # : number of particle species
- --epsilon # : convergence tolerance for the implicit solve
- --nSpatial # : number of spatial dimensions
- -nVel # : number of velocity dimensions
- --plist-cpu # : particle-list option for CPU execution
- --min-subcycles # : minimum number of particle subcycles
- --num-cores # : number of CPU cores to use
- --gpu-mult # : multiplier on the particle work assigned to the GPU
- -g : enable graphical plot output
- --lo-all : run the low-order solver on all tasks
- --runid # : numeric identifier used to tag the run's output
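For example, a run combining several of these options (the values are illustrative):

$> mpirun -n 4 ./bin/IonAcoustic_test -np 1000000 -nx 128 -Lx 144 -dt 0.5 -s 1000 --min-subcycles 4 --num-cores 8 --runid 1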
Questions?
Email Joshua Payne at payne@lanl.gov