PlasmaApp: A multi-architecture implicit particle-in-cell proxy

PlasmaApp is a flexible implicit, charge- and energy-conserving PIC framework. The code aims to demonstrate the potential of using a fluid plasma model to accelerate a kinetic model through a high-order/low-order (HO-LO) system coupling. The multi-granularity of this problem lets it map well to emerging heterogeneous architectures with multiple levels of parallelism. It also maps well to very fine-grained parallel architectures, such as GPUs, since the vast majority of the work is encapsulated in the particle system, a trivially parallel problem. The approach also applies to very large scale systems, potentially at exascale, due to the large amount of particle work per communication.

The initial C++ implementation targets hybrid GPU + multi-core systems, but preserves the flexibility to be ported easily to other architectures.

This flexibility is achieved by separating the physics algorithms from the underlying architecture considerations through C++ templates and class inheritance.
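
As a rough sketch of this idea (the class and function names below are hypothetical and not the actual PlasmaApp API), the physics driver can be written against an abstract particle-list interface, with architecture-specific subclasses supplying the implementation:

// Hypothetical sketch: the physics driver depends only on an abstract
// particle-list interface; architecture-specific subclasses provide the push.
#include <cstdio>

class ParticleList {
public:
    virtual ~ParticleList() {}
    virtual void push(double dt) = 0;   // advance particles by one (sub)step
};

class CPUParticleList : public ParticleList {
public:
    void push(double dt) override { std::printf("CPU push, dt = %g\n", dt); }
};

class GPUParticleList : public ParticleList {
public:
    void push(double dt) override { std::printf("GPU push, dt = %g\n", dt); }
};

// The particle push loop is written once, independent of the backend.
template <typename ListT>
void advance(ListT& particles, double dt, int nsteps) {
    for (int i = 0; i < nsteps; ++i)
        particles.push(dt);
}

int main() {
    CPUParticleList cpu;
    advance(cpu, 0.5, 3);   // a GPUParticleList could be swapped in here
    return 0;
}

The same driver code could then be instantiated with a GPU-backed list without changing the physics logic.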

Building

$> gmake
Builds all of the required libraries.

$> gmake tests
Builds all of the test routines.

Be sure to use the parallel build option -j N, where N is the number of threads to use.

Note: Double precision is toggled in the file PlasmaData.h via the preprocessor define DOUBLE_PRECISION. To use single precision, simply comment out that line.
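
The line in question in PlasmaData.h presumably looks something like this (the exact form in the file may differ):

// In PlasmaData.h: comment this out to build in single precision
#define DOUBLE_PRECISION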

Note 2: The MPI library may differ on your machine; you may have to edit the makefile to point to the correct one.

Make arguments

USECUDA=1
Enables and builds the CUDA parts of the code (requires CUDA 5.0 or later)
NOHANDVEC=1
Disables hand vectorization. Use this if your machine does not support AVX.
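
For example, on a machine with CUDA support, the libraries and tests could be built with something like the following (an illustrative invocation; adjust the thread count to your machine):

$> gmake USECUDA=1 -j 8
$> gmake tests USECUDA=1 -j 8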

Running

There are several test problems currently implemented.

  1. Two Stream Instability
  2. Ion Acoustic Shock

Running the Two Stream Instability and Ion Acoustic Shock problems:

$> mpirun -N $NUM_NODES -n $NUM_TASKS ./bin/TwoStream_test -np $NUM_PTCLS -nx 32 -Lx 1 -dt 0.5 -s 100
$> mpirun -N $NUM_NODES -n $NUM_TASKS ./bin/IonAcoustic_test -np $NUM_PTCLS -nx 128 -Lx 144 -dt 0.5 -s 1000

Command Line Arguments

-nx #, -ny #, -nz #
Number of cells in the x, y, and z dimensions
-Lx #, -Ly #, -Lz #
System length in Debye lengths
-x0 #, -y0 #, -z0 #
System origin
--vec_length #
Length of the particle list object for the CPU particle push
-dt #
Time step size, normalized to the electron plasma frequency
-np #
Number of particles per MPI task
-ns #
Number of particle species
--epsilon #
Tolerance of the particle Picard loop
--nSpatial #
Number of spatial dimensions to use
-nVel #
Number of velocity dimensions
--plist-cpu #
Which CPU particle list optimization to use: 0 = default, 1 = sorted
--min-subcycles #
Minimum number of subcycles to use during the HO particle push
--num-cores #
Number of CPU cores to use for the shared-memory particle push
--gpu-mult #
Multiplier for the number of particles to run on the GPU vs. the multi-core CPU, for load balancing
-g
Turns on plotting
--lo-all
Run the low-order (LO) solver on all nodes
--runid #
Specify the benchmark output file number
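
As an illustration, the Ion Acoustic Shock run above could be extended with some of these options (the values here are arbitrary and only meant to show the syntax):

$> mpirun -N $NUM_NODES -n $NUM_TASKS ./bin/IonAcoustic_test -np $NUM_PTCLS -nx 128 -Lx 144 -dt 0.5 -s 1000 --num-cores 8 --min-subcycles 4 -g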

Questions?

Email Joshua Payne at payne@lanl.gov

About

Multi-scale plasma physics proxy applications.
