zhang416/dafmpb


DASHMM Accelerated Adaptive Fast Multipole Poisson-Boltzmann Solver

The DASHMM Accelerated Adaptive Fast Multipole Poisson-Boltzmann (DAFMPB) package computes the numerical solution of the linearized Poisson-Boltzmann equation, which describes the electrostatic interactions of molecular systems in ionic solutions.

The linearized Poisson-Boltzmann equation is reformulated as a boundary integral equation and is subsequently discretized using the node-patch scheme. The resulting linear system is solved using GMRES. Within each iteration, the matrix-vector multiplication is accelerated using the DASHMM library.

Installation

DAFMPB depends on two external libraries: DASHMM and HPX-5. DASHMM leverages the global address space of the HPX-5 runtime system to provide a unified evaluation of the multipole methods on both shared and distributed memory computers. This enables the latest version of AFMPB to operate on distributed memory computers while at the same time maintaining backward compatibility on shared memory computers.

Version 4.1.0 of HPX-5 is available from the contrib directory. DASHMM is downloaded automatically when DAFMPB is built.

Users must install HPX-5 on their systems before installing the DAFMPB solver. For users running DAFMPB on shared-memory computers only, HPX-5 can be built as follows:

> cd /path/to/hpx
> ./configure --prefix=/path/to/install
> make
> make install

For users running DAFMPB on distributed-memory computers, HPX-5 currently provides two network interfaces to choose from:

  1. the ISend/IRecv interface with the MPI transport
  2. the Put-with-completion (PWC) interface with the Photon transport.

HPX-5 can be built with either transport.

To configure HPX-5 with the MPI network, add --enable-mpi to the configure line. The configuration searches for the appropriate way to include and link against MPI in the following order:

  1. HPX-5 checks whether mpi.h and libmpi.so are available with no additional flags.
  2. HPX-5 tests for mpi.h and -lmpi in the current C_INCLUDE_PATH and LD_LIBRARY_PATH.
  3. HPX-5 looks for an ompi pkg-config package.
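Putting the steps together, a typical MPI-enabled build might look like the following sketch (the paths are placeholders for your own source and install directories):

```shell
# Sketch of an MPI-enabled HPX-5 build; adjust the paths to your system.
cd /path/to/hpx
./configure --prefix=/path/to/install --enable-mpi
make
make install
```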

To configure HPX-5 with the Photon network, add --enable-photon to the configure line. HPX-5 does not provide its own distributed job launcher, so it is necessary to also use either the --enable-mpi or the --enable-pmi option in order to build support for mpirun or aprun bootstrapping.

Note that if you are building with the Photon network, the libraries for the network interconnect you are targeting need to be present on the build system. The two supported interconnects are InfiniBand (libverbs and librdmacm) and Cray's GEMINI and ARIES via uGNI (libugni). On Cray machines you need to add PHOTON_CARGS="--enable-ugni" to the configure line so that Photon builds with uGNI support. Finally, the --enable-hugetlbfs option causes the HPX-5 heap to be mapped with huge pages, which is necessary for larger heaps on some Cray Gemini machines.
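As a sketch, a Photon build on a Cray machine might combine the options above like this (the paths are placeholders, and the exact option set depends on your system; this is not a verified configuration):

```shell
# Sketch of a Photon + uGNI build of HPX-5 on a Cray machine.
# PMI bootstrapping is chosen here for use with aprun.
cd /path/to/hpx
./configure --prefix=/path/to/install \
    --enable-photon --enable-pmi \
    --enable-hugetlbfs \
    PHOTON_CARGS="--enable-ugni"
make
make install
```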

Once HPX-5 is installed, the DAFMPB package can be built in the following steps:

> mkdir dafmpb-build
> cd dafmpb-build
> cmake ../dafmpb 
> make 

This puts the executable dafmpb in the dafmpb/example directory.
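If HPX-5 was installed to a non-default prefix, CMake may fail to locate it. One common remedy, assuming HPX-5 installs a pkg-config file under lib/pkgconfig (an assumption about your install layout, not something this package documents), is to extend PKG_CONFIG_PATH before configuring:

```shell
# Assumed layout: HPX-5 installed under /path/to/install with a
# pkg-config file in /path/to/install/lib/pkgconfig.
export PKG_CONFIG_PATH=/path/to/install/lib/pkgconfig:$PKG_CONFIG_PATH
mkdir dafmpb-build
cd dafmpb-build
cmake ../dafmpb
make
```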

Example

The minimum input to dafmpb is a PQR file.

DAFMPB can read meshes generated by MSMS or TMSmesh. If no input mesh is provided, DAFMPB invokes its built-in surface meshing routine.

  • Example 1, use built-in mesh routine
> ./dafmpb --pqr-file=GLY.pqr
  • Example 2, use mesh generated by MSMS
> ./dafmpb --pqr-file=GLY.pqr --mesh-format=1 --mesh-file=GLY.pqr-mesh.data-d20-r0.5 
  • Example 3, use mesh generated by TMSmesh
> ./dafmpb --pqr-file=fas2.pqr --mesh-format=2 --mesh-file=fas2.off 

A list of command line options to the program can be found by issuing

> ./dafmpb --help

To launch dafmpb on a cluster with the Slurm workload manager, a job script looks like this:

#! /bin/bash -l
#SBATCH -p queue
#SBATCH -N 2
#SBATCH -t 00:10:00

srun -n 2 -c 48 ./dafmpb --pqr-file=fas2.pqr.ext --mesh-format=2 --mesh-file=fas2.off --hpx-threads=24

The -c option is set to the number of cores Slurm sees on each compute node, and the --hpx-threads option is set to the number of physical cores available on each compute node.

If the above cluster instead used the PBS workload manager, the script would look like this:

#! /bin/bash -l
#PBS -l nodes=2:ppn=48

aprun -n 2 -d 48 ./dafmpb ...

Finally, the --potential-file option lets the user specify an output file to hold the computed potentials. The output can then be visualized using VCMM.
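For instance, extending Example 3 above with a potential output file (the output filename fas2.pot is hypothetical; any writable path should work):

```shell
# Write the computed potentials to fas2.pot (hypothetical filename).
./dafmpb --pqr-file=fas2.pqr --mesh-format=2 --mesh-file=fas2.off \
    --potential-file=fas2.pot
```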
