In the case of an MD simulation with MPI, LAMMPS didn't proceed past this stage.
mpirun -np 8 lmp -sf omp -pk omp 4 -in in.lammps
run 10
No /omp style for force computation currently active
It works fine, however, with mpirun -np 4 lmp -sf omp -pk omp 8 -in in.lammps.
I am wondering if there is a specific limit on the MPI processor grid size. Also, sometimes the MD simulation ends with the error below.
As far as I know, pair_allegro does not support OpenMP threading via the OPENMP package. Instead, you should set the OMP_NUM_THREADS environment variable explicitly to use OpenMP parallelization. Alternatively, you could use the KOKKOS package with OpenMP threads, assuming LAMMPS has been compiled with -DKokkos_ENABLE_OPENMP=yes, though I'm not sure there is a real benefit.
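A minimal sketch of the two suggested launch styles, assuming a binary named lmp, an input file in.lammps, and 8 MPI ranks with 4 threads each (the rank/thread counts are just examples; adjust to your node layout):

```shell
# Option 1: drop -sf omp / -pk omp (which require the OPENMP package
# styles that pair_allegro lacks) and control threading via the
# environment variable instead.
export OMP_NUM_THREADS=4
mpirun -np 8 lmp -in in.lammps

# Option 2: use the KOKKOS package with OpenMP threads
# (assumes LAMMPS was built with -DKokkos_ENABLE_OPENMP=yes).
mpirun -np 8 lmp -k on t 4 -sf kk -in in.lammps
```

Make sure ranks × threads does not exceed the physical cores available, and pin threads (e.g. with OMP_PROC_BIND/OMP_PLACES or your MPI launcher's binding options) to avoid oversubscription.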
It's possible that this is related to issues we've seen where one of the MPI ranks ends up with no atoms and the run crashes; with fewer MPI ranks you may simply never hit a domain decomposition in which a rank has no atoms.