Describe the bug
The simulation hangs at the second surface-output write on SuperMUC-NG (16 nodes, 16 ranks), using the module netcdf-hdf5-all/4.7_hdf5-1.10-intel19-impi, a relatively large mesh (~32 million cells), and the hdf5 output backend. The hang does not occur with the posix backend, and it occurs only when volume output is enabled. The run uses order 3, elastic equations; it is Bo's Iceland simulation with dynamic rupture.
This may well be a bug in HDF5 itself, but I am reporting it here for the benefit of other users.
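For context, the backend is switched in the &Output namelist of the SeisSol parameter file. A minimal sketch of the failing configuration (a sketch only: the path is illustrative, and I am assuming the current option naming, where xdmfWriterBackend selects the writer):

    &Output
    OutputFile = 'output/iceland'      ! illustrative path
    Format = 6                         ! XDMF (volume/wavefield) output enabled
    xdmfWriterBackend = 'hdf5'         ! hangs; with 'posix' the run completes
    /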
Expected behavior
The posix and hdf5 backends should give the same behavior.
To Reproduce
Steps to reproduce the behavior:
Which version do you use? Provide branch and commit id.
Latest master, commit 092a02a (also tried the latest actor branch).
Which build settings do you use? Which compiler version do you use?
Intel (GCC for the actor branch). The full CMake cache follows; a configure-line sketch is given after the list.
ADDRESS_SANITIZER_DEBUG OFF
ASAGI ON
CMAKE_BUILD_TYPE Release
CMAKE_INSTALL_PREFIX /usr/local
COMMTHREAD ON
COVERAGE OFF
DEVICE_ARCH none
DEVICE_BACKEND none
DYNAMIC_RUPTURE_METHOD quadrature
EQUATIONS elastic
GEMM_TOOLS_LIST LIBXSMM,PSpaMM
HDF5 ON
HDF5_C_LIBRARY_dl /usr/lib64/libdl.so
HDF5_C_LIBRARY_hdf5 /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so/lib/libhdf5.so
HDF5_C_LIBRARY_hdf5_hl /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/netcdf-hdf5-all/4.7_hdf5-1.10-intel-vd6s5so/lib/libhdf5_hl.so
HDF5_C_LIBRARY_m /usr/lib64/libm.so
HDF5_C_LIBRARY_pthread /usr/lib64/libpthread.so
HDF5_C_LIBRARY_sz /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/x86_64/libszip/2.1.1-gcc-eckhac3/lib/libsz.so
HDF5_C_LIBRARY_z /usr/lib64/libz.so
HOST_ARCH skx
INTEGRATE_QUANTITIES OFF
LIKWID OFF
LOG_LEVEL warning
LOG_LEVEL_MASTER info
Libxsmm_executable_PROGRAM /dss/dsshome1/0A/di73yeq4/bin/libxsmm_gemm_generator
MEMKIND OFF
MEMORY_LAYOUT auto
METIS ON
MINI_SEISSOL ON
MPI ON
NETCDF ON
NUMA_AWARE_PINNING ON
NUMA_ROOT_DIR /dss/dsshome1/lrz/sys/spack/release/21.1.1/opt/skylake_avx512/numactl/2.0.12-intel-oaf54jj
NUMBER_OF_FUSED_SIMULATIONS 1
NUMBER_OF_MECHANISMS 0
OPENMP ON
ORDER 3
PLASTICITY_METHOD nb
PRECISION double
PROXY_PYBINDING OFF
PSpaMM_PROGRAM /dss/dsshome1/0A/di73yeq4/bin/pspamm.py
SIONLIB OFF
TESTING OFF
TESTING_GENERATED OFF
USE_IMPALA_JIT_LLVM OFF
easi_DIR /hppfs/work/pr63qo/di73yeq4/myLibs/install_dir_intel/lib64/cmake/easi
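For reference, a configure line that should reproduce the cache above (a sketch, assuming the usual out-of-source CMake build of SeisSol; library paths are picked up from the loaded modules):

    cmake .. \
      -DCMAKE_BUILD_TYPE=Release \
      -DHOST_ARCH=skx -DORDER=3 \
      -DEQUATIONS=elastic -DPRECISION=double \
      -DDYNAMIC_RUPTURE_METHOD=quadrature \
      -DHDF5=ON -DNETCDF=ON -DASAGI=ON -DMETIS=ON \
      -DGEMM_TOOLS_LIST=LIBXSMM,PSpaMM -DCOMMTHREAD=ON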
On which machine does your problem occur? If on a cluster: Which modules are loaded?
SuperMUC-NG, with the module netcdf-hdf5-all/4.7_hdf5-1.10-intel19-impi loaded.
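The failing run was launched roughly as follows (a sketch; the SLURM line assumes one MPI rank per node as described above, and the binary name depends on the build):

    module load netcdf-hdf5-all/4.7_hdf5-1.10-intel19-impi
    # 16 nodes, 16 MPI ranks (1 per node), OpenMP within each node
    srun -N 16 -n 16 ./SeisSol_Release_* parameters.par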
I then tried a new mesh, created by adding ""; the mesh size is still ~32 million cells. With this mesh the simulation continues even with hdf5 volume output turned on.
However, even when the hdf5 output completes successfully, the files cannot be opened in ParaView 5.6 or 5.8 (I did not try 5.7), while the posix output reads without problems.
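To check whether the HDF5 containers themselves are intact, independently of ParaView, the standard HDF5 command-line tools can be used (file names are illustrative):

    h5ls -r output/iceland_cell.h5     # list all groups/datasets recursively
    h5dump -H output/iceland_cell.h5   # print headers only; errors here indicate a corrupt file

If both succeed, the problem is more likely in the generated .xdmf metadata than in the HDF5 data itself.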
Reproduction setup: /hppfs/work/pr83no/di73yeq4/bug_HFF_volume_output_hdf5_issue_508
Screenshots/Console output
If you suspect a problem in the numerics/physics add a screenshot of your output.
If you encounter any errors/warnings/... during execution please provide the console output.
Additional context
Add any other context about the problem here.