ex3.3_8x7_jID5774166.log
:-) GROMACS - gmx mdrun, 2023.3 (-:
Copyright 1991-2023 The GROMACS Authors.
GROMACS is free software; you can redistribute it and/or modify it
under the terms of the GNU Lesser General Public License
as published by the Free Software Foundation; either version 2.1
of the License, or (at your option) any later version.
Current GROMACS contributors:
Mark Abraham, Andrey Alekseenko, Cathrine Bergh, Christian Blau,
Eliane Briand, Mahesh Doijade, Stefan Fleischmann, Vytas Gapsys,
Gaurav Garg, Sergey Gorelov, Gilles Gouaillardet, Alan Gray,
M. Eric Irrgang, Farzaneh Jalalypour, Joe Jordan, Christoph Junghans,
Prashanth Kanduri, Sebastian Keller, Carsten Kutzner, Justin A. Lemkul,
Magnus Lundborg, Pascal Merz, Vedran Miletic, Dmitry Morozov,
Szilard Pall, Roland Schulz, Michael Shirts, Alexey Shvetsov,
Balint Soproni, David van der Spoel, Philip Turner, Carsten Uphoff,
Alessandra Villa, Sebastian Wingbermuehle, Artem Zhmurov
Previous GROMACS contributors:
Emile Apol, Rossen Apostolov, James Barnett, Herman J.C. Berendsen,
Par Bjelkmar, Viacheslav Bolnykh, Kevin Boyd, Aldert van Buuren,
Carlo Camilloni, Rudi van Drunen, Anton Feenstra, Oliver Fleetwood,
Gerrit Groenhof, Bert de Groot, Anca Hamuraru, Vincent Hindriksen,
Victor Holanda, Aleksei Iupinov, Dimitrios Karkoulis, Peter Kasson,
Sebastian Kehl, Jiri Kraus, Per Larsson, Viveca Lindahl,
Erik Marklund, Pieter Meulenhoff, Teemu Murtola, Sander Pronk,
Alfons Sijbers, Peter Tieleman, Jon Vincent, Teemu Virolainen,
Christian Wennberg, Maarten Wolf
Coordinated by the GROMACS project leaders:
Paul Bauer, Berk Hess, and Erik Lindahl
GROMACS: gmx mdrun, version 2023.3
Executable: /appl/local/csc/soft/chem/GROMACS/2023.3-hipSYCL-GPU/bin/gmx_mpi
Data prefix: /appl/local/csc/soft/chem/GROMACS/2023.3-hipSYCL-GPU
Working dir: /pfs/lustrep4/scratch/project_462000007/rkronber/gromacs/gmx-on-lumi/exercise-3.3/stmv
Process ID: 11135
Command line:
gmx_mpi mdrun -s pme_nvt -nstlist 400 -npme 1 -nb gpu -pme gpu -bonded gpu -update gpu -g ex3.3_8x7_jID5774166 -nsteps -1 -maxh 0.017 -resethway -notunepme
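[Annotation: the invocation above, restated flag by flag for readability. This is a sketch of the same command; the batch launcher that started the 8 MPI ranks is not recorded in the log.]
    # -s pme_nvt        : run input file (pme_nvt.tpr)
    # -nstlist 400      : rebuild the pair list every 400 steps
    # -npme 1           : dedicate 1 MPI rank to long-range PME work
    # -nb/-pme/-bonded/-update gpu : offload nonbonded, PME, bonded forces
    #                     and coordinate update/constraints to the GPU
    # -g ...            : name of this log file
    # -nsteps -1        : no step limit; run until another stop condition applies
    # -maxh 0.017       : stop after ~0.017 h (about one minute) of wall time
    # -resethway        : reset timing counters halfway through, for benchmarking
    # -notunepme        : keep the PME grid and cut-off fixed (no autotuning)
    gmx_mpi mdrun -s pme_nvt -nstlist 400 -npme 1 \
        -nb gpu -pme gpu -bonded gpu -update gpu \
        -g ex3.3_8x7_jID5774166 -nsteps -1 -maxh 0.017 -resethway -notunepme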
GROMACS version: 2023.3
Precision: mixed
Memory model: 64 bit
MPI library: MPI
OpenMP support: enabled (GMX_OPENMP_MAX_THREADS = 128)
GPU support: SYCL (hipSYCL)
NB cluster size: 8 (cluster-pair splitting off)
SIMD instructions: AVX2_256
CPU FFT library: commercial-fftw-3.3.10-sse2-avx-avx2-avx2_128
GPU FFT library: VkFFT internal (1.2.26-b15cb0ca3e884bdb6c901a12d87aa8aadf7637d8) with HIP backend
Multi-GPU FFT: none
RDTSCP usage: enabled
TNG support: enabled
Hwloc support: disabled
Tracing support: disabled
C compiler: /appl/lumi/SW/LUMI-22.08/G/EB/rocm/5.3.3/llvm/bin/clang Clang 15.0.0
C compiler flags: -mavx2 -mfma -Wno-missing-field-initializers -O3 -DNDEBUG
C++ compiler: /appl/lumi/SW/LUMI-22.08/G/EB/rocm/5.3.3/llvm/bin/clang++ Clang 15.0.0
C++ compiler flags: -mavx2 -mfma -Wno-reserved-identifier -Wno-missing-field-initializers -Weverything -Wno-c++98-compat -Wno-c++98-compat-pedantic -Wno-source-uses-openmp -Wno-c++17-extensions -Wno-documentation-unknown-command -Wno-covered-switch-default -Wno-switch-enum -Wno-extra-semi-stmt -Wno-weak-vtables -Wno-shadow -Wno-padded -Wno-reserved-id-macro -Wno-double-promotion -Wno-exit-time-destructors -Wno-global-constructors -Wno-documentation -Wno-format-nonliteral -Wno-used-but-marked-unused -Wno-float-equal -Wno-cuda-compat -Wno-conditional-uninitialized -Wno-conversion -Wno-disabled-macro-expansion -Wno-unused-macros -Wno-unused-parameter -Wno-unused-variable -Wno-newline-eof -Wno-old-style-cast -Wno-zero-as-null-pointer-constant -Wno-unused-but-set-variable -Wno-sign-compare -Wno-unused-result -fopenmp=libomp -O3 -DNDEBUG
BLAS library: External - user-supplied
LAPACK library: External - user-supplied
hipSYCL launcher: /appl/local/csc/soft/chem/hipSYCL/0.9.4-cpeGNU-22.08/lib/cmake/hipSYCL/syclcc-launcher
hipSYCL flags: -Wno-unknown-cuda-version -Wno-unknown-attributes --hipsycl-targets="hip:gfx90a"
hipSYCL GPU flags: -ffast-math;-fgpu-inline-threshold=99999
hipSYCL targets: hip:gfx90a
hipSYCL version: hipSYCL 0.9.4-git
Running on 1 node with total 56 cores, 112 processing units, 1 compatible GPU
Hardware detected on host nid007959 (the node of MPI rank 0):
CPU info:
Vendor: AMD
Brand: AMD EPYC 7A53 64-Core Processor
Family: 25 Model: 48 Stepping: 1
Features: aes amd apic avx avx2 clfsh cmov cx8 cx16 f16c fma htt lahf misalignsse mmx msr nonstop_tsc pcid pclmuldq pdpe1gb popcnt pse rdrnd rdtscp sha sse2 sse3 sse4a sse4.1 sse4.2 ssse3 x2apic
Hardware topology: Basic
Packages, cores, and logical processors:
[indices refer to OS logical processors]
Package 0: [ 1 65] [ 2 66] [ 3 67] [ 4 68] [ 5 69] [ 6 70] [ 7 71] [ 9 73] [ 10 74] [ 11 75] [ 12 76] [ 13 77] [ 14 78] [ 15 79] [ 17 81] [ 18 82] [ 19 83] [ 20 84] [ 21 85] [ 22 86] [ 23 87] [ 25 89] [ 26 90] [ 27 91] [ 28 92] [ 29 93] [ 30 94] [ 31 95] [ 33 97] [ 34 98] [ 35 99] [ 36 100] [ 37 101] [ 38 102] [ 39 103] [ 41 105] [ 42 106] [ 43 107] [ 44 108] [ 45 109] [ 46 110] [ 47 111] [ 49 113] [ 50 114] [ 51 115] [ 52 116] [ 53 117] [ 54 118] [ 55 119] [ 57 121] [ 58 122] [ 59 123] [ 60 124] [ 61 125] [ 62 126] [ 63 127]
CPU limit set by OS: -1 Recommended max number of threads: 112
GPU info:
Number of GPUs detected: 1
#0: name: , architecture 9.0.10, vendor: AMD, device version: 1.2 hipSYCL 0.9.4-git, driver version 50322062, status: compatible
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
M. J. Abraham, T. Murtola, R. Schulz, S. Páll, J. C. Smith, B. Hess, E.
Lindahl
GROMACS: High performance molecular simulations through multi-level
parallelism from laptops to supercomputers
SoftwareX 1 (2015) pp. 19-25
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Páll, M. J. Abraham, C. Kutzner, B. Hess, E. Lindahl
Tackling Exascale Software Challenges in Molecular Dynamics Simulations with
GROMACS
In S. Markidis & E. Laure (Eds.), Solving Software Challenges for Exascale 8759 (2015) pp. 3-27
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R.
Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl
GROMACS 4.5: a high-throughput and highly parallel open source molecular
simulation toolkit
Bioinformatics 29 (2013) pp. 845-54
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and C. Kutzner and D. van der Spoel and E. Lindahl
GROMACS 4: Algorithms for highly efficient, load-balanced, and scalable
molecular simulation
J. Chem. Theory Comput. 4 (2008) pp. 435-447
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
D. van der Spoel, E. Lindahl, B. Hess, G. Groenhof, A. E. Mark and H. J. C.
Berendsen
GROMACS: Fast, Flexible and Free
J. Comp. Chem. 26 (2005) pp. 1701-1719
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
E. Lindahl and B. Hess and D. van der Spoel
GROMACS 3.0: A package for molecular simulation and trajectory analysis
J. Mol. Mod. 7 (2001) pp. 306-317
-------- -------- --- Thank You --- -------- --------
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
H. J. C. Berendsen, D. van der Spoel and R. van Drunen
GROMACS: A message-passing parallel molecular dynamics implementation
Comp. Phys. Comm. 91 (1995) pp. 43-56
-------- -------- --- Thank You --- -------- --------
++++ PLEASE CITE THE DOI FOR THIS VERSION OF GROMACS ++++
https://doi.org/10.5281/zenodo.10017686
-------- -------- --- Thank You --- -------- --------
The number of OpenMP threads was set by environment variable OMP_NUM_THREADS to 7
This run has forced use of 'GPU-aware MPI'. However, GROMACS cannot determine if underlying MPI is GPU-aware. Check the GROMACS install guide for recommendations for GPU-aware support. If you observe failures at runtime, try unsetting the GMX_FORCE_GPU_AWARE_MPI environment variable.
GMX_ENABLE_DIRECT_GPU_COMM environment variable detected, enabling direct GPU communication using GPU-aware MPI.
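[Annotation: taken together, these three messages imply an environment along the following lines. This is a hypothetical batch-script excerpt (the actual job script is not part of the log); the launcher line and rank count are assumptions, the variable values are taken from the messages above.]
    export OMP_NUM_THREADS=7             # 7 OpenMP threads per MPI rank, as reported above
    export GMX_FORCE_GPU_AWARE_MPI=1     # assert GPU-aware MPI (GROMACS cannot verify it itself)
    export GMX_ENABLE_DIRECT_GPU_COMM=1  # enable direct GPU communication between ranks
    # 8 MPI ranks x 7 threads = 56 threads, matching the node's 56 cores
    srun -n 8 gmx_mpi mdrun ...          # launcher assumed; not shown in the log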
Input Parameters:
integrator = md
tinit = 0
dt = 0.002
nsteps = -1
init-step = 0
simulation-part = 1
mts = false
comm-mode = Linear
nstcomm = 1000
bd-fric = 0
ld-seed = -1074020953
emtol = 10
emstep = 0.01
niter = 20
fcstep = 0
nstcgsteep = 1000
nbfgscorr = 10
rtpi = 0.05
nstxout = 0
nstvout = 0
nstfout = 0
nstlog = 0
nstcalcenergy = 1000
nstenergy = 0
nstxout-compressed = 0
compressed-x-precision = 1000
cutoff-scheme = Verlet
nstlist = 10
pbc = xyz
periodic-molecules = false
verlet-buffer-tolerance = 0.005
rlist = 1.2
coulombtype = PME
coulomb-modifier = Potential-shift
rcoulomb-switch = 0
rcoulomb = 1.2
epsilon-r = 1
epsilon-rf = inf
vdw-type = Cut-off
vdw-modifier = Force-switch
rvdw-switch = 1
rvdw = 1.2
DispCorr = No
table-extension = 1
fourierspacing = 0.15
fourier-nx = 160
fourier-ny = 160
fourier-nz = 160
pme-order = 4
ewald-rtol = 1e-05
ewald-rtol-lj = 0.001
lj-pme-comb-rule = Geometric
ewald-geometry = 3d
epsilon-surface = 0
ensemble-temperature-setting = constant
ensemble-temperature = 298
tcoupl = V-rescale
nsttcouple = 10
nh-chain-length = 0
print-nose-hoover-chain-variables = false
pcoupl = No
pcoupltype = Isotropic
nstpcouple = -1
tau-p = 1
compressibility (3x3):
compressibility[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
compressibility[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p (3x3):
ref-p[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
ref-p[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
refcoord-scaling = No
posres-com (3):
posres-com[0]= 0.00000e+00
posres-com[1]= 0.00000e+00
posres-com[2]= 0.00000e+00
posres-comB (3):
posres-comB[0]= 0.00000e+00
posres-comB[1]= 0.00000e+00
posres-comB[2]= 0.00000e+00
QMMM = false
qm-opts:
ngQM = 0
constraint-algorithm = Lincs
continuation = false
Shake-SOR = false
shake-tol = 0.0001
lincs-order = 4
lincs-iter = 1
lincs-warnangle = 30
nwall = 0
wall-type = 9-3
wall-r-linpot = -1
wall-atomtype[0] = -1
wall-atomtype[1] = -1
wall-density[0] = 0
wall-density[1] = 0
wall-ewald-zfac = 3
pull = false
awh = false
rotation = false
interactiveMD = false
disre = No
disre-weighting = Conservative
disre-mixed = false
dr-fc = 1000
dr-tau = 0
nstdisreout = 100
orire-fc = 0
orire-tau = 0
nstorireout = 100
free-energy = no
cos-acceleration = 0
deform (3x3):
deform[ 0]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 1]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
deform[ 2]={ 0.00000e+00, 0.00000e+00, 0.00000e+00}
simulated-tempering = false
swapcoords = no
userint1 = 0
userint2 = 0
userint3 = 0
userint4 = 0
userreal1 = 0
userreal2 = 0
userreal3 = 0
userreal4 = 0
applied-forces:
electric-field:
x:
E0 = 0
omega = 0
t0 = 0
sigma = 0
y:
E0 = 0
omega = 0
t0 = 0
sigma = 0
z:
E0 = 0
omega = 0
t0 = 0
sigma = 0
density-guided-simulation:
active = false
group = protein
similarity-measure = inner-product
atom-spreading-weight = unity
force-constant = 1e+09
gaussian-transform-spreading-width = 0.2
gaussian-transform-spreading-range-in-multiples-of-width = 4
reference-density-filename = reference.mrc
nst = 1
normalize-densities = true
adaptive-force-scaling = false
adaptive-force-scaling-time-constant = 4
shift-vector =
transformation-matrix =
qmmm-cp2k:
active = false
qmgroup = System
qmmethod = PBE
qmfilenames =
qmcharge = 0
qmmultiplicity = 1
grpopts:
nrdf: 2.22246e+06
ref-t: 298
tau-t: 1
annealing: No
annealing-npoints: 0
acc: 0 0 0
nfreeze: N N N
energygrp-flags[ 0]: 0
The -nsteps functionality is deprecated, and may be removed in a future version. Consider using gmx convert-tpr -nsteps or changing the appropriate .mdp file field.
Overriding nsteps with value passed on the command line: -1 steps
Changing nstlist from 10 to 400, rlist from 1.2 to 2.158
Initializing Domain Decomposition on 8 ranks
Dynamic load balancing: auto
Using update groups, nr 389067, average size 2.7 atoms, max. radius 0.139 nm
Minimum cell size due to atom displacement: 2.438 nm
Initial maximum distances in bonded interactions:
two-body bonded interactions: 0.442 nm, LJ-14, atoms 106625 106633
multi-body bonded interactions: 0.442 nm, Proper Dih., atoms 106625 106633
Minimum cell size due to bonded interactions: 0.486 nm
Disabling dynamic load balancing; unsupported with GPU communication + update.
Using 1 separate PME ranks, as requested with -npme option
Optimizing the DD grid for 7 cells with a minimum initial size of 2.438 nm
The maximum allowed number of cells is: X 8 Y 8 Z 8
Domain decomposition grid 7 x 1 x 1, separate PME ranks 1
PME domain decomposition: 1 x 1 x 1
Interleaving PP and PME ranks
This rank does only particle-particle work.
Domain decomposition rank 0, coordinates 0 0 0
The initial number of communication pulses is: X 1
The initial domain decomposition cell size is: X 3.10 nm
The maximum allowed distance for atom groups involved in interactions is:
non-bonded interactions 2.436 nm
two-body bonded interactions (-rdd) 2.436 nm
multi-body bonded interactions (-rdd) 2.436 nm
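[Annotation: the grid choice is consistent with the numbers reported above; seven cells of 3.10 nm tile the box edge, and the 2.438 nm minimum cell size caps the cell count per dimension:]
    7 \times 3.10\,\mathrm{nm} \approx 21.7\,\mathrm{nm}\ \text{(box edge)},\qquad
    \lfloor 21.7 / 2.438 \rfloor = 8\ \text{(maximum cells per dimension)}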
NOTE: SYCL GPU support in GROMACS, and the compilers, libraries,
and drivers that it depends on are fairly new.
Please, pay extra attention to the correctness of your results,
and update to the latest GROMACS patch version if warranted.
On host nid007959 1 GPU selected for this run.
Mapping of GPU IDs to the 8 GPU tasks in the 8 ranks on this node:
PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PP:0,PME:0
PP tasks will do (non-perturbed) short-ranged and most bonded interactions on the GPU
PP task will update and constrain coordinates on the GPU
PME tasks will do all aspects on the GPU
GPU direct communication will be used between MPI ranks.
Using 8 MPI processes
Non-default thread affinity set, disabling internal thread affinity
Using 7 OpenMP threads per MPI process
System total charge: 0.000
Will do PME sum in reciprocal space for electrostatic interactions.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
U. Essmann, L. Perera, M. L. Berkowitz, T. Darden, H. Lee and L. G. Pedersen
A smooth particle mesh Ewald method
J. Chem. Phys. 103 (1995) pp. 8577-8592
-------- -------- --- Thank You --- -------- --------
Using a Gaussian width (1/beta) of 0.384195 nm for Ewald
Potential shift: LJ r^-12: -2.648e-01 r^-6: -5.349e-01, Ewald -8.333e-06
Initialized non-bonded Coulomb Ewald tables, spacing: 1.02e-03 size: 1176
Generated table with 1579 data points for 1-4 COUL.
Tabscale = 500 points/nm
Generated table with 1579 data points for 1-4 LJ6.
Tabscale = 500 points/nm
Generated table with 1579 data points for 1-4 LJ12.
Tabscale = 500 points/nm
Using GPU 8x8 nonbonded short-range kernels
Using a dual 8x8 pair-list setup updated with dynamic, rolling pruning:
outer list: updated every 400 steps, buffer 0.958 nm, rlist 2.158 nm
inner list: updated every 12 steps, buffer 0.002 nm, rlist 1.202 nm
At tolerance 0.005 kJ/mol/ps per atom, equivalent classical 1x1 list would be:
outer list: updated every 400 steps, buffer 1.311 nm, rlist 2.511 nm
inner list: updated every 12 steps, buffer 0.051 nm, rlist 1.251 nm
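[Annotation: the outer and inner list radii are simply the 1.2 nm interaction cut-off (rcoulomb = rvdw = 1.2 above) plus the respective buffer:]
    r_\mathrm{list} = r_\mathrm{cut} + b:\qquad
    1.2 + 0.958 = 2.158\ \mathrm{nm}\ \text{(outer)},\qquad
    1.2 + 0.002 = 1.202\ \mathrm{nm}\ \text{(inner)}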
The non-bonded pair calculation algorithm tolerates a few missing pair interactions close to the cut-off. This can lead to a systematic overestimation of the pressure due to missing LJ interactions. The error in the average pressure due to missing LJ interactions is at most 2.83 bar.
The pressure error can be controlled by setting the environment variable GMX_VERLET_BUFFER_PRESSURE_TOLERANCE to the allowed error in units of bar.
Removing pbc first time
Linking all bonded interactions to atoms
Initializing LINear Constraint Solver
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
B. Hess and H. Bekker and H. J. C. Berendsen and J. G. E. M. Fraaije
LINCS: A Linear Constraint Solver for molecular simulations
J. Comp. Chem. 18 (1997) pp. 1463-1472
-------- -------- --- Thank You --- -------- --------
The number of constraints is 77851
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
S. Miyamoto and P. A. Kollman
SETTLE: An Analytical Version of the SHAKE and RATTLE Algorithms for Rigid
Water Models
J. Comp. Chem. 13 (1992) pp. 952-962
-------- -------- --- Thank You --- -------- --------
Intra-simulation communication will occur every 10 steps.
++++ PLEASE READ AND CITE THE FOLLOWING REFERENCE ++++
G. Bussi, D. Donadio and M. Parrinello
Canonical sampling through velocity rescaling
J. Chem. Phys. 126 (2007) pp. 014101
-------- -------- --- Thank You --- -------- --------
There are: 1066628 Atoms
Atom distribution over 7 domains: av 152375 stddev 702 min 151529 max 153250
Updating coordinates and applying constraints on the GPU.
Constraining the starting coordinates (step 0)
Constraining the coordinates at t0-dt (step 0)
Center of mass motion removal mode is Linear
We have the following groups for center of mass motion removal:
0: rest
RMS relative constraint deviation after constraining: 4.29e-06
Initial temperature: 13.2162 K
Started mdrun on rank 0 Fri Jan 19 11:42:40 2024
The -resethway functionality is deprecated, and may be removed in a future version.
Step Time
0 0.00000
Energies (kJ/mol)
Bond U-B Proper Dih. Improper Dih. LJ-14
1.24978e+05 3.68292e+05 3.37657e+05 1.81847e+04 1.68965e+05
Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential
1.53250e+06 1.45101e+06 -1.89764e+07 5.94810e+04 -1.49153e+07
Kinetic En. Total Energy Conserved En. Temperature Pressure (bar)
2.66206e+05 -1.46491e+07 -1.46491e+07 2.88124e+01 7.55828e+02
Constr. rmsd
4.28526e-06
DD step 399 load imb.: force 1.1% pme mesh/force 0.932
step 14111: resetting all time and cycle counters
Restarted time on rank 0 Fri Jan 19 11:43:11 2024
Step 28400: Run time exceeded 0.017 hours, will terminate the run within 400 steps
Step Time
28800 57.60000
Writing checkpoint, step 28800 at Fri Jan 19 11:43:42 2024
Energies (kJ/mol)
Bond U-B Proper Dih. Improper Dih. LJ-14
1.61900e+05 4.46917e+05 3.43307e+05 2.14263e+04 1.71681e+05
Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential
1.52998e+06 1.09213e+06 -1.83233e+07 6.16924e+04 -1.44942e+07
Kinetic En. Total Energy Conserved En. Temperature Pressure (bar)
2.75492e+06 -1.17393e+07 -1.46459e+07 2.98174e+02 4.96762e+02
Constr. rmsd
4.28526e-06
Energy conservation over simulation part #1 of length 57.6 ps, time 0 to 57.6 ps
Conserved energy drift: 5.26e-05 kJ/mol/ps per atom
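[Annotation: this drift figure can be reproduced from the conserved-energy values and the system size reported above; the small discrepancy comes from the rounding of the printed energies:]
    \frac{|\Delta E_\mathrm{conserved}|}{N\,t}
    = \frac{(1.46491 - 1.46459)\times 10^{7}\ \mathrm{kJ/mol}}{1066628 \times 57.6\ \mathrm{ps}}
    \approx 5.2\times 10^{-5}\ \mathrm{kJ\,mol^{-1}\,ps^{-1}}\ \text{per atom}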
<====== ############### ==>
<==== A V E R A G E S ====>
<== ############### ======>
Statistics over 28801 steps using 29 frames
Energies (kJ/mol)
Bond U-B Proper Dih. Improper Dih. LJ-14
1.59673e+05 4.42911e+05 3.43310e+05 2.12855e+04 1.72043e+05
Coulomb-14 LJ (SR) Coulomb (SR) Coul. recip. Potential
1.53114e+06 1.12199e+06 -1.84308e+07 6.11902e+04 -1.45772e+07
Kinetic En. Total Energy Conserved En. Temperature Pressure (bar)
2.63511e+06 -1.19421e+07 -1.46490e+07 2.85207e+02 4.83152e+02
Constr. rmsd
0.00000e+00
Total Virial (kJ/mol)
7.29345e+05 1.02817e+02 -2.04594e+03
1.02343e+02 7.31125e+05 -2.40997e+02
-2.04628e+03 -2.41335e+02 7.29707e+05
Pressure (bar)
4.85761e+02 -8.19134e-01 7.34802e+00
-8.17590e-01 4.79699e+02 4.58728e-01
7.34914e+00 4.59827e-01 4.83997e+02
M E G A - F L O P S A C C O U N T I N G
NB=Group-cutoff nonbonded kernels NxN=N-by-N cluster Verlet kernels
RF=Reaction-Field VdW=Van der Waals QSTab=quadratic-spline table
W3=SPC/TIP3p W4=TIP4p (single or pairs)
V&F=Potential and force V=Potential only F=Force only
Computing: M-Number M-Flops % Flops
-----------------------------------------------------------------------------
Pair Search distance check 14718.613760 132467.524 0.0
NxN QSTab Elec. + LJ [F] 64419044.772096 3414209372.921 99.8
NxN QSTab Elec. + LJ [V&F] 65846.895040 5333598.498 0.2
Reset In Box 39.465236 118.396 0.0
CG-CoM 39.465236 118.396 0.0
Virial 16.004145 288.075 0.0
Stop-CM 14.932792 149.328 0.0
Calc-Ekin 3133.753064 84611.333 0.0
-----------------------------------------------------------------------------
Total 3419760724.470 100.0
-----------------------------------------------------------------------------
D O M A I N D E C O M P O S I T I O N S T A T I S T I C S
av. #atoms communicated per step for force: 2 x 790718.3
R E A L C Y C L E A N D T I M E A C C O U N T I N G
On 7 MPI ranks doing PP, each using 7 OpenMP threads, and
on 1 MPI rank doing PME, using 7 OpenMP threads
Activity: Num Num Call Wall time Giga-Cycles
Ranks Threads Count (s) total sum %
--------------------------------------------------------------------------------
Domain decomp. 7 7 74 1.000 97.764 2.7
Send X to PME 7 7 14690 2.261 220.984 6.2
Neighbor search 7 7 37 0.726 70.940 2.0
Launch PP GPU ops. 7 7 58686 0.962 94.020 2.6
Comm. coord. 7 7 14653 14.230 1390.898 39.1
Force 7 7 14690 0.019 1.825 0.1
Wait + Comm. F 7 7 14690 7.359 719.295 20.2
PME GPU mesh * 1 7 14690 8.067 112.638 3.2
PME wait for PP * 23.801 332.344 9.3
Wait + Recv. PME F 7 7 14690 2.774 271.125 7.6
Wait Bonded GPU 7 7 15 0.000 0.000 0.0
Wait GPU NB nonloc. 7 7 14690 0.041 3.960 0.1
Wait GPU NB local 7 7 14690 0.002 0.161 0.0
Wait GPU state copy 7 7 3047 0.771 75.324 2.1
NB X/F buffer ops. 7 7 30 0.005 0.447 0.0
Write traj. 7 7 1 0.118 11.577 0.3
Comm. energies 7 7 1469 0.733 71.695 2.0
Rest 0.847 82.778 2.3
--------------------------------------------------------------------------------
Total 31.846 3557.478 100.0
--------------------------------------------------------------------------------
(*) Note that with separate PME ranks, the walltime column actually sums to
twice the total reported, but the cycle count total and % are correct.
--------------------------------------------------------------------------------
Breakdown of PME mesh activities
--------------------------------------------------------------------------------
Wait PME GPU gather 1 7 14690 0.033 0.459 0.0
Launch PME GPU ops. 1 7 220350 0.704 9.835 0.3
Wait PME Recv. PP X 1 7 102830 7.311 102.090 2.9
--------------------------------------------------------------------------------
Core t (s) Wall t (s) (%)
Time: 1782.052 31.846 5595.8
(ns/day) (hour/ns)
Performance: 79.709 0.301
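[Annotation: the performance figures follow from the timed, post-reset part of the run: 28801 - 14111 = 14690 steps of 2 fs were measured over 31.846 s of wall time, and the core-time percentage reflects the 8 x 7 = 56 threads in use:]
    14690 \times 0.002\ \mathrm{ps} = 29.38\ \mathrm{ps};\qquad
    \frac{0.02938\ \mathrm{ns}}{31.846\ \mathrm{s}} \times 86400\ \mathrm{s/day} \approx 79.7\ \mathrm{ns/day};\qquad
    1782.052 / 31.846 \approx 56\ \text{cores} \Rightarrow 5595.8\%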
Finished mdrun on rank 0 Fri Jan 19 11:43:42 2024