I have copied the cavity tutorial case to test cluster parallel computing:
OpenFOAM/OpenFOAM-8/tutorials/incompressible/icoFoam/cavity/cavity
After adding the machines and system/decomposeParDict files, the test folder looks like this:
$ ls
0 Allclean constant machines system test.sh
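For reference, the machines file is a plain Open MPI hostfile listing the node names. A minimal sketch; the host names come from the log below, and the slot counts are my assumption (rank 4 failing on node2 suggests four slots per node):

# machines -- Open MPI hostfile; host names from the log, slot counts assumed
dyfluid slots=4
node2 slots=4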
Passwordless SSH between the nodes is set up, and the SSH login and hosts files are configured correctly.
Parallel computing runs normally on any single machine, but as soon as I use more than one node, mpirun reports an error: the icoFoam executable cannot be found on the other nodes, even though the file exists there and runs normally.
Maybe it is an environment-variable problem in Open MPI, but I don't know what to change so that the cluster runs in parallel.
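For what it is worth, I suspect something along these lines may be needed; --prefix and -x are standard Open MPI options for forwarding the install path and environment variables to the remote nodes, but whether they apply here is an assumption on my part (MPI_ARCH_PATH is normally set by the OpenFOAM environment):

$ mpirun --prefix $MPI_ARCH_PATH --hostfile machines -np 8 \
      -x PATH -x LD_LIBRARY_PATH icoFoam -parallel > log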
Here is the output log; I hope someone can help me.
$ cat test.sh
blockMesh && decomposePar && mpirun --hostfile machines -np 8 icoFoam -parallel > log
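The system/decomposeParDict is consistent with the decomposition reported in the log below (8 subdomains, scotch method); a minimal sketch, with the standard FoamFile header assumed:

// system/decomposeParDict -- sketch matching the log (8 subdomains, scotch)
FoamFile
{
    version     2.0;
    format      ascii;
    class       dictionary;
    object      decomposeParDict;
}

numberOfSubdomains  8;

method              scotch;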
$ sh test.sh
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  8
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
Build : 8-3d62498be310
Exec : blockMesh
Date : May 17 2021
Time : 11:50:01
Host : "dyfluid"
PID : 191394
I/O : uncollated
Case : /home/dyfluid/test/test2/cavity
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
Creating block mesh from
"system/blockMeshDict"
Creating block edges
No non-planar block faces defined
Creating topology blocks
Creating topology patches
Creating block mesh topology
Check topology
Basic statistics
Number of internal faces : 0
Number of boundary faces : 6
Number of defined boundary faces : 6
Number of undefined boundary faces : 0
Checking patch -> block consistency
Creating block offsets
Creating merge list .
Creating polyMesh from blockMesh
Creating patches
Creating cells
Creating points with scale 0.1
Block 0 cell size :
i : 0.005 .. 0.005
j : 0.005 .. 0.005
k : 0.01
Writing polyMesh
----------------
Mesh Information
----------------
boundingBox: (0 0 0) (0.1 0.1 0.01)
nPoints: 882
nCells: 400
nFaces: 1640
nInternalFaces: 760
----------------
Patches
----------------
patch 0 (start: 760 size: 20) name: movingWall
patch 1 (start: 780 size: 60) name: fixedWalls
patch 2 (start: 840 size: 800) name: frontAndBack
End
/*---------------------------------------------------------------------------*\
  =========                 |
  \\      /  F ield         | OpenFOAM: The Open Source CFD Toolbox
   \\    /   O peration     | Website:  https://openfoam.org
    \\  /    A nd           | Version:  8
     \\/     M anipulation  |
\*---------------------------------------------------------------------------*/
Build : 8-3d62498be310
Exec : decomposePar
Date : May 17 2021
Time : 11:50:01
Host : "dyfluid"
PID : 191395
I/O : uncollated
Case : /home/dyfluid/test/test2/cavity
nProcs : 1
sigFpe : Enabling floating point exception trapping (FOAM_SIGFPE).
fileModificationChecking : Monitoring run-time modified files using timeStampMaster (fileModificationSkew 10)
allowSystemOperations : Allowing user-supplied system call operations
// * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * //
Create time
Decomposing mesh region0
Create mesh
Calculating distribution of cells
Selecting decompositionMethod scotch
Finished decomposition in 0 s
Calculating original mesh data
Distributing cells to processors
Distributing faces to processors
Distributing points to processors
Constructing processor meshes
Processor 0
Number of cells = 50
Number of faces shared with processor 1 = 10
Number of faces shared with processor 3 = 5
Number of faces shared with processor 6 = 5
Number of faces shared with processor 7 = 5
Number of processor patches = 4
Number of processor faces = 25
Number of boundary faces = 105
Processor 1
Number of cells = 50
Number of faces shared with processor 0 = 10
Number of faces shared with processor 2 = 5
Number of processor patches = 2
Number of processor faces = 15
Number of boundary faces = 115
Processor 2
Number of cells = 50
Number of faces shared with processor 1 = 5
Number of faces shared with processor 3 = 10
Number of processor patches = 2
Number of processor faces = 15
Number of boundary faces = 115
Processor 3
Number of cells = 50
Number of faces shared with processor 0 = 5
Number of faces shared with processor 2 = 10
Number of faces shared with processor 4 = 10
Number of processor patches = 3
Number of processor faces = 25
Number of boundary faces = 105
Processor 4
Number of cells = 50
Number of faces shared with processor 3 = 10
Number of faces shared with processor 5 = 10
Number of faces shared with processor 7 = 5
Number of processor patches = 3
Number of processor faces = 25
Number of boundary faces = 105
Processor 5
Number of cells = 50
Number of faces shared with processor 4 = 10
Number of faces shared with processor 7 = 5
Number of processor patches = 2
Number of processor faces = 15
Number of boundary faces = 115
Processor 6
Number of cells = 50
Number of faces shared with processor 0 = 5
Number of faces shared with processor 7 = 10
Number of processor patches = 2
Number of processor faces = 15
Number of boundary faces = 115
Processor 7
Number of cells = 50
Number of faces shared with processor 0 = 5
Number of faces shared with processor 4 = 5
Number of faces shared with processor 5 = 5
Number of faces shared with processor 6 = 10
Number of processor patches = 4
Number of processor faces = 25
Number of boundary faces = 105
Number of processor faces = 80
Max number of cells = 50 (0% above average 50)
Max number of processor patches = 4 (45.4545% above average 2.75)
Max number of faces between processors = 25 (25% above average 20)
Time = 0
Processor 0: field transfer
Processor 1: field transfer
Processor 2: field transfer
Processor 3: field transfer
Processor 4: field transfer
Processor 5: field transfer
Processor 6: field transfer
Processor 7: field transfer
End
--------------------------------------------------------------------------
mpirun was unable to find the specified executable file, and therefore
did not launch the job. This error was first reported for process
rank 4; it may have occurred for other processes as well.
NOTE: A common cause for this error is misspelling a mpirun command
line parameter option (remember that mpirun interprets the first
unrecognized command line token as the executable).
Node: node2
Executable: /home/dyfluid/OpenFOAM/OpenFOAM-8/platforms/linux64GccDPInt32Opt/bin/icoFoam
--------------------------------------------------------------------------
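For reference, the binary can be checked directly over SSH; node2 and the path are taken from the error message above:

$ ssh node2 ls -l /home/dyfluid/OpenFOAM/OpenFOAM-8/platforms/linux64GccDPInt32Opt/bin/icoFoam
$ ssh node2 'echo $PATH'    # a non-interactive shell may not source the OpenFOAM environment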