This test passed in Open MPI 1.10.0 but is now failing on v2.x and master. Is there a chance we just do not care? I cannot remember the discussions we had around this test.
[rvandevaart@drossetti-ivy4 dynamic]$ mpirun --mca btl self,sm,tcp -np 2 intercomm_create
b: MPI_Intercomm_create( intra, 0, intra, MPI_COMM_NULL, 201, &inter) [rank 2]
b: MPI_Intercomm_create( intra, 0, intra, MPI_COMM_NULL, 201, &inter) [rank 3]
c: MPI_Intercomm_create( MPI_COMM_WORLD, 0, intra, 0, 201, &inter) [rank 3]
a: MPI_Intercomm_create( ab_intra, 0, ac_intra, 2, 201, &inter) (0)
a: MPI_Intercomm_create( ab_intra, 0, ac_intra, 2, 201, &inter) (0)
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.
Process 1 ([[49915,2],1]) is on host: drossetti-ivy4
Process 2 ([[49915,3],0]) is on host: drossetti-ivy4
BTLs attempted: tcp self sm
Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
b: intercomm_create (0)
b: barrier on inter-comm - before
c: MPI_Intercomm_create( MPI_COMM_WORLD, 0, intra, 0, 201, &inter) [rank 2]
[drossetti-ivy4.nvidia.com:28877] [[49915,2],1] ORTE_ERROR_LOG: Unreachable in file ../../ompi/communicator/comm.c at line 1891
[drossetti-ivy4.nvidia.com:28877] 3: Error in ompi_get_rprocs
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.
Process 1 ([[49915,3],0]) is on host: drossetti-ivy4
Process 2 ([[49915,2],0]) is on host: drossetti-ivy4
BTLs attempted: tcp self sm
Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
c: intercomm_create (0)
c: barrier on inter-comm - before
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.
Process 1 ([[49915,3],1]) is on host: drossetti-ivy4
Process 2 ([[49915,2],0]) is on host: drossetti-ivy4
BTLs attempted: tcp self sm
Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
c: intercomm_create (0)
c: barrier on inter-comm - before
--------------------------------------------------------------------------
At least one pair of MPI processes are unable to reach each other for
MPI communications. This means that no Open MPI device has indicated
that it can be used to communicate between these processes. This is
an error; Open MPI requires that all MPI processes be able to reach
each other. This error can sometimes be the result of forgetting to
specify the "self" BTL.
Process 1 ([[49915,2],0]) is on host: drossetti-ivy4
Process 2 ([[49915,3],0]) is on host: drossetti-ivy4
BTLs attempted: tcp self sm
Your MPI job is now going to abort; sorry.
--------------------------------------------------------------------------
b: intercomm_create (0)
b: barrier on inter-comm - before
[drossetti-ivy4.nvidia.com:28876] [[49915,2],0] ORTE_ERROR_LOG: Unreachable in file ../../ompi/communicator/comm.c at line 1891
[drossetti-ivy4.nvidia.com:28876] 2: Error in ompi_get_rprocs
[drossetti-ivy4.nvidia.com:28882] [[49915,3],0] ORTE_ERROR_LOG: Unreachable in file ../../ompi/communicator/comm.c at line 1891
[drossetti-ivy4.nvidia.com:28882] 0: Error in ompi_get_rprocs
[drossetti-ivy4.nvidia.com:28883] [[49915,3],1] ORTE_ERROR_LOG: Unreachable in file ../../ompi/communicator/comm.c at line 1891
[drossetti-ivy4.nvidia.com:28883] 1: Error in ompi_get_rprocs
[drossetti-ivy4:28877] *** An error occurred in MPI_Barrier
[drossetti-ivy4:28877] *** reported by process [140057859784706,1]
[drossetti-ivy4:28877] *** on communicator MPI_COMM_WORLD
[drossetti-ivy4:28877] *** MPI_ERR_COMM: invalid communicator
[drossetti-ivy4:28877] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[drossetti-ivy4:28877] *** and potentially your MPI job)
[drossetti-ivy4:28882] *** An error occurred in MPI_Barrier
[drossetti-ivy4:28882] *** reported by process [140049269850115,0]
[drossetti-ivy4:28882] *** on communicator MPI_COMM_WORLD
[drossetti-ivy4:28882] *** MPI_ERR_COMM: invalid communicator
[drossetti-ivy4:28882] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[drossetti-ivy4:28882] *** and potentially your MPI job)
[drossetti-ivy4:28883] *** An error occurred in MPI_Barrier
[drossetti-ivy4:28883] *** reported by process [139697082531843,1]
[drossetti-ivy4:28883] *** on communicator MPI_COMM_WORLD
[drossetti-ivy4:28883] *** MPI_ERR_COMM: invalid communicator
[drossetti-ivy4:28883] *** MPI_ERRORS_ARE_FATAL (processes in this communicator will now abort,
[drossetti-ivy4:28883] *** and potentially your MPI job)
[rvandevaart@drossetti-ivy4 dynamic]$
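For context, below is a minimal sketch of the pattern the test appears to exercise, reconstructed from the log above; it is not the actual ompi-tests source. A parent job ("a") spawns two child jobs ("b" and "c"), merges each spawn intercomm into an intracomm, and then stitches b and c together with MPI_Intercomm_create, using the merged a+c intracomm as the bridge. The binary names, process counts, and the argv-based role switch are assumptions; the tag 201 and the leader ranks mirror the log.

```c
/* Hedged sketch only -- NOT the ompi-tests "intercomm_create" source. */
#include <mpi.h>
#include <string.h>

int main(int argc, char **argv)
{
    MPI_Comm parent, inter;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* Job "a": spawn b and c, merge each spawn intercomm (parent low). */
        MPI_Comm ab_inter, ac_inter, ab_intra, ac_intra;
        char *args_b[] = { "b", NULL }, *args_c[] = { "c", NULL };
        MPI_Comm_spawn(argv[0], args_b, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &ab_inter, MPI_ERRCODES_IGNORE);
        MPI_Comm_spawn(argv[0], args_c, 2, MPI_INFO_NULL, 0,
                       MPI_COMM_WORLD, &ac_inter, MPI_ERRCODES_IGNORE);
        MPI_Intercomm_merge(ab_inter, 0, &ab_intra);  /* a = ranks 0,1; b = 2,3 */
        MPI_Intercomm_merge(ac_inter, 0, &ac_intra);  /* a = ranks 0,1; c = 2,3 */

        /* a+b side: local group is ab_intra, the bridge is ac_intra, and the
         * remote leader is the first c process (rank 2 in ac_intra). */
        MPI_Intercomm_create(ab_intra, 0, ac_intra, 2, 201, &inter);
    } else if (strcmp(argv[1], "b") == 0) {
        /* Job "b": merge with the parent (children high, so b = ranks 2,3);
         * non-leaders may pass MPI_COMM_NULL as the bridge communicator. */
        MPI_Comm ab_intra;
        MPI_Intercomm_merge(parent, 1, &ab_intra);
        MPI_Intercomm_create(ab_intra, 0, MPI_COMM_NULL, 0, 201, &inter);
    } else {
        /* Job "c": its own MPI_COMM_WORLD is the local group; the merged a+c
         * intracomm is the bridge, with a's rank 0 as the remote leader. */
        MPI_Comm ac_intra;
        MPI_Intercomm_merge(parent, 1, &ac_intra);
        MPI_Intercomm_create(MPI_COMM_WORLD, 0, ac_intra, 0, 201, &inter);
    }

    /* In the log, MPI_Intercomm_create itself hits "Unreachable" in
     * ompi_get_rprocs on the b and c processes, so this first collective over
     * the new b<->c inter-communicator then aborts with MPI_ERR_COMM. */
    MPI_Barrier(inter);

    MPI_Comm_free(&inter);
    MPI_Finalize();
    return 0;
}
```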