
--mca flag does not activate the btl framework with the specified components (TCP/IB) #5808

@abeltre1

Description

Background information

  1. Installed Open MPI 3.1.2 after installing Mellanox OFED (a configure sketch follows below)
  2. Tested with mpirun and a large number of --mca parameters
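For reference, a minimal sketch of the kind of configure invocation this setup implies; the install prefix and make parallelism are assumptions, not taken from the report, while --with-verbs is the standard switch for building the openib BTL against an OFED stack:

```shell
# Sketch: building the 3.1.2 tarball against the Mellanox OFED verbs stack.
# The prefix is hypothetical; --with-verbs enables the openib BTL.
./configure --prefix=/opt/openmpi-3.1.2 --with-verbs
make -j8 all && make install
```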

What version of Open MPI are you using? (e.g., v1.10.3, v2.1.0, git branch name and hash, etc.)

  1. v3.1.2
  2. MLNX_OFED_LINUX-4.4-2.0.7.0-rhel7.5-x86_64
  3. CentOS 7.5

Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

  1. source/distribution tarball

Please describe the system on which you are running

  • Operating system/version: CentOS 7.5
  • Computer hardware: Intel Xeon
  • Network type: InfiniBand

Details of the problem

When I run the OSU all-to-allv benchmark and select a BTL component explicitly with --mca, TCP and openib produce essentially the same latencies:

```shell
shell$ mpirun --mca mtl_mxm_np 0 --mca btl tcp -np 48 --map-by node --hostfile hostfile ./osu_alltoallv
# OSU MPI All-to-Allv Personalized Exchange Latency Test v3.8
# Size       Avg Latency(us)
1                     357.03
2                     357.91
4                     351.81
8                     352.57
16                    352.21
32                    350.65
64                    355.97
128                   369.26
256                   383.22
512                   398.14
1024                  425.57
2048                  485.46
4096                  680.91
8192                 1204.47
16384                2094.76
32768                3836.11
65536                8636.47
131072              17042.69
262144              31665.86
524288              63652.95
1048576            124922.32
```

```shell
shell$ mpirun --mca mtl_mxm_np 0 --mca btl openib -np 48 --map-by node --hostfile hostfile ./osu_alltoallv
# OSU MPI All-to-Allv Personalized Exchange Latency Test v3.8
# Size       Avg Latency(us)
1                     351.64
2                     357.78
4                     354.88
8                     351.91
16                    353.21
32                    348.08
64                    356.01
128                   368.82
256                   383.04
512                   396.46
1024                  424.53
2048                  484.94
4096                  681.93
8192                 1211.97
16384                2104.20
32768                3852.04
65536                8563.86
131072              16901.64
262144              31549.93
524288              62248.82
1048576            124405.48
```

This is a latency test, so I expect each individual BTL component to yield a distinct latency profile at each message size (with openib well below TCP). Instead the two runs are nearly identical, which suggests the --mca btl selection is not taking effect.
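A hedged way to narrow this down (the component names assume a stock 3.1.x verbs build): on Mellanox systems Open MPI may select the cm PML with the mxm MTL, in which case the BTL framework is bypassed entirely and --mca btl is silently ignored. Forcing the ob1 PML makes the BTL selection binding, and BTL verbosity logs which component actually carries the traffic:

```shell
# Sketch: force the ob1 PML so the BTL framework is actually in play,
# then raise BTL verbosity to see which component handles the traffic.
# (self/vader/openib are the usual defaults for a 3.1.x verbs build.)
mpirun --mca pml ob1 --mca btl self,vader,openib --mca btl_base_verbose 100 \
       -np 48 --map-by node --hostfile hostfile ./osu_alltoallv

# Confirm which BTL components this installation was built with:
ompi_info | grep "MCA btl"
```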
