
Conversation

@jsquyres
Member

This is an addendum to Nathan's original commit (d2f5fca).

Output now looks like this:

Open MPI configuration:
-----------------------
Version: 3.0.0a1
Build Open Platform Abstraction project: yes
Build Open Runtime project: yes
Build Open MPI project: yes
Build Open SHMEM project: no
MPI C++ bindings (deprecated): no
MPI Fortran bindings: mpif.h, use mpi, use mpi_f08
MPI Java bindings (experimental): yes
Debug build: yes
Platform file: (none)

Transports
-----------------------
Cray uGNI (Gemini/Aries): no
Intel Omnipath (PSM2): no
Intel SCIF: no
Mellanox MXM: no
Open UCX: no
OpenFabrics Libfabric: yes
OpenFabrics Verbs: yes
Portals v4: no
QLogic Infinipath (PSM): no
Shared memory/Linux CMA: no
Shared Memory/Linux KNEM: no
Shared Memory/XPMEM: no
TCP: yes

Resource Managers
-----------------------
Cray Alps: no
Grid Engine: no
LSF: no
Slurm: yes
Torque: no

*****************************************************************************
 THIS IS A DEBUG BUILD!  DO NOT USE THIS BUILD FOR PERFORMANCE MEASUREMENTS!
*****************************************************************************

@hjelmn Opinions?

@jsquyres jsquyres added this to the v2.1.0 milestone Mar 10, 2016
@jsquyres
Member Author

@sjeaugey Do you want to add something in here about CUDA?

@hjelmn
Member

hjelmn commented Mar 10, 2016

Looks good to me.

@bosilca
Member

bosilca commented Mar 10, 2016

You might want to add
OMPI_SUMMARY_ADD([[Transports]],[[CUDA support for IB and shared memory]],[$1],[$opal_check_cuda_happy])

in opal_check_cuda.m4 before the AM_CONDITIONAL([OPAL_cuda_support],...
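In context, the suggested line would sit in opal_check_cuda.m4 roughly like this (a sketch of the placement only, not the exact upstream code; the AM_CONDITIONAL arguments are elided as in the comment above):

```m4
dnl Sketch: placement of the suggested summary entry in opal_check_cuda.m4
OMPI_SUMMARY_ADD([[Transports]],
                 [[CUDA support for IB and shared memory]],
                 [$1], [$opal_check_cuda_happy])
dnl ...immediately before the existing conditional:
AM_CONDITIONAL([OPAL_cuda_support], ...)
```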

@sjeaugey
Member

CUDA could be in a separate "Others" section because it applies to many parts of the code. But that would make the summary bigger, and it is already big, so I think I'm OK with George's suggestion.

Other than the CUDA part, I would have a few comments on the original patch:

  • I think users are not interested in knowing that we build ORTE and OPAL. In fact, don't we always build them anyway?
  • Portals v4 should be Portals4
  • Shared [M|m]emory: maybe it would be good to merge those lines, like: "Shared Memory: SYSV MMAP CMA XPMEM KNEM" (provided we can have more than one of CMA/XPMEM/KNEM). I think SYSV or MMAP (or something else) should be mentioned, since the user could think that shared memory support is not built; even if it is always built, they may want to know which mechanism we use.

@jsquyres jsquyres force-pushed the pr/updates-to-config-summary branch from 5b42519 to d5919d7 Compare March 10, 2016 18:16
@jsquyres
Member Author

Ok, good points:

  • fixed shared [Mm]emory
  • fixed portals4
  • it's not immediately easy to put all the shared memory entries on a single line
  • added rsh/ssh
  • added shared memory/copy in+copy out
  • removed OPAL/ORTE
  • listed all MPI bindings
  • still don't list CUDA
Open MPI configuration:
-----------------------
Version: 3.0.0a1
Build MPI C bindings: yes
Build MPI C++ bindings (deprecated): no
Build MPI Fortran bindings: mpif.h, use mpi, use mpi_f08
Build MPI Java bindings (experimental): yes
Build Open SHMEM support: no
Debug build: yes
Platform file: (none)

Transports
-----------------------
Cray uGNI (Gemini/Aries): no
Intel Omnipath (PSM2): no
Intel SCIF: no
Mellanox MXM: no
Open UCX: no
OpenFabrics Libfabric: yes
OpenFabrics Verbs: yes
Portals4: no
QLogic Infinipath (PSM): no
Shared memory/copy in+copy out: yes
Shared memory/Linux CMA: no
Shared memory/Linux KNEM: no
Shared memory/XPMEM: no
TCP: yes

Resource Managers
-----------------------
Cray Alps: no
Grid Engine: no
LSF: no
Slurm: yes
ssh/rsh: yes
Torque: no

*****************************************************************************
 THIS IS A DEBUG BUILD!  DO NOT USE THIS BUILD FOR PERFORMANCE MEASUREMENTS!
*****************************************************************************

@bosilca
Member

bosilca commented Mar 10, 2016

If space is a constraint, why don't we list only what is on?

@jsquyres
Member Author

It's a conundrum:

  • Yes, space is at a premium.
  • But it's harder to recognize that you're missing something if it's not actually shown (vs. seeing it listed with "no").

Shrug. I don't know what the right answer is.

@sjeaugey
Member

I personally like projects where I'm also shown what is not compiled, so that I know what I could have.
For example, XPMEM/CMA/... are optimizations. With that summary, I know I could do better in terms of intra-node transports but I may not care until I'm having intra-node bandwidth problems.

I've seen interesting outputs showing two lines per category, with one line "enabled" and one line "disabled". E.g.:

Transports
------------
enabled : OpenFabrics Libfabric, OpenFabrics Verbs, ...
disabled : Cray uGNI, ....
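That two-line format could be produced from the existing per-item "name: yes/no" lines with a small filter; a minimal sketch (the transports.txt file and its contents here are illustrative, not taken from the actual configure output):

```shell
# Illustrative input: a few "Item: yes/no" lines as in the summary above
cat > transports.txt <<'EOF'
Cray uGNI (Gemini/Aries): no
OpenFabrics Libfabric: yes
OpenFabrics Verbs: yes
TCP: yes
EOF

# Condense the per-item lines into one "enabled" and one "disabled" line
awk -F': ' '
    $2 == "yes" { on  = on  ? on ", " $1 : $1 }
    $2 == "no"  { off = off ? off ", " $1 : $1 }
    END {
        print "enabled : " on
        print "disabled : " off
    }' transports.txt
# prints:
#   enabled : OpenFabrics Libfabric, OpenFabrics Verbs, TCP
#   disabled : Cray uGNI (Gemini/Aries)
```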

@jsquyres
Member Author

Fair enough. That's a big revamp compared to the current code, and I can't really justify spending the time to do that (doing these minor tweaks/updates was fine). 😄

If you want to change the output to do the enabled/disabled lines like you suggest, feel free to submit a PR... 😃

@sjeaugey
Member

I know. Let's keep it that way for now and improve it when it is needed. I'll be happy to submit a PR at that time.

Regarding CUDA, I'm good with George's suggested modification. Do you need a patch or a PR?

@jsquyres
Member Author

Oh, I missed the @bosilca suggestion about CUDA. Let me add something along those lines... I actually think it should be in a separate section -- CUDA is not a transport.

@jsquyres jsquyres force-pushed the pr/updates-to-config-summary branch from d5919d7 to 92212d4 Compare March 10, 2016 20:16
@jsquyres
Member Author

Ok, CUDA support is now in there:

Open MPI configuration:
-----------------------
Version: 3.0.0a1
Build MPI C bindings: yes
Build MPI C++ bindings (deprecated): no
Build MPI Fortran bindings: mpif.h, use mpi, use mpi_f08
Build MPI Java bindings (experimental): yes
Build Open SHMEM support: no
Debug build: yes
Platform file: (none)

Miscellaneous
-----------------------
CUDA support: no

Transports
-----------------------
Cray uGNI (Gemini/Aries): no
Intel Omnipath (PSM2): no
Intel SCIF: no
Mellanox MXM: no
Open UCX: no
OpenFabrics Libfabric: yes
OpenFabrics Verbs: yes
Portals4: no
QLogic Infinipath (PSM): no
Shared memory/copy in+copy out: yes
Shared memory/Linux CMA: no
Shared memory/Linux KNEM: no
Shared memory/XPMEM: no
TCP: yes

Resource Managers
-----------------------
Cray Alps: no
Grid Engine: no
LSF: no
Slurm: yes
ssh/rsh: yes
Torque: no

*****************************************************************************
 THIS IS A DEBUG BUILD!  DO NOT USE THIS BUILD FOR PERFORMANCE MEASUREMENTS!
*****************************************************************************

@jsquyres
Member Author

@rhc54 Should:

QLogic Infinipath (PSM): no

be

Intel TrueScale (PSM): no

?

@bosilca
Member

bosilca commented Mar 10, 2016

👍

@sjeaugey
Member

👍

@jsquyres jsquyres force-pushed the pr/updates-to-config-summary branch from 92212d4 to 48c650c Compare March 10, 2016 21:02
@jsquyres
Member Author

@rhc54 Confirmed in IM: yes, it should be Intel TrueScale. Fix pushed -- waiting for CI before merging...

jsquyres added a commit that referenced this pull request Mar 10, 2016
configury: minor updates to config summary output
@jsquyres jsquyres merged commit 6f17b46 into open-mpi:master Mar 10, 2016
@jsquyres jsquyres deleted the pr/updates-to-config-summary branch March 10, 2016 22:29
@matcabral
Contributor

Catching up a little late: re-confirming a yes for Intel TrueScale.

