Conversation

@artpol84
Contributor

For some of our configurations this flag increases the per-process contribution
by ~20%, while it is not currently being used.

The consumer of this flag was the communicator ID calculation logic, but that was
changed in 0bf06de.

Signed-off-by: Artem Polyakov <artpol84@gmail.com>
(cherry picked from commit b601dd5)
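
For context, the per-process modex contribution is the set of key/value pairs each rank publishes at startup; every additional key adds bytes that peers may later have to fetch. A rough sketch of that idea at the PMIx level (the key name is hypothetical, the process is assumed to already be a PMIx client, and Open MPI's actual code goes through its own OPAL modex macros rather than direct PMIx calls):

```c
#include <pmix.h>

/* Hypothetical illustration: publishing one extra per-process value
 * (e.g. a thread-level flag) grows every rank's modex contribution. */
static pmix_status_t publish_thread_level(uint8_t thread_level)
{
    pmix_value_t val;
    pmix_status_t rc;

    PMIX_VALUE_CONSTRUCT(&val);
    val.type = PMIX_UINT8;
    val.data.uint8 = thread_level;

    /* "example.thread.level" is a made-up key, not a real OMPI/PMIx key. */
    rc = PMIx_Put(PMIX_GLOBAL, "example.thread.level", &val);
    PMIX_VALUE_DESTRUCT(&val);
    if (PMIX_SUCCESS != rc) {
        return rc;
    }
    return PMIx_Commit();
}
```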

@artpol84 artpol84 added this to the v3.1.0 milestone Feb 19, 2018
@artpol84 artpol84 self-assigned this Feb 19, 2018
@artpol84 artpol84 requested review from jjhursey and rhc54 February 19, 2018 19:51
Contributor

@rhc54 rhc54 left a comment

I didn't find a MODEX_RECV for MPI_THREAD_LEVEL, but I did find:
ompi/mpi/c/init.c
54: if (NULL != (env = getenv("OMPI_MPI_THREAD_LEVEL"))) {

Does that need to be removed? I couldn't find where it gets set.
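
For reference, that check looks like it lets an environment variable raise the thread level that a plain MPI_Init would otherwise request. A minimal sketch of the pattern, with the parsing and accepted value range assumed rather than copied from init.c:

```c
#include <stdlib.h>
#include <mpi.h>

/* Sketch: derive the requested thread level, letting an environment
 * variable override the MPI_THREAD_SINGLE that MPI_Init implies.
 * The exact values accepted by init.c are an assumption here. */
static int requested_thread_level(void)
{
    int level = MPI_THREAD_SINGLE;
    const char *env = getenv("OMPI_MPI_THREAD_LEVEL");

    if (NULL != env) {
        int value = atoi(env);
        if (value >= MPI_THREAD_SINGLE && value <= MPI_THREAD_MULTIPLE) {
            level = value;
        }
    }
    return level;
}
```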

@artpol84
Contributor Author

@rhc54 unfortunately I don't have a spare cycle to investigate that, and I don't want to change it blindly.
With this change we get some improvement because the modex size is reduced by at least 20 bytes.
With the env variable, I don't think we get any perf improvement.
I think it should be a separate PR.

@rhc54
Contributor

rhc54 commented Feb 20, 2018

@bwbarrett @jjhursey I'll have to defer to someone else to investigate, then - I don't know if somehow these things are tied together and your tests simply aren't tripping over it. Could be someone out there has an idea?

@artpol84
Contributor Author

@rhc54 if you can't find a modex recv, I think that means the presence of this information in the modex doesn't play a role, doesn't it?

@bwbarrett
Member

bot:ompi:retest

For some reason, the OMPI tests tried to pull a commit that doesn't exist. Very weird.

@artpol84
Contributor Author

I mean that a rank doesn't need to care about someone else's threading level, but it can still be concerned about its own threading level, which might be exported through the env variable.
So my belief is that those things are unrelated. If you think they are related, I can double-check, since it is relevant to this PR.

@bosilca
Member

bosilca commented Feb 20, 2018

My recollection is that OMPI_MPI_THREAD_LEVEL allows starting the MPI library in full threading mode even if the user calls MPI_Init. According to our 2015 developer meeting notes, we were using it to test for correctness in all nightly tests (at least at Cisco and LANL).
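
If that behavior still holds, it should be observable from a trivial program that calls plain MPI_Init and then queries the provided level; whether the variable actually bumps it to MPI_THREAD_MULTIPLE is an assumption here, not something verified in this PR:

```c
#include <stdio.h>
#include <mpi.h>

/* Run once normally and once with the variable exported
 * (e.g. mpirun -x OMPI_MPI_THREAD_LEVEL=3 ...) and compare the output. */
int main(int argc, char **argv)
{
    int provided;

    MPI_Init(&argc, &argv);
    MPI_Query_thread(&provided);
    printf("provided thread level: %d (MPI_THREAD_MULTIPLE = %d)\n",
           provided, MPI_THREAD_MULTIPLE);
    MPI_Finalize();
    return 0;
}
```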

@rhc54
Contributor

rhc54 commented Feb 20, 2018

@bosilca Appreciate the explanation - I couldn't see where it was being set, nor whether it was somehow related to the exchange. Sounds like it is unrelated.

@artpol84
Contributor Author

@bosilca indeed, thanks for the explanation!

@bwbarrett bwbarrett merged commit ecac676 into open-mpi:v3.1.x Feb 20, 2018