Distributed setup calls _make_cross_face_batches once per face #36

Closed
inducer opened this issue Aug 14, 2020 · 4 comments · Fixed by #40
Comments

@inducer
Owner

inducer commented Aug 14, 2020

Here. It definitely shouldn't. As the name suggests, _make_cross_face_batches is supposed to be a batched routine, called once per rank pair. @MTCam spotted this when doing some parallel runs.

This is actually pretty bad, since it creates lots of small interpolation batches rather than one big one.
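Roughly, the setup should group the cross-rank face pairs by remote rank and build the interpolation batches once per group. A minimal sketch of that idea (the names `face_pairs`, `remote_rank`, and `make_batches_for_rank_pair` are hypothetical stand-ins, not the actual meshmode/grudge API):

```python
from collections import defaultdict

def build_cross_rank_batches(face_pairs, make_batches_for_rank_pair):
    # Group all cross-rank face pairs by the remote rank they connect to...
    pairs_by_remote_rank = defaultdict(list)
    for fp in face_pairs:
        pairs_by_remote_rank[fp.remote_rank].append(fp)

    # ...and build the interpolation batches once per rank pair, handing
    # over the whole group at once, instead of once per individual face.
    return {
        remote_rank: make_batches_for_rank_pair(pairs)
        for remote_rank, pairs in pairs_by_remote_rank.items()
    }
```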

@inducer
Owner Author

inducer commented Aug 14, 2020

@majosm, would you be willing to take a look at this?

@inducer
Owner Author

inducer commented Aug 15, 2020

> This is actually pretty bad, since it creates lots of small interpolation batches rather than one big one.

The impact of that is that MPI buffer packing is essentially a Python loop over elements/faces (and it obviously shouldn't be).
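For illustration only (this is not the actual grudge MPI code): with one batch per rank pair, packing the send buffer can be a single vectorized gather instead of a per-face Python loop.

```python
import numpy as np

def pack_send_buffer(face_dofs, send_indices):
    """Gather the DOFs of all faces destined for one remote rank."""
    # Per-face Python loop: one tiny copy per face (what happens now)
    # buf = np.concatenate([face_dofs[i] for i in send_indices])

    # Batched: one fancy-indexing gather for all faces at once
    return face_dofs[send_indices].ravel()
```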

@MTCam
Contributor

MTCam commented Aug 17, 2020

Doesn't this issue also affect wave-eager examples?

@inducer
Owner Author

inducer commented Aug 17, 2020 via email
