Add support for use as part of a wider MPI process #75
Conversation
The test failures shown seem to be part of the original code that is not affected by the changes I've made, and are associated with a timing test. I guess this might be because GitHub Actions has slowed down since these tests were written?

Some of the errors seen in these tests are related to the bug mentioned in #74.

@joezuntz: Can you pull recent updates from the
I just did this but it seems to show only a single change, to cli.py - is that what you expected? |
Also it says it's "awaiting approval" now: "First-time contributors need a maintainer to approve running workflows" |
@joezuntz: Yay! Tests pass, and the failures had nothing to do with your code. |
@joezuntz: Is this ready to merge, then? |
Yep, I think good to go - thanks! |
Thanks @joezuntz! |
The current version of the code always uses MPI_COMM_WORLD as the communicator and calls sys.exit on some processes when the client process exits Python.

This change makes it possible to use dask-mpi for one stage of a wider MPI program and then return to the original MPI workflow once the Dask section is complete. It requires the user to manually call send_close_signal from the client process, and to use a returned value indicating whether the process is the client or not.
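A minimal sketch of the usage pattern described above, under stated assumptions: it assumes initialize() accepts a keyword to disable the sys.exit behaviour and returns a flag identifying the client rank, and that send_close_signal is importable from dask_mpi. The exact parameter name (exit=False here) is illustrative and may differ from the merged API.

```python
# Sketch: run dask-mpi as one stage of a larger MPI program.
# The exit=False keyword and the return value of initialize() are
# assumptions based on the PR description, not a confirmed signature.
from mpi4py import MPI
from dask_mpi import initialize, send_close_signal
from distributed import Client

# Start the Dask scheduler and workers over the MPI ranks. The returned
# flag tells us whether this rank is the client process.
is_client = initialize(exit=False)

if is_client:
    with Client() as client:
        # Do the Dask part of the workflow on the client rank only.
        total = client.submit(sum, range(100)).result()
        print("Dask result:", total)
    # Ask the scheduler and workers to shut down cleanly instead of
    # calling sys.exit on their ranks.
    send_close_signal()

# Every rank returns here and can continue with the rest of the
# MPI workflow.
MPI.COMM_WORLD.Barrier()
print(f"Rank {MPI.COMM_WORLD.rank} continuing with plain MPI work")
```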