Finalize T8codeMesh before MPI is finalized #1585
Comments
Could this be helpful? https://juliaparallel.org/MPI.jl/stable/reference/environment/#MPI.add_finalize_hook!
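As a rough sketch of how the linked `MPI.add_finalize_hook!` API could be used here: hooks registered this way run inside `MPI.Finalize()`, i.e. while MPI is still usable. `DummyMesh` and its `freed` flag are hypothetical stand-ins for the actual `T8codeMesh` cleanup, not Trixi code.

```julia
# Sketch of MPI.add_finalize_hook! from the linked MPI.jl docs.
# DummyMesh is a hypothetical stand-in; it only shows *when* the hook runs.
using MPI

MPI.Init()

mutable struct DummyMesh
    freed::Bool
end

mesh = DummyMesh(false)

# Hooks registered here run inside MPI.Finalize(), so MPI
# communication is still available for cleanup at that point.
MPI.add_finalize_hook!() do
    mesh.freed = true  # stand-in for freeing shared-memory arrays
end

MPI.Finalize()
@assert mesh.freed  # cleanup ran before MPI shut down
```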
There seem to be two cases where clean-up code could be called:
It would be nice to run the first in a long-running (interactive) Julia session using multiple meshes. However, only the second one is guaranteed to be called before MPI is finalized (finalize hooks are called in …). How would you weigh the requirement to avoid piling up garbage without a classical finalizer, @sloede? Would an approach like adding both a classical finalizer and an MPI finalize hook work, where both use the same implementation under the hood and check whether they need to clean up at all?
I just implemented a version with the MPI finalize hook. It works as intended. But indeed, this does not solve the problem with long-running sessions. For that case, I have not yet found a satisfying solution for how the two finalizers would know about each other.
I would probably continue to use the normal, per-object finalizer function, but guard the actual calls to any MPI functions behind …
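The guard suggested here is presumably a check such as `MPI.Finalized()`. The following is a sketch under that assumption; `T8codeMeshLike` and `destroy!` are hypothetical names, not the actual Trixi implementation.

```julia
# Sketch of a per-object finalizer whose MPI calls are guarded.
# T8codeMeshLike and destroy! are illustrative names only.
using MPI

mutable struct T8codeMeshLike
    # handles to t8code / shared-memory data would live here
end

function destroy!(mesh::T8codeMeshLike)
    # Only issue MPI calls if MPI is still alive. If MPI has already
    # been finalized, the OS reclaims the memory at process exit anyway.
    if MPI.Initialized() && !MPI.Finalized()
        # t8code / MPI cleanup calls would go here
    end
    return nothing
end

mesh = T8codeMeshLike()
finalizer(destroy!, mesh)
```

With this guard, the ordinary garbage-collector-driven finalizer is safe to keep: it cleans up eagerly during long-running sessions, and simply becomes a no-op if the GC happens to run it after MPI has shut down.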
Thanks for the thought, @sloede! This is my proposed solution: dde2802. I still want to make sure that …
This sounds really good. Would you be willing to make another PR for the …
There still might be one issue which I haven't considered yet. When creating a closure like this …, then there is still a reference to the mesh held by the hook, so it can never be garbage collected during the session.
Good point! So we should remove the finalizer hook again and just keep the ordinary finalizer with the check whether MPI has already been finalized in place? |
Yes, we should do that. I made the …
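The retention problem discussed above can be demonstrated without MPI at all: any registry of hook closures that capture the object keeps it reachable, so its finalizer can never run. The names below (`Mesh`, `hooks`, `make_mesh`) are illustrative only.

```julia
# Minimal demonstration (no MPI needed) of why a finalize hook that
# closes over the mesh prevents it from ever being garbage collected.
mutable struct Mesh end

const collected = Ref(false)
const hooks = Function[]   # stand-in for a global finalize-hook registry

function make_mesh()
    mesh = Mesh()
    finalizer(m -> (collected[] = true), mesh)
    # Registering a hook that closes over `mesh` stores a reference to
    # it in `hooks`, so the mesh stays reachable indefinitely:
    push!(hooks, () -> mesh)
    return nothing
end

make_mesh()
GC.gc(); GC.gc()
@assert !collected[]   # mesh is still alive via the hook closure

empty!(hooks)          # drop the only remaining reference
GC.gc(); GC.gc()
# now the finalizer is eligible to run on a subsequent collection
```

This is exactly why keeping only the ordinary finalizer (with the MPI-finalized check) avoids the leak: nothing besides the user's own bindings keeps the mesh alive.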
When finalizing the cmesh object in t8code, some MPI calls are made, since some shared memory arrays must be freed. Naturally, this has to happen before MPI itself shuts down. Otherwise you get one of the following error messages:
Julia uses a garbage collector, thus the order of finalization of objects is not determined by the program flow.
Is there a possibility in Julia or Trixi, respectively, to deterministically finalize modules/objects before MPI shuts down?
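One deterministic option that Julia itself offers, sketched below with hypothetical names: `Base.finalize(x)` runs an object's registered finalizers immediately, at a point the program chooses, instead of waiting for the garbage collector. Calling it on all live meshes just before `MPI.Finalize()` would give the required ordering.

```julia
# Base.finalize runs registered finalizers eagerly and deterministically.
# Resource is a hypothetical stand-in for a mesh holding MPI resources.
mutable struct Resource
    freed::Bool
end

r = Resource(false)
finalizer(x -> (x.freed = true), r)

# ... use r ...

finalize(r)        # runs the finalizer right now, not at some later GC
@assert r.freed    # cleanup happened at a point we chose
# MPI.Finalize() could now follow safely; the GC will not re-run
# an already-finalized object's finalizer.
```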