Fix MatrixFree::find_vector_in_mf for multiple DoFHandler/AffineConstraints #12025
Comments
I agree with this direction; however, I think one could still construct scenarios where one process has all the same data locally (in any of those checks) and the difference is only visible globally. Think of the case of 3 MPI ranks where there is a difference in the ghost exchange between ranks 1 and 2 that is invisible to rank 0. Rank 0 could then return the wrong partitioner. The more general way to also cover such unusual cases might be to create some hash/checksum of the ghost state among all MPI ranks, which we store in the partitioner class and compute in …
Regarding #12033 (review):
Let me quickly explain why I chose the approach I have taken (step 1: check the pointers; step 2: check via …). If users insist on creating their own partitioners, they will be penalized (by a global communication). Am I missing instances where users would be penalized unjustifiably?
@kronbichler Would it be tragic if the current approach becomes part of the next release?
@kronbichler This is in a certain sense related to #12012. Do we want to tackle the hashes before the release, or is that too dangerous?
I think the hashes are too dangerous, and I think we did supply a good option for the default case. Let us postpone it for now to get it out of the release countdown. |
I think we will not get around to this one for this release, so we need to postpone it. |
@kronbichler I think we can close this issue. Using …
I agree, we have a fairly robust infrastructure in place. Let us close this issue. |
The following lines might return the wrong partitioner if `MatrixFree` has been set up with multiple `DoFHandler` and `AffineConstraints` objects:

dealii/include/deal.II/matrix_free/matrix_free.h, lines 3284 to 3291 in 20530a7

with the consequence that different processes use different partitioners with different communication patterns, possibly leading to MPI errors (see also https://github.com/MeltPoolDG/MeltPoolDG/pull/117#issuecomment-811762113).

By checking more internal fields in `Partitioner::is_compatible()`, this issue could be solved without the need to perform a global communication. If we decide to make this change, we probably need to reduce the number of checks. What are alternative approaches to tackle the described problem?
FYI @nmuch @mschreter