Remove parallel::TriangulationBase::compute_n_locally_owned_active_cells_per_processor #9945
Conversation
This touches the bigger question of the same function in the DoFHandler: dealii/include/deal.II/dofs/dof_handler.h, lines 1057 to 1058 in 2ba7b4d.

This was introduced in #8298, so we have not released that part either and we could replace it as well. The only nuisance is the other function, dealii/include/deal.II/dofs/dof_handler.h, lines 1034 to 1035 in 2ba7b4d, where we get an IndexSet and I am not sure whether all_gather works. In case it does, we should remove this lengthy implementation (dealii/source/dofs/number_cache.cc, lines 121 to 229 in 2ba7b4d) and just kick out the functions altogether.
@kronbichler After a quick look at the implementation (see dealii/include/deal.II/base/mpi.h, lines 1199 to 1253 in fc54b91).
Should I include the other functions as well?
They are all of the same kind, so I would say we either remove all or we remove none. I would vote for removing them all.
@kronbichler Do you mean something like this: c41b635? Currently 57 tests fail (these use one of the functions in question).
This looks almost exactly like what I had in mind, except that I would purge the functionality from NumberCache and simply let the DoFHandler compute that info. The function is deprecated and I think it would help later clean-up if there is no other class/struct involved any more.
```diff
 if (number_cache.n_locally_owned_dofs_per_processor.empty() &&
     number_cache.n_global_dofs > 0)
   {
     MPI_Comm comm;

     const parallel::TriangulationBase<dim, spacedim> *tr =
       (dynamic_cast<const parallel::TriangulationBase<dim, spacedim> *>(
         &this->get_triangulation()));
     if (tr != nullptr)
       comm = tr->get_communicator();
     else
       comm = MPI_COMM_SELF;

     const_cast<dealii::internal::DoFHandlerImplementation::NumberCache &>(
       number_cache)
       .n_locally_owned_dofs_per_processor =
-        compute_n_locally_owned_dofs_per_processor();
+        number_cache.get_n_locally_owned_dofs_per_processor(comm);
   }
 return number_cache.n_locally_owned_dofs_per_processor;
```
For this implementation, can't we bypass this code and simply return Utilities::MPI::all_gather(...)?
```cpp
Utilities::MPI::all_gather(mpi_communicator,
                           dof_handler.locally_owned_dofs()),
```
This is unrelated to this patch, but I believe we should replace this argument by simply dof_handler.locally_owned_dofs(); I must have forgotten those in #9710.
```diff
     const_cast<dealii::internal::DoFHandlerImplementation::NumberCache &>(
       number_cache)
       .locally_owned_dofs_per_processor =
-        compute_locally_owned_dofs_per_processor();
+        number_cache.get_locally_owned_dofs_per_processor(comm);
```
Same here.
```diff
     const_cast<dealii::internal::DoFHandlerImplementation::NumberCache &>(
       mg_number_cache[level])
       .locally_owned_dofs_per_processor =
-        compute_locally_owned_mg_dofs_per_processor(level);
+        mg_number_cache[level].get_locally_owned_dofs_per_processor(comm);
   }
 return mg_number_cache[level].locally_owned_dofs_per_processor;
```
and here
```diff
 #endif
-  return locally_owned_dofs_per_processor;
+  return Utilities::MPI::all_gather(mpi_communicator,
+                                    locally_owned_dofs);
```
Here you use this function, but I think we can remove it altogether; this class is mostly internal anyway, I think.
@kronbichler Thanks for the fast feedback.
Remember why these functions caused us headaches half a year ago: reason 1) the memory consumption scales with the number of processes, and reason 2) the function does not return a vector but a reference to a vector. The latter was the reason why we fill the vector only when needed. I would remove the vector once the deprecated functions are deleted, i.e. in two weeks?
Sure, the vector can only be removed once we have eliminated the deprecated function. But the number cache can be cleaned up already in terms of the unnecessary function, though I agree on second thought that it is probably no big difference, since the vector is in the NumberCache anyway.
Force-pushed from c41b635 to 14c3053.
Force-pushed from 14c3053 to 78bdbb1.
In 5eb3561, we replaced the method n_locally_owned_active_cells_per_processor() by compute_n_locally_owned_active_cells_per_processor(), since we did not want to store the information in the NumberCache anymore (it is too expensive for large simulations). I think it would be an option to remove the new function, since the same effect can be reached in user code by the following one-liner:

```cpp
Utilities::MPI::all_gather(tr.get_communicator(),
                           tr.n_locally_owned_active_cells());
```

I am not sure about backwards compatibility. But I guess the change in 5eb3561 was not backwards compatible either.