RPE::evaluate_and_process(): pack only if needed #15163
base: master
Conversation
I am not in favor of this patch. We already have Utilities::MPI::isend()/recv(). If the overhead of packing is too large for these, then we should write a fast path for these functions rather than duplicate the functionality in other functions.
template <typename T>
std::enable_if_t<Utilities::MPI::is_mpi_type<T> == true, void>
pack_and_isend(T *                             data,
               const unsigned int              size,
               const unsigned int              rank,
               const unsigned int              tag,
               const MPI_Comm                  comm,
               std::vector<std::vector<char>> &buffers,
               std::vector<MPI_Request> &      requests)
{
  requests.emplace_back(MPI_Request());

  buffers.emplace_back(
    Utilities::pack(std::vector<T>(data, data + size), false));
You have this the wrong way around: this is the function with Utilities::MPI::is_mpi_type<T> == true, for which you specifically do not have to pack, whereas in the function above, where you do not pack, you actually do have to pack.
Branch updated from 8993018 to 70f9e5b.
Branch updated from 70f9e5b to 0e74f25.
@peterrum How do we proceed with this PR?
Commits:
- Shift around
- More ArrayView
- Specialize code for tensors
- Allow to specify components
- Also for evaluate_and_process()
- Update
- Reduce number of sweeps
Branch updated from f642dd1 to bc18550.
I tried the new code with the additional restructuring, and I am very happy about the performance gain we can get: together with #16896 and #16895, I see an improvement of around a factor of 2 in the non-nested multigrid algorithm on a server processor. I will look into the code once you think it is complete enough.
Follow-up to #15156.