
MGTransferMF: check compatibility of DoFHandlers #16088

Merged: 4 commits merged into dealii:master on Oct 11, 2023

Conversation

@peterrum (Member) commented Oct 4, 2023

When using multigrid as a preconditioner, the outer solver and the multigrid levels may be set up with different numberings in their DoFHandlers (e.g., by applying DoFRenumbering::matrix_free_data_locality once for a MatrixFree object set up with double and once for one set up with float). In this case, it is not enough to simply copy the vectors during MGTransferMF::copy_to_mg(), MGTransferMF::copy_from_mg(), and MGTransferMF::interpolate_to_mg(); one needs to permute the data. This is now implemented for global coarsening; a similar infrastructure was already available for local smoothing.
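For context, here is a minimal sketch of how the two numberings can diverge. It is not part of the PR: the names are illustrative, and it assumes the overload of DoFRenumbering::matrix_free_data_locality that takes an AffineConstraints object and a MatrixFree::AdditionalData object.

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_renumbering.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/matrix_free/matrix_free.h>

using namespace dealii;

int main()
{
  constexpr int dim = 2;

  Triangulation<dim> tria;
  GridGenerator::hyper_cube(tria);
  tria.refine_global(3);

  const FE_Q<dim> fe(1);

  // One DoFHandler for the outer (double-precision) solver, one for the
  // (single-precision) multigrid preconditioner.
  DoFHandler<dim> dof_handler_outer(tria);
  DoFHandler<dim> dof_handler_mg(tria);
  dof_handler_outer.distribute_dofs(fe);
  dof_handler_mg.distribute_dofs(fe);

  AffineConstraints<double> constraints_double;
  AffineConstraints<float>  constraints_float;
  constraints_double.close();
  constraints_float.close();

  // Renumber each DoFHandler for the data-access pattern of its own
  // MatrixFree setup; the double and float setups generally yield
  // different numberings.
  DoFRenumbering::matrix_free_data_locality(
    dof_handler_outer,
    constraints_double,
    MatrixFree<dim, double>::AdditionalData());
  DoFRenumbering::matrix_free_data_locality(
    dof_handler_mg,
    constraints_float,
    MatrixFree<dim, float>::AdditionalData());
}

After the two calls, dof_handler_outer and dof_handler_mg describe the same finite element space but with different degree-of-freedom indices, which is why MGTransferMF must permute entries instead of copying them verbatim.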

Resolved (outdated) review threads on doc/news/changes/minor/20231003Munch.

include/deal.II/multigrid/mg_transfer_global_coarsening.h (resolved, outdated thread); quoted doc comment:
* is required to be able to use MGTransferMF and
* MGTransferMatrixFree as template argument.
* Interpolate fine-mesh field @p src to each multigrid level in
* @p dof_handler and store the result in @p dst.
A reviewer (Member) commented:
What does dof_handler in this comment refer to? I assume you mean the one held in the class?

@peterrum (Member, Author) replied:
I have updated the comments in interpolate_to_mg(), since they were outdated. Could you check whether the notes are helpful?
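For reference, a short usage sketch (illustrative names; the parameter list follows the @p dof_handler, @p dst, @p src names in the quoted comment): the dof_handler mentioned there is the one passed in by the caller, not one stored in the transfer class.

// `transfer`, `dof_handler`, `fine_solution`, `min_level`, and `max_level`
// are assumed to exist in the surrounding code.
MGLevelObject<LinearAlgebra::distributed::Vector<Number>> level_vectors(
  min_level, max_level);
transfer.interpolate_to_mg(dof_handler, level_vectors, fine_solution);
// Afterwards, level_vectors[l] holds the fine field interpolated to level l.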

Resolved (outdated) review thread on include/deal.II/multigrid/mg_transfer_global_coarsening.h.
Comment on lines +4118 to +4136
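// Editor's note (comment not in the original hunk): the code below
// retrieves the fine-level DoFHandler and level number from the finest
// two-level transfer by attempting a dynamic_cast to each of the two
// concrete implementations (nested and non-nested) in turn.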
if (const auto t = dynamic_cast<
const MGTwoLevelTransfer<dim,
LinearAlgebra::distributed::Vector<Number>> *>(
this->transfer[this->transfer.max_level()].get()))
{
return {t->dof_handler_fine, t->mg_level_fine};
}
else if (const auto t = dynamic_cast<const MGTwoLevelTransferNonNested<
dim,
LinearAlgebra::distributed::Vector<Number>> *>(
this->transfer[this->transfer.max_level()].get()))
{
return {t->dof_handler_fine, t->mg_level_fine};
}
else
{
Assert(false, ExcNotImplemented());
return {nullptr, numbers::invalid_unsigned_int};
}
A reviewer (Member) commented:
I think the logic in this function is harder to read than necessary because you have two different return values but four cases. Can that be re-organized?

@peterrum (Member, Author) replied:
How would you suggest doing that? The problem is that the common base class of MGTwoLevelTransfer and MGTwoLevelTransferNonNested does not have dim as a template argument, so we need to cast to the two concrete implementations.
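To illustrate the constraint, here is a minimal, self-contained sketch with simplified, hypothetical names (not the actual deal.II hierarchy): the base class is templated only on the vector type, so members whose types depend on dim can only live in the derived classes.

template <int dim>
struct DoFHandlerStub // stands in for DoFHandler<dim>
{};

template <typename VectorType>
struct TwoLevelTransferBase
{
  virtual ~TwoLevelTransferBase() = default;
  // No DoFHandler member is possible here: its type would need dim,
  // which this class does not know.
};

template <int dim, typename VectorType>
struct TwoLevelTransfer : TwoLevelTransferBase<VectorType>
{
  const DoFHandlerStub<dim> *dof_handler_fine = nullptr;
};

template <int dim, typename VectorType>
struct TwoLevelTransferNonNested : TwoLevelTransferBase<VectorType>
{
  const DoFHandlerStub<dim> *dof_handler_fine = nullptr;
};

template <int dim, typename VectorType>
const DoFHandlerStub<dim> *
get_dof_handler_fine(const TwoLevelTransferBase<VectorType> &transfer)
{
  // One cast per concrete type, mirroring the hunk discussed above.
  if (const auto *t =
        dynamic_cast<const TwoLevelTransfer<dim, VectorType> *>(&transfer))
    return t->dof_handler_fine;
  if (const auto *t =
        dynamic_cast<const TwoLevelTransferNonNested<dim, VectorType> *>(
          &transfer))
    return t->dof_handler_fine;
  return nullptr; // unknown implementation
}

Since dim cannot be recovered from the base class, each accessor ends up with one branch per derived type, which is where the extra cases come from.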

@peterrum (Member, Author) commented Oct 5, 2023

/rebuild

@kronbichler (Member) left a comment:
Let us go with this solution. I thought a bit more about combining the if statements in the one open discussion, but it does not make a big difference, so let's not bother with it.

@kronbichler merged commit e56363a into dealii:master on Oct 11, 2023. 15 checks passed.