Allow to average values in parallel::distributed::SolutionTransfer #13457
Conversation
/rebuild
Do we have an issue that explains what the underlying problem is? I'm still confused why there is a problem. What SolutionTransfer does (should do) is a relatively simple operation: interpolation. That's fundamentally a local operation on cells, though we sometimes do the same operation on adjacent cells for DoFs that are on the interface. Did someone ever come up with a simple testcase that shows what goes wrong and why?
(Not opposed to the patch, just trying to understand what is happening.)
I only briefly read over the changes. From a design standpoint, I would suggest introducing a separate function; I find that the current code with the [...]
The following code triggers an assert with 2 processes:

```cpp
#include <deal.II/base/mpi.h>

#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_q.h>

#include <deal.II/grid/grid_generator.h>

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

int
main(int argc, char **argv)
{
  constexpr unsigned int dim    = 2;
  constexpr unsigned int degree = 1;

  using Number = double;

  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  parallel::distributed::Triangulation<dim> tria(MPI_COMM_WORLD);
  GridGenerator::subdivided_hyper_rectangle(tria,
                                            {2, 1},
                                            {0.0, 0.0},
                                            {2.0, 1.0});

  // refine the first coarse cell
  if (tria.create_cell_iterator(CellId("0_0:"))->is_locally_owned())
    tria.create_cell_iterator(CellId("0_0:"))->set_refine_flag();
  tria.execute_coarsening_and_refinement();

  DoFHandler<dim> dof_handler(tria);
  dof_handler.distribute_dofs(FE_Q<dim>(degree));

  LinearAlgebra::distributed::Vector<Number> v;
  v.reinit(dof_handler.locally_owned_dofs(),
           DoFTools::extract_locally_relevant_dofs(dof_handler),
           MPI_COMM_WORLD);

  // fill all entries with 0.5 except the hanging-node DoF, which gets 0.1
  std::map<CellId, Vector<double>> values;
  values[CellId("1_0:")]  = Vector<double>{0.5, 0.5, 0.5, 0.5};
  values[CellId("0_1:0")] = Vector<double>{0.5, 0.5, 0.5, 0.5};
  values[CellId("0_1:1")] = Vector<double>{0.5, 0.5, 0.5, 0.1};
  values[CellId("0_1:2")] = Vector<double>{0.5, 0.5, 0.5, 0.5};
  values[CellId("0_1:3")] = Vector<double>{0.5, 0.1, 0.5, 0.5};

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_artificial() == false)
      cell->set_dof_values(values[cell->id()], v);

  v.print(std::cout);

  parallel::distributed::
    SolutionTransfer<dim, LinearAlgebra::distributed::Vector<Number>>
      solution_trans(dof_handler);
  solution_trans.prepare_for_coarsening_and_refinement(v);

  // now refine the other (so far unrefined) coarse cell
  if (tria.create_cell_iterator(CellId("1_0:"))->is_locally_owned())
    tria.create_cell_iterator(CellId("1_0:"))->set_refine_flag();
  tria.execute_coarsening_and_refinement();

  dof_handler.distribute_dofs(FE_Q<dim>(degree));

  v.reinit(dof_handler.locally_owned_dofs(),
           DoFTools::extract_locally_relevant_dofs(dof_handler),
           MPI_COMM_WORLD);

  solution_trans.interpolate(v);

  v.print(std::cout);
}
```

The initial mesh consists of 2 coarse cells, of which one is refined. All entries but one are filled with 0.5; at the hanging node, I set the value 0.1. In the second step, I refine the previously unrefined cell, with the result that one side wants to write 0.1 and the other side 0.5. The problem is that an assert is only raised in parallel but not in serial, giving the impression that this is a parallel bug, which it is not. The problem could have been fixed by applying the hanging-node constraints.

Let me split up the PR into two: one for the assert in serial, the other one for the averaging.
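For reference, a minimal sketch of what "applying the hanging-node constraints" after the transfer could look like in the reproducer above; the constraints object is introduced here purely for illustration and is not part of this PR:

```cpp
// Minimal sketch, assuming the reproducer above (additionally needs
// <deal.II/lac/affine_constraints.h>): build the hanging-node constraints on
// the refined mesh and distribute them after the transfer, which overwrites
// the conflicting hanging-node DoFs with values consistent on both sides.
AffineConstraints<Number> constraints(
  DoFTools::extract_locally_relevant_dofs(dof_handler));
DoFTools::make_hanging_node_constraints(dof_handler, constraints);
constraints.close();

solution_trans.interpolate(v);
// assumes distribute() may be called on this vector; otherwise distribute
// into a non-ghosted copy first and copy back
constraints.distribute(v);
v.update_ghost_values();
```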
```cpp
for (const auto vec : all_out)
  *vec = 0.0;
```
@peterrum For my code this line is problematic. I was relying on the fact that SolutionTransfer does not modify entries that were previously associated with FE_Nothing, so that I can set initial conditions before calling SolutionTransfer::interpolate.
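If I understand the use case right, the pattern that breaks is roughly the following; this is only a sketch, and fill_initial_condition_on_activated_cells stands for user code, not for any deal.II function:

```cpp
// Hypothetical sketch of the pattern described above (not code from this PR):
// v has already been reinit()ed for the refined mesh, and the DoFs of cells
// that carried FE_Nothing before refinement are pre-filled with an initial
// condition, relying on interpolate() leaving untouched entries as they are.
fill_initial_condition_on_activated_cells(dof_handler, v); // user code
solution_trans.interpolate(v); // with "*vec = 0.0" the pre-set values are wiped
```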
Since #12463 has finally been merged, I have updated this PR.
@marcfehling I don't think that splitting up the functions would help. The result would be that too much of the logic would be duplicated.
Looks okay to me. Let's wait for another approval.
> @marcfehling I don't think that splitting up the functions would help. The result would be that too much of the logic would be duplicated.
Okay. But we really need to redesign the p:d:SolutionTransfer class to meet the new demands (activation of previous FE_Nothing elements, averaging interpolated values, etc.). I will tackle that for release 9.5...
This looks good to me. I think we should get this merged for the release, as it has been tagged for it all along and has another approval.
It seems this PR fails on the complex build: https://cdash.dealii.43-1.org/viewBuildError.php?buildid=2301 @peterrum can you please take a look?
I guess one would need to use [...] at [...] (a VectorType from valence [...]). However, I don't know how to accomplish that. @kronbichler Do you have an idea how to do that?
I am not sure if we really need to create a vector of a different type. Wouldn't it be enough to replace the comparison [...]?
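For illustration, assuming the failing comparison is an ordering comparison on a possibly complex Number (which is exactly what the complex build rejects), something along these lines compiles for both real and complex scalar types:

```cpp
#include <cmath>
#include <complex>

// Hypothetical sketch: "value > 0" does not compile for std::complex<double>,
// but comparing the magnitude (or testing against the additive identity)
// works for real and complex types alike.
template <typename Number>
bool
has_contribution(const Number &value)
{
  return std::abs(value) > 0.;
}
```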
Otherwise, the best bet would be to construct a [...]
After having worked on multiple independent projects where we use AMR and had issues with the assert in SolutionTransfer/Utilities::MPI::Partitioner, I came to the conclusion that SolutionTransfer might be used in two contexts:

In case 1), it would be nice to have checks in debug mode that the interpolated solution is indeed good. If the interpolated solution is not good, it is often a sign that the mesh is not well chosen and the adaptation scheme has to be modified. One way is to perform the interpolation on each cell independently and check that the contributions to each DoF do not differ too much. This is more or less what Utilities::MPI::Partitioner::compress(insert) does for interprocess interface DoFs. However, it is very confusing that we only get the assert if we run the code in parallel, although the issue is also present in the serial case (see #12463).

This PR proposes to extend parallel::distributed::SolutionTransfer in such a way that it can be executed in the two modes listed above. The changes for that are:

Replaces #12463.
FYI @mschreter @dravinsi @vovannikov