Added test for SolutionTransfer with hp-refinement and manually set flags. #14781

Merged (1 commit, Feb 17, 2023)
146 changes: 146 additions & 0 deletions tests/mpi/solution_transfer_11.cc
@@ -0,0 +1,146 @@
// ---------------------------------------------------------------------
//
// Copyright (C) 2023 by the deal.II authors
//
// This file is part of the deal.II library.
//
// The deal.II library is free software; you can use it, redistribute
// it, and/or modify it under the terms of the GNU Lesser General
// Public License as published by the Free Software Foundation; either
// version 2.1 of the License, or (at your option) any later version.
// The full text of the license can be found in the file LICENSE at
// the top level of the deal.II distribution.
//
// ---------------------------------------------------------------------



// Test distributed SolutionTransfer with hp-refinement and manually set flags.


#include <deal.II/distributed/solution_transfer.h>
#include <deal.II/distributed/tria.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_q.h>

#include <deal.II/grid/grid_generator.h>

#include <deal.II/hp/fe_collection.h>

#include <deal.II/lac/la_parallel_vector.h>

#include "../tests.h"


template <int dim>
void
test()
{
parallel::distributed::Triangulation<dim> tria(MPI_COMM_WORLD);
GridGenerator::subdivided_hyper_cube(tria, 2);
tria.refine_global(1);

hp::FECollection<dim> fes;
for (unsigned int d = 1; d <= 3; ++d)
fes.push_back(FE_Q<dim>(d));

DoFHandler<dim> dofh(tria);
for (const auto &cell : dofh.active_cell_iterators())
if (cell->is_locally_owned())
cell->set_active_fe_index(1);

dofh.distribute_dofs(fes);
IndexSet locally_owned_dofs = dofh.locally_owned_dofs();
IndexSet locally_relevant_dofs =
DoFTools::extract_locally_relevant_dofs(dofh);

// set up solution
LinearAlgebra::distributed::Vector<double> solution;
solution.reinit(locally_owned_dofs,
locally_relevant_dofs,
dofh.get_communicator());

for (unsigned int i = 0; i < solution.size(); ++i)
if (locally_owned_dofs.is_element(i))
solution(i) = i;
solution.update_ghost_values();

double l1_norm = solution.l1_norm();
if (Utilities::MPI::this_mpi_process(dofh.get_communicator()) == 0)
deallog << "pre refinement l1=" << l1_norm << std::endl;

// set refine/coarsen flags manually
for (const auto &cell : dofh.active_cell_iterators())
if (cell->is_locally_owned())
{
const std::string parent_id_string = cell->parent()->id().to_string();

if (parent_id_string == "0_0:")
{
// parent 0: h refinement
cell->set_refine_flag();
}
else if (parent_id_string == "1_0:")
{
// parent 1: h coarsening
cell->set_coarsen_flag();
}
else if (parent_id_string == "2_0:")
{
// parent 2: p refinement
const auto super_fe_index =
fes.next_in_hierarchy(cell->active_fe_index());
cell->set_future_fe_index(super_fe_index);
}
else if (parent_id_string == "3_0:")
{
// parent 3: p coarsening
const auto sub_fe_index =
fes.previous_in_hierarchy(cell->active_fe_index());
cell->set_future_fe_index(sub_fe_index);
}
else
{
Assert(false, ExcInternalError());
}
}

// initiate refinement and transfer
parallel::distributed::
SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
soltrans(dofh);
soltrans.prepare_for_coarsening_and_refinement(solution);

tria.execute_coarsening_and_refinement();

dofh.distribute_dofs(fes);
locally_owned_dofs = dofh.locally_owned_dofs();
locally_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dofh);

solution.reinit(locally_owned_dofs,
locally_relevant_dofs,
dofh.get_communicator());
soltrans.interpolate(solution);
Review comment (Member):

This is the end of the test. Would it make sense to compute some diagnostic information from the so-interpolated vector? Say its l1 norm or something one could output? Right now, all you test is that the test doesn't abort.

Reply (Member, author):

I've added the output of the l1 norm pre and post refinement.

Reply (Member):

Great, thanks!


l1_norm = solution.l1_norm();
if (Utilities::MPI::this_mpi_process(dofh.get_communicator()) == 0)
deallog << "post refinement l1=" << l1_norm << std::endl;

// make sure no processor is hanging
MPI_Barrier(MPI_COMM_WORLD);

deallog << "OK" << std::endl;
}


int
main(int argc, char **argv)
{
Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);
MPILogInitAll log;

test<2>();
}
7 changes: 7 additions & 0 deletions tests/mpi/solution_transfer_11.mpirun=2.output
@@ -0,0 +1,7 @@

DEAL:0::pre refinement l1=3240.00
DEAL:0::post refinement l1=4390.76
DEAL:0::OK

DEAL:1::OK
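
For reference, the transfer workflow this test exercises distills to the following pattern, a minimal sketch assembled from the test code above. It assumes that tria, dofh, fes, and the ghosted solution vector are set up as in the test, and that the refine/coarsen/future-FE flags have already been set on the locally owned cells.

// Minimal sketch of the distributed SolutionTransfer pattern used in this
// test; `tria`, `dofh`, `fes`, and `solution` are assumed to exist as above.
parallel::distributed::
  SolutionTransfer<dim, LinearAlgebra::distributed::Vector<double>>
    soltrans(dofh);

// 1. Register the old solution before the mesh changes.
soltrans.prepare_for_coarsening_and_refinement(solution);

// 2. Perform the flagged h-/p-adaptation.
tria.execute_coarsening_and_refinement();

// 3. Re-enumerate DoFs on the adapted mesh and resize the vector accordingly.
dofh.distribute_dofs(fes);
solution.reinit(dofh.locally_owned_dofs(),
                DoFTools::extract_locally_relevant_dofs(dofh),
                dofh.get_communicator());

// 4. Interpolate the stored values onto the new DoF layout.
soltrans.interpolate(solution);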