Choose to initialize ghost elements with reinit(partitioner). #14536

Merged (1 commit) on May 31, 2023
5 changes: 5 additions & 0 deletions doc/news/changes/minor/20221205Fehling
@@ -0,0 +1,5 @@
New: TrilinosWrappers::MPI::Vector and PETScWrappers::MPI::Vector now both have
reinit functions that take a Utilities::MPI::Partitioner as an argument, so
their interface is compatible with LinearAlgebra::distributed::Vector.
<br>
(Marc Fehling, 2022/12/05)
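
As an illustration of the now-shared interface, a minimal sketch (the index sets, communicator, and variable names are placeholders, not part of this patch):

#include <deal.II/base/partitioner.h>
#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/lac/petsc_vector.h>
#include <deal.II/lac/trilinos_vector.h>

using namespace dealii;

// Build a partitioner from locally owned and ghost index sets (placeholders).
const auto partitioner = std::make_shared<const Utilities::MPI::Partitioner>(
  locally_owned_dofs, locally_relevant_dofs, mpi_communicator);

// The same reinit call now works for all three vector types.
LinearAlgebra::distributed::Vector<double> la_vector;
TrilinosWrappers::MPI::Vector              trilinos_vector;
PETScWrappers::MPI::Vector                 petsc_vector;

la_vector.reinit(partitioner);
trilinos_vector.reinit(partitioner);
petsc_vector.reinit(partitioner);
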
12 changes: 12 additions & 0 deletions include/deal.II/lac/la_parallel_vector.h
@@ -405,6 +405,18 @@ namespace LinearAlgebra
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const MPI_Comm &comm_sm = MPI_COMM_SELF);

/**
* This function exists purely for reasons of compatibility with the
* PETScWrappers::MPI::Vector and TrilinosWrappers::MPI::Vector classes.
*
* It calls the function above, and ignores the parameter @p make_ghosted.
*/
void
reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const bool make_ghosted,
const MPI_Comm &comm_sm = MPI_COMM_SELF);

/**
* Initialize vector with @p local_size locally-owned and @p ghost_size
* ghost degrees of freedom.
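
The point of this compatibility overload is that generic code can pass the make_ghosted flag uniformly; for LinearAlgebra::distributed::Vector the flag is simply ignored. A sketch under that assumption (the helper name is made up for illustration):

template <typename VectorType>
void
reinit_from_partitioner(
  VectorType &vec,
  const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
  const bool make_ghosted)
{
  // Compiles for PETScWrappers::MPI::Vector, TrilinosWrappers::MPI::Vector,
  // and LinearAlgebra::distributed::Vector<Number>; only the wrapper classes
  // actually evaluate the flag.
  vec.reinit(partitioner, make_ghosted);
}
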
12 changes: 12 additions & 0 deletions include/deal.II/lac/la_parallel_vector.templates.h
@@ -671,6 +671,18 @@ namespace LinearAlgebra



template <typename Number, typename MemorySpaceType>
void
Vector<Number, MemorySpaceType>::reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner_in,
const bool /*make_ghosted*/,
const MPI_Comm &comm_sm)
{
this->reinit(partitioner_in, comm_sm);
}



template <typename Number, typename MemorySpaceType>
Vector<Number, MemorySpaceType>::Vector()
: partitioner(std::make_shared<Utilities::MPI::Partitioner>())
6 changes: 5 additions & 1 deletion include/deal.II/lac/petsc_vector.h
@@ -343,10 +343,14 @@ namespace PETScWrappers
/**
* Initialize the vector according to the parallel partitioning described
* in @p partitioner.
*
* You can decide whether your vector will contain ghost elements with
* @p make_ghosted.
*/
void
reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner);
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const bool make_ghosted = true);

/**
* Return a reference to the MPI communicator object in use with this
7 changes: 7 additions & 0 deletions include/deal.II/lac/trilinos_vector.h
@@ -604,10 +604,17 @@ namespace TrilinosWrappers
/**
* Initialize the vector according to the parallel partitioning described
* in @p partitioner using the function above.
*
* You can decide whether your vector will contain ghost elements with
* @p make_ghosted.
*
* The parameter @p vector_writable only has an effect on ghosted vectors
* and is ignored for non-ghosted vectors.
*/
void
reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const bool make_ghosted = true,
const bool vector_writable = false);

/**
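
A short usage sketch for the Trilinos variant (assuming a partitioner that carries ghost indices; the variable names are placeholders):

TrilinosWrappers::MPI::Vector ghosted_vec, owned_vec;

// Ghosted, read-only vector, e.g. for evaluating the solution on ghost cells.
ghosted_vec.reinit(partitioner, /*make_ghosted=*/true);

// Ghosted but writable; this is the case where vector_writable has an effect.
ghosted_vec.reinit(partitioner, /*make_ghosted=*/true, /*vector_writable=*/true);

// Purely locally owned vector, e.g. a right-hand side used during assembly;
// vector_writable is ignored here.
owned_vec.reinit(partitioner, /*make_ghosted=*/false);
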
21 changes: 17 additions & 4 deletions source/lac/petsc_parallel_vector.cc
@@ -236,11 +236,24 @@ namespace PETScWrappers

void
Vector::reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner)
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const bool make_ghosted)
{
this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator());
if (make_ghosted)
{
Assert(partitioner->ghost_indices_initialized(),
ExcMessage("You asked to create a ghosted vector, but the "
"partitioner does not provide ghost indices."));

this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator());
}
else
{
this->reinit(partitioner->locally_owned_range(),
partitioner->get_mpi_communicator());
}
Comment on lines +242 to +256 (Member):

Why not:

Suggested change
if (make_ghosted)
{
Assert(partitioner->ghost_indices_initialized(),
ExcMessage("You asked to create a ghosted vector, but the "
"partitioner does not provide ghost indices."));
this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator());
}
else
{
this->reinit(partitioner->locally_owned_range(),
partitioner->get_mpi_communicator());
}
if (partitioner->ghost_indices_initialized())
{
this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator());
}
else
{
this->reinit(partitioner->locally_owned_range(),
partitioner->get_mpi_communicator());
}

Member Author:

With the bool parameter you can still create a non-ghosted vector out of a partitioner with ghost elements.

With your change, I would need two different partitioners: one with and one without ghost elements. That would work for me as well, but the question is whether we want that. I haven't worked much with the partitioner class: Is it common in user code to have multiple partitioners like this?
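
For reference, the two-partitioner alternative would look roughly like this (a sketch only; the index sets and communicator are placeholders):

// Partitioner without ghost indices, for purely locally owned vectors.
const auto owned_partitioner =
  std::make_shared<Utilities::MPI::Partitioner>(locally_owned_dofs,
                                                mpi_communicator);

// Partitioner with ghost indices, for ghosted (read-only) vectors.
const auto ghosted_partitioner =
  std::make_shared<Utilities::MPI::Partitioner>(locally_owned_dofs,
                                                locally_relevant_dofs,
                                                mpi_communicator);
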

Member Author:

After thinking and fiddling with this for a little longer, I find that your suggestion is the way to go. The reinit function should correspond to whatever the partitioner provides. I will adjust this PR and the documentation.

Member Author:

...well I thought so. With the new suggestion I encountered some strange behavior.

I initialize the system_rhs and pass it to AffineConstraints::distribute_local_to_global(cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs), which causes problems:

  • Initialized with ghost elements, it fails when system_rhs is a PETSc/Trilinos Vector, and succeeds when it is a LA::distributed::Vector
An error occurred in line <1664> of file </raid/fehling/dealii/include/deal.II/lac/trilinos_vector.h> in function
    void dealii::TrilinosWrappers::MPI::Vector::add(const std::vector<unsigned int>&, const std::vector<double>&)
The violated condition was: 
    !has_ghost_elements()
Additional information: 
    You are trying an operation on a vector that is only allowed if the
    vector has no ghost elements, but the vector you are operating on does
    have ghost elements. Specifically, vectors with ghost elements are
    read-only and cannot appear in operations that write into these
    vectors.
    
    See the glossary entry on 'Ghosted vectors' for more information.

Stacktrace:
-----------
#0  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: dealii::TrilinosWrappers::MPI::Vector::add(std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<double, std::allocator<double> > const&)
#1  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: void dealii::AffineConstraints<double>::distribute_local_to_global<dealii::TrilinosWrappers::SparseMatrix, dealii::TrilinosWrappers::MPI::Vector>(dealii::FullMatrix<double> const&, dealii::Vector<double> const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, dealii::TrilinosWrappers::SparseMatrix&, dealii::TrilinosWrappers::MPI::Vector&, bool, std::integral_constant<bool, false>) const
  • Initialized without ghost elements, it succeeds when system_rhs is a PETSc/Trilinos Vector, and fails when it is a LA::distributed::Vector
An error occurred in line <1675> of file </raid/fehling/bin/dealii-9.5.0-pre/include/deal.II/lac/la_parallel_vector.h> in function
    Number& dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::operator()(dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type) [with Number = double; MemorySpace = dealii::MemorySpace::Host; dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type = unsigned int]
The violated condition was: 
    partitioner->in_local_range(global_index) || partitioner->ghost_indices().is_element(global_index)
Additional information: 
    You tried to access element 4936 of a distributed vector, but this
    element is not stored on the current processor. Note: The range of
    locally owned elements is [6305,12544], and there are 0 ghost elements
    that this vector can access.
    
    A common source for this kind of problem is that you are passing a
    'fully distributed' vector into a function that needs read access to
    vector elements that correspond to degrees of freedom on ghost cells
    (or at least to 'locally active' degrees of freedom that are not also
    'locally owned'). You need to pass a vector that has these elements as
    ghost entries.

Stacktrace:
-----------
#0  hprun: dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::operator()(unsigned int)
#1  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::add(std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<double, std::allocator<double> > const&)
#2  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: void dealii::AffineConstraints<double>::distribute_local_to_global<dealii::TrilinosWrappers::SparseMatrix, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::FullMatrix<double> const&, dealii::Vector<double> const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, dealii::TrilinosWrappers::SparseMatrix&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&, bool, std::integral_constant<bool, false>) const

I'm puzzled. I feel tempted to remove the last commit in this PR, since the version before it worked. What do you think?
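
For context, the assembly pattern in question looks roughly like this (a sketch; cell_matrix, cell_rhs, local_dof_indices, and constraints come from the usual assembly loop, and the comments summarize what the asserts above seem to say):

// Ghosted PETSc/Trilinos wrapper vectors are read-only, so for them
// system_rhs must be initialized without ghost elements. In contrast,
// LinearAlgebra::distributed::Vector writes into locally stored ghost
// entries during this call and therefore needs them allocated.
constraints.distribute_local_to_global(
  cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs);

system_matrix.compress(VectorOperation::add);
system_rhs.compress(VectorOperation::add);
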

Member:

I think we indeed need to let the user control whether the ghosts are taken into account, so the current solution with the additional flag seems right to me.

}


21 changes: 17 additions & 4 deletions source/lac/trilinos_vector.cc
@@ -410,12 +410,25 @@ namespace TrilinosWrappers
void
Vector::reinit(
const std::shared_ptr<const Utilities::MPI::Partitioner> &partitioner,
const bool make_ghosted,
const bool vector_writable)
{
this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator(),
vector_writable);
if (make_ghosted)
{
Assert(partitioner->ghost_indices_initialized(),
ExcMessage("You asked to create a ghosted vector, but the "
"partitioner does not provide ghost indices."));

this->reinit(partitioner->locally_owned_range(),
partitioner->ghost_indices(),
partitioner->get_mpi_communicator(),
vector_writable);
}
else
{
this->reinit(partitioner->locally_owned_range(),
partitioner->get_mpi_communicator());
}
}

