Choose to initialize ghost elements with reinit(partitioner). #14536
Conversation
/rebuild
```cpp
if (make_ghosted)
  {
    Assert(partitioner->ghost_indices_initialized(),
           ExcMessage("You asked to create a ghosted vector, but the "
                      "partitioner does not provide ghost indices."));

    this->reinit(partitioner->locally_owned_range(),
                 partitioner->ghost_indices(),
                 partitioner->get_mpi_communicator());
  }
else
  {
    this->reinit(partitioner->locally_owned_range(),
                 partitioner->get_mpi_communicator());
  }
```
Why not drop the `make_ghosted` flag and check the partitioner directly:

```cpp
if (partitioner->ghost_indices_initialized())
  {
    this->reinit(partitioner->locally_owned_range(),
                 partitioner->ghost_indices(),
                 partitioner->get_mpi_communicator());
  }
else
  {
    this->reinit(partitioner->locally_owned_range(),
                 partitioner->get_mpi_communicator());
  }
```
With the bool parameter, you can still create a non-ghosted vector out of a partitioner that has ghost elements.
With your change, I would need two different partitioners: one with and one without ghost elements. That would work for me as well, but the question is whether we want that. I haven't worked much with the partitioner class: is it common in user code to have multiple partitioners like this?
After thinking about and fiddling with this for a little longer, I find that your suggestion is the way to go. The reinit function should correspond to whatever the partitioner provides. I will adjust this PR and the documentation.
...well, I thought so. With the new suggestion I encountered some strange behavior.

I initialize the `system_rhs` and pass it to `AffineConstraints::distribute_local_to_global(cell_matrix, cell_rhs, local_dof_indices, system_matrix, system_rhs)`, which causes problems:

- Initialized with ghost elements, it fails when `system_rhs` is a PETSc/Trilinos vector, and succeeds when it is a `LA::distributed::Vector`:
```
An error occurred in line <1664> of file </raid/fehling/dealii/include/deal.II/lac/trilinos_vector.h> in function
    void dealii::TrilinosWrappers::MPI::Vector::add(const std::vector<unsigned int>&, const std::vector<double>&)
The violated condition was:
    !has_ghost_elements()
Additional information:
    You are trying an operation on a vector that is only allowed if the
    vector has no ghost elements, but the vector you are operating on does
    have ghost elements. Specifically, vectors with ghost elements are
    read-only and cannot appear in operations that write into these
    vectors.
    See the glossary entry on 'Ghosted vectors' for more information.

Stacktrace:
-----------
#0  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: dealii::TrilinosWrappers::MPI::Vector::add(std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<double, std::allocator<double> > const&)
#1  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: void dealii::AffineConstraints<double>::distribute_local_to_global<dealii::TrilinosWrappers::SparseMatrix, dealii::TrilinosWrappers::MPI::Vector>(dealii::FullMatrix<double> const&, dealii::Vector<double> const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, dealii::TrilinosWrappers::SparseMatrix&, dealii::TrilinosWrappers::MPI::Vector&, bool, std::integral_constant<bool, false>) const
```
- Initialized without ghost elements, it succeeds when `system_rhs` is a PETSc/Trilinos vector, and fails when it is a `LA::distributed::Vector`:
```
An error occurred in line <1675> of file </raid/fehling/bin/dealii-9.5.0-pre/include/deal.II/lac/la_parallel_vector.h> in function
    Number& dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::operator()(dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type) [with Number = double; MemorySpace = dealii::MemorySpace::Host; dealii::LinearAlgebra::distributed::Vector<Number, MemorySpace>::size_type = unsigned int]
The violated condition was:
    partitioner->in_local_range(global_index) || partitioner->ghost_indices().is_element(global_index)
Additional information:
    You tried to access element 4936 of a distributed vector, but this
    element is not stored on the current processor. Note: The range of
    locally owned elements is [6305,12544], and there are 0 ghost elements
    that this vector can access.
    A common source for this kind of problem is that you are passing a
    'fully distributed' vector into a function that needs read access to
    vector elements that correspond to degrees of freedom on ghost cells
    (or at least to 'locally active' degrees of freedom that are not also
    'locally owned'). You need to pass a vector that has these elements as
    ghost entries.

Stacktrace:
-----------
#0  hprun: dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::operator()(unsigned int)
#1  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>::add(std::vector<unsigned int, std::allocator<unsigned int> > const&, std::vector<double, std::allocator<double> > const&)
#2  dealii-9.5.0-pre/lib/libdeal_II.g.so.9.5.0-pre: void dealii::AffineConstraints<double>::distribute_local_to_global<dealii::TrilinosWrappers::SparseMatrix, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host> >(dealii::FullMatrix<double> const&, dealii::Vector<double> const&, std::vector<unsigned int, std::allocator<unsigned int> > const&, dealii::TrilinosWrappers::SparseMatrix&, dealii::LinearAlgebra::distributed::Vector<double, dealii::MemorySpace::Host>&, bool, std::integral_constant<bool, false>) const
```
I'm puzzled. I'm tempted to remove the last commit in this PR, since everything worked before it. What do you think?
I think we indeed need to let the user control whether ghost elements are taken into account, so the current solution with the additional flag seems right to me.
I removed 1143a22 and the […]

(Branch force-pushed from 1143a22 to 0d962c8.)
Part of #14426.

`LinearAlgebra::distributed::Vector` has the neat feature of being able to switch between a ghosted and a non-ghosted state. However, this feature is not present in the `TrilinosWrappers::MPI::Vector` and `PETScWrappers::MPI::Vector` classes, as they need to be either ghosted or non-ghosted from their construction onwards.

I'm trying to write (matrix-based) code that works with any of these classes, but encountered issues around this discrepancy between the classes. I tried to move to using the `Utilities::MPI::Partitioner` class to reinit the third-party vectors, but with the current compatibility layer those were always initialized in a ghosted state (see #12864).

This PR generalizes the reinit interface with a new parameter `make_ghosted` so users can decide whether to work in a ghosted state or not. A dummy function is also added to the `LA::distributed::Vector` class.

I do not know if this is the right way to go, and I would like to know what you think. I could also just drop the `Partitioner` objects for my matrix-based implementations and use the classic `reinit` interfaces with `IndexSet`s.