Move p:d:T::has_hanging_nodes() to DistributedTriangulationBase #13478
Conversation
OK apart from some small documentation updates.
* Return true if the triangulation has hanging nodes.
*
* In the context of parallel distributed triangulations, every
* processor stores only that part of the triangulation it locally owns.
Pre-existing, but I think proper English grammar needs to spell this as
- * processor stores only that part of the triangulation it locally owns.
+ * processor stores only that part of the triangulation it owns locally.
* However, it also stores the entire coarse mesh, and to guarantee the
* 2:1 relationship between cells, this may mean that there are hanging
I think this needs to be modified, because fullydistributed::Triangulation does not store the whole coarse mesh. I would phrase it to say that we store a coarse mesh that, in general, covers a larger part of the computational domain than the locally owned or ghosted cells.
source/distributed/tria_base.cc
Outdated
return 0 < Utilities::MPI::max(have_coarser_cell ? 1 : 0,
                               this->mpi_communicator);
I think this gets easier to read if you move the constant part of the comparison to the right-hand side:
- return 0 < Utilities::MPI::max(have_coarser_cell ? 1 : 0,
-                                this->mpi_communicator);
+ return Utilities::MPI::max(have_coarser_cell ? 1 : 0,
+                            this->mpi_communicator) != 0;
  this->mpi_communicator);
}

Use one more empty line
/rebuild
Nice. OK with the changes suggested by @kronbichler .
b5efb9f to bab3b0f
@kronbichler I have made the changes!
... since the logic is the same as for p:f:T.