
Move p:d:T::has_hanging_nodes() to DistributedTriangulationBase #13478

Merged
1 commit merged into dealii:master on Mar 6, 2022

Conversation

@peterrum (Member) commented Mar 2, 2022:

... since the logic is the same as for p:f:T.

@kronbichler (Member) left a comment:

OK apart from some small documentation updates.

* Return true if the triangulation has hanging nodes.
*
* In the context of parallel distributed triangulations, every
* processor stores only that part of the triangulation it locally owns.

Pre-existing, but I think proper English grammar requires spelling this as:

Suggested change
* processor stores only that part of the triangulation it locally owns.
* processor stores only that part of the triangulation it owns locally.

Comment on lines 472 to 473
* However, it also stores the entire coarse mesh, and to guarantee the
* 2:1 relationship between cells, this may mean that there are hanging

I think this needs to be modified because fullydistributed::Triangulation does not store the whole coarse mesh. I would phrase this to say that we store a coarse mesh that, in general, covers a larger part of the computational domain than the locally owned or ghosted cells.
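
For illustration, here is a sketch of how the declaration and its documentation might read after both suggestions; the exact wording and the surrounding declaration are assumptions, not the text that was eventually merged:

/**
 * Return true if the triangulation has hanging nodes.
 *
 * In the context of parallel distributed triangulations, every
 * processor stores only that part of the triangulation it owns locally.
 * However, it also stores a coarse mesh that in general covers a larger
 * part of the computational domain than the locally owned and ghost
 * cells. To guarantee the 2:1 relationship between cells, hanging nodes
 * may therefore exist on parts of the mesh a given process does not
 * own, which is why the result is communicated across all processes.
 */
virtual bool has_hanging_nodes() const override;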

Comment on lines 748 to 749
return 0 < Utilities::MPI::max(have_coarser_cell ? 1 : 0,
                               this->mpi_communicator);

I think this gets easier to read if you move the constant part of the comparison to the right-hand side:

Suggested change
return 0 < Utilities::MPI::max(have_coarser_cell ? 1 : 0,
                               this->mpi_communicator);
return Utilities::MPI::max(have_coarser_cell ? 1 : 0,
                           this->mpi_communicator) != 0;
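
For context, a minimal sketch of what the shared implementation in DistributedTriangulationBase could look like with this suggestion applied; the way have_coarser_cell is computed here is an illustration under assumed deal.II conventions, not necessarily the exact library source:

template <int dim, int spacedim>
bool
DistributedTriangulationBase<dim, spacedim>::has_hanging_nodes() const
{
  if (this->n_global_levels() <= 1)
    return false; // without refined cells there can be no hanging nodes

  // If this process owns an active cell on a level coarser than the
  // globally finest level, then active cells of different refinement
  // levels meet somewhere, i.e., the mesh has at least one hanging node.
  bool have_coarser_cell = false;
  for (const auto &cell : this->active_cell_iterators())
    if (cell->is_locally_owned() &&
        cell->level() + 1 < static_cast<int>(this->n_global_levels()))
      {
        have_coarser_cell = true;
        break;
      }

  // A hanging node may live on another process even if none was found
  // locally, so combine the per-process results over the communicator.
  return Utilities::MPI::max(have_coarser_cell ? 1 : 0,
                             this->mpi_communicator) != 0;
}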


this->mpi_communicator);
}



Use one more empty line

@kronbichler (Member) commented:

/rebuild

@bangerth (Member) left a comment:

Nice. OK with the changes suggested by @kronbichler.

@peterrum (Member, Author) commented Mar 6, 2022:

@kronbichler I have made the changes!

@bangerth merged commit be5704d into dealii:master on Mar 6, 2022.