
Allowing separated InterfaceKernels using periodic boundaries #24140

Open · ttruster opened this issue Apr 20, 2023 · 32 comments
Labels: C: Framework, T: task (An enhancement to the software)

@ttruster (Contributor)

Reason

Meshes written by --mesh-only that contain blocks split by BreakMeshByBlock cannot be used for physical analyses with InterfaceKernels in a subsequent input file that reads the mesh, because the element-neighbor connectivity information is missing. This limits the ability to do cohesive zone modeling on distributed meshes. Also, there is no mechanism for specifying element-neighbor pairs other than the pairs automatically detected through adjacency (shared nodes).

Design

Generalize Periodic Boundary Condition (PBC) input to allow weak and strong enforcement. Strong enforcement uses DOF constraint equations (current approach); weak enforcement would use surface integrals as part of the weak form of the boundary value problem.
Provide the names of the two topologically conforming (potentially separated) boundaries for weak enforcement by using the AddPeriodicBCAction, so that the pair is registered in the DOF map but does not get a constraint matrix.
Add a method onPeriodicBoundary within the threaded assembly loops, analogous to onInterface, which handles the boundary ID of an element that lacks an adjacent neighbor but does have a separated neighbor.
Utilize the existing topological_neighbor and inverse_map methods to handle reinitialization, and reuse existing InterfaceKernels for the physics (see the sketch below).
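As a rough illustration of what onPeriodicBoundary could do per face, a minimal sketch follows; it assumes a PeriodicBoundaries object holding the weakly enforced pairs and a point locator built ahead of time, and the function name and call site are illustrative rather than the actual MOOSE implementation:

#include "libmesh/elem.h"
#include "libmesh/mesh_base.h"
#include "libmesh/periodic_boundaries.h"
#include "libmesh/point_locator_base.h"

using namespace libMesh;

// Sketch only: look up the separated neighbor of an element side through the
// periodic-boundary pairs, instead of the (null) adjacent-neighbor pointer.
const Elem * find_separated_neighbor(const Elem & elem,
                                     unsigned int side,
                                     const MeshBase & mesh,
                                     const PointLocatorBase & locator,
                                     const PeriodicBoundaries & pbs)
{
  // An ordinary adjacent neighbor exists: this is not a separated-interface side.
  if (elem.neighbor_ptr(side))
    return nullptr;

  // topological_neighbor() consults the registered pairs to find the element
  // whose face conforms to this side after the constant offset is applied.
  return elem.topological_neighbor(side, mesh, locator, &pbs);
}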

Impact

Allows the PBC class to help solve interface problems such as cohesive zone modeling, and to treat periodic conforming meshes using interface kernels instead of nodal constraints, enabling Nitsche/DG/mortar/penalty treatments.

References

Originated from Discussion #22948 (reply in thread)
Requires enhancements to libMesh from PR libMesh/libmesh#3525

@ttruster (Contributor Author)

Reposting some text to start the discussion.

From working on interface methods for a while, my opinion is that there are three somewhat well-delineated categories:

  1. Continuous/conforming meshes and no interface; using multi-point constraints to tie blocks for separate parts or periodic conditions
  2. Conforming or adaptively refined meshes with hanging nodes that have Penalty/Nitsche/Lagrange-multiplier enforcement of weak continuity along the interface. Here, the corner nodes on the coarse mesh are conforming everywhere, even on external boundaries.
  3. Nonconforming interfaces, mesh tying, and contact/friction modeling as well as embedded mesh methods using Penalty/Nitsche/Mortar enforcement of the interface conditions.

These 3 categories increase in generality of problem class from top to bottom while also increasing in implementation/architecture complexity and projection errors. I've implemented all 3 categories before in my Matlab FEM codes and in FEAP. Within MOOSE, I see these 3 mapping onto Kernels, InterfaceKernels, and MortarConstraints respectively. So the middle category is a compromise in difficulty and generality. The difficulty is moderate since the quadrature point location on each side of the interface is identical once an offset vector (potentially zero) is introduced; I assume that idea works even for adaptively refined meshes and the InterfaceKernels for those. This means that no extra projection or copying of stateful materials from quadrature point locations on opposite sides is needed. So my suggested restriction is that the non-adjacent element-neighbor concept would only be permitted for two parent element faces that geometrically conform after a constant offset vector. This would allow existing InterfaceKernel classes to solve problems with "gaps" between the two surfaces, for example finite-thickness cohesive zone (CZM) interfaces in solid mechanics.
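To make the constant-offset restriction concrete, here is a minimal sketch, assuming the offset vector between the paired boundaries is known (zero for broken-in-place interfaces); this is illustrative, not the actual reinit code:

#include "libmesh/elem.h"
#include "libmesh/fe_map.h"
#include "libmesh/point.h"

using namespace libMesh;

// A physical quadrature point x_qp on the element side corresponds to
// x_qp + offset on the separated neighbor side, because the two boundaries
// are assumed to conform geometrically up to a constant translation.
Point neighbor_reference_point(const Elem & neighbor,
                               const Point & x_qp,
                               const Point & offset)
{
  const Point x_on_neighbor = x_qp + offset;

  // Invert the neighbor's geometric map so that the existing InterfaceKernel
  // reinit machinery can evaluate shape functions at the matching location.
  return FEMap::inverse_map(neighbor.dim(), &neighbor, x_on_neighbor);
}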

I finally got some time to work on this within the past week. The source code is here https://github.com/Truster-Research-Group/moose/tree/topological-neighbor-PBC and https://github.com/ttruster/libmesh/tree/weak-enforcement-PBC. Perhaps we can open an issue to continue the discussion of this? I had also heard that the discussion of stateful mortar constraints was continuing, and I noticed a new issue was started for that here: #24030.

Some of the discussions to have in this issue include:

  • Error checks to verify which variables are periodic and that they are attached to the interface kernel
  • A check of whether any PBCs exist in the model, performed at the top of onPeriodicBoundary
  • Devise a caching scheme in MOOSE (which seems better than libMesh) to record the order in which the PBC pairs will be integrated, so that the results from topological_neighbor can be stored instead of recomputed at each nonlinear iteration. Perhaps a table sorted by element and side could be used, enabling a binary search (see the sketch after this list)?
  • What happens if the two sides of an "interface" live on two processors? I suppose the algebraic ghosting is not too difficult to handle with the relationship managers, but how about other data structures such as the element-neighbor pairs and their offsets?
  • How would you differentiate a standard interface kernel from the separated interface kernels? The former requires only a single boundary argument whereas the latter conceptually requires two boundaries right?
  • Do we want to strictly enforce that all meshes resulting from BreakMeshByBlock be coupled with a PBC object? And if so, then should we improve the automation of inputting PBC pairs (or rather, can BreakMeshByBlock provide a list of all the boundary ID/names that it generated?)
  • Any other concerns with my crude implementation
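For the caching bullet above, a minimal sketch of one possibility (the container name and placement are purely illustrative): keep each element/side pair and its separated neighbor in a vector sorted by (element id, side), so lookups during assembly reduce to a binary search.

#include <algorithm>
#include <utility>
#include <vector>
#include "libmesh/id_types.h"

// Hypothetical cache entry: one separated element/neighbor pair per interface face.
struct PairedFace
{
  libMesh::dof_id_type elem_id;
  unsigned int side;
  libMesh::dof_id_type neighbor_id;
  unsigned int neighbor_side;
};

// The cache is sorted once by (elem_id, side), e.g. after the first assembly
// pass, so every later lookup is O(log n) instead of a point-locator search.
inline const PairedFace * find_paired_face(const std::vector<PairedFace> & cache,
                                           libMesh::dof_id_type elem_id,
                                           unsigned int side)
{
  const auto key = std::make_pair(elem_id, side);
  auto it = std::lower_bound(
      cache.begin(), cache.end(), key,
      [](const PairedFace & f, const std::pair<libMesh::dof_id_type, unsigned int> & k)
      { return std::make_pair(f.elem_id, f.side) < k; });

  if (it != cache.end() && it->elem_id == elem_id && it->side == side)
    return &*it;
  return nullptr; // not a separated-interface face
}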

@lindsayad (Member)

@ttruster this would resolve #21161 and #21154, wouldn't it? And then consequently make the hacks we have to do for BreakMeshByBlockGenerator in #24018 unnecessary?

@ttruster (Contributor Author) commented Apr 21, 2023

@lindsayad glancing over these, I believe this approach would (or at least has a good chance to) fix all three of them.

From #21161 I agree that the traditional theory of CZM is that the two material points that "talk to each other" (have force interaction) remain fixed until complete fracture/separation; this is the difference from contact mechanics with friction, where the interacting points change during sliding. The approach I'm proposing here, using the terminology of #21161, is to use the existing periodic boundaries system with a "weak-enforcement" flag to serve as a data container, or at least a means to look up the "fake neighbor" (after breaking the mesh, or just from two pairs of boundaries that were never broken).

I haven't dealt with distributed meshes yet, but I would hope that by removing the "hack" and using "supported" functions from libMesh such as topological_neighbor, the Ghosting system could then be used properly.

Whether the ex-neighbor information is stored when the mesh is broken in BreakMeshByBlock (that MeshGenerator should probably now be finalized to call nullify neighbors, or whatever the correct API is to decouple them), or stored after the first pass through the InterfaceKernel assembly loop (actually, the material property initialization phase would probably be the most logical place; it's the first call of the threaded loops, I assume), it would save run time not to re-compute the neighbor at each iteration, since it will never change. But we'd need a good idea of where to store that (the neighbor pointers on Elem * are out of the question because of the definition of being adjacent); instead of an unordered list of element-neighbor pairs, could it be sorted by the element-face ID to allow a binary search, or could it be stored in the same order that the thread loops will pass over the element-faces?

Because that's the drawback of this: while it makes the handling of CZM more robust, it also makes it "slower". Though I have no idea what fraction of time this is compared to solving Kd=F or computing K and F; maybe it doesn't "matter"...

@ttruster (Contributor Author)

I also forgot, I'm including in here the ability of the two sides to not be physically adjacent; the two boundaries just need to be offset by a constant vector. That's why to me this fits "logically" with periodic boundary conditions and why it motivates doing something "different", i.e. not just forcing the CZM stuff to fit within the existing InterfaceKernel (IK) system.

In short, the difference is:

  1. Classical IK are for coupling two different variables along an internal boundary/interface (single boundary), such that the total solution field is discontinuously interpolated along the interface but has weak continuity or other condition imposed
  2. Proposed IK is for coupling the same variable (or potentially two different ones) along two boundaries that topologically conform, such that the total solution field is discontinuously interpolated along the interface but has weak continuity or other condition imposed

Because the "result" (second phrase) is the same for both case, it's logical to me to use an InterfaceKernel object to perform the physical integration in both cases. The weak form terms appear identical for both cases; the issue is the discrete representation of the two cases is different.

@ttruster (Contributor Author)

One more idea, from looking over those 90+ comments at #12033, and based on my list of bullets: these are a "little" more complicated to describe in the input file because the user has to "know" the list of paired boundaries to mention in the Periodic BC (PBC) object. And then they need to add an InterfaceKernel (IK) object to the "element" side boundary.

Maybe we can write a composite action that writes both the PBC and the IK. Then the boundaries are only mentioned once. The downside is how to pass all the IK parameter values, since that action would potentially need to handle all possible types of IK. So we are probably stuck with just needing to add a list of the PBCs using the existing AddPeriodicBCAction.

Anyways, for my physical problems, I'll be generating my .i files using Matlab scripts from my code https://bitbucket.org/trusterresearchgroup/deiprogram which identifies the element-neighbor pairs both for adjacent and for separated interfaces in an originally contiguous periodic model.

@lindsayad (Member)

I think an action makes sense. We could leverage the existing cohesive zone action.

@ttruster (Contributor Author)

I can work on modifying the existing master action to create both the Interface Kernel and the periodic boundary.

I will also try out Distributed Meshes on my own.

Are we going to enforce that all meshes generated with BreakMeshByBlock will need a periodic boundary in order to assign an InterfaceKernel? That will mean that all current CZM input files need to be revised to include two boundary names. This means that backward compatibility is lost; but since the old way wasn't fully robust, is that an acceptable change?

@lindsayad (Member) commented Apr 24, 2023

is that an acceptable change?

Yes, absolutely. Allowing the application of BreakMeshByBlockGenerator to CZM is one of my biggest regrets in MOOSE. I would do anything to have it be more robust.

@lindsayad (Member)

You will be a hero to me if I can remove the hacks in #24018

@ttruster (Contributor Author)

@lindsayad and @roystgnr: Well I have some interesting information to report about the current version of the source code with these weak periodic boundaries. I'll have to be very specific, because I will need your help to know what in the rest of the code will need to change. If you want to see the files I can push them into my branch.

  1. I verified that my current executables pass the tests in https://github.com/idaholab/moose/blob/next/test/tests/mesh/splitting/tests.
  2. I made a version of bilinear_mixed.i from the TM/CZM test cases that has an 8x4 mesh and uses named interfaces (block0-block1), and split that mesh into 2 processors. The reason for this: with the automatic mesh splitting feature, the split line goes VERTICAL, putting half the model in each .cpr file. And, since the CZM interface is HORIZONTAL, that means that the element and neighbor of each pair end up in the same split file; I verified that by plotting the mesh and looking at the node numbering (there is no case where the neighbor and element from a pair are in different meshes; for any mesh where the pairs are separated, I get errors when running the analysis).
  3. The model from BreakMeshByBlockGenerator runs on distributed mesh both from t=0 and from recover; however, the computed physics is wrong because the opposing side is not found so the Interface Kernel doesn't compute anything (i.e. the disp_x variable is zero for all time)
  4. The model using this proposed periodic boundary object fails with a libmesh error message "... src/mesh/mesh_base.C, line 1458 ..." using tensor-mechanics-debug and starting at t=0
  5. The model using this proposed periodic boundary object SUCCEEDS using tensor-mechanics-opt from t=0, and the computed physics is identical to the serial mesh result (from either BreakMeshByBlockGenerator or the periodic case)
  6. Also, BOTH the debug and opt versions can recover from step t=2 and go to step t=4 or whatever future time and compute the right physics.

So in summary:

  • the opt version doesn't run some error check that is run in the debug version, which permits the periodic boundary case to run and do its job. So I got lucky and was able to get a sample working tonight!
  • for topological_neighbor to work, the split meshes must contain both the element and the neighbor of each pair. My guess is that for cases with more splits, and for the discontinuous meshes from BreakMeshByBlockGenerator, the pairs are not ghosted properly, so they don't get listed in the split files even though they are needed
  • My hypothesis: the reason the ghosts are not written is that the AddPeriodicBCAction is called in MOOSE not in the mesh phase but in the analysis phase, so the splitting algorithm doesn't notice the ghosting that the periodic boundary would be requesting.

Thoughts on how to get those ghosts written out?

@ttruster (Contributor Author)

Before I go to bed, two uninformed ideas for getting the ghosts written:

  1. have BreakMeshByBlockGenerator do something that makes a ghost listing for whatever it breaks up
  2. store the periodic boundary data somewhere in the MOOSE mesh object instead, so that the ghost dependencies needed for them would be registered

The second option makes sense to me, since right now I'm never checking that the periodic boundary condition is actually assigned to the variable that the InterfaceKernel is acting on; I'm only using it to look up boundary pairs.
Also, I assume mesh splitting uses Metis or a similar library, which might want to know which element-neighbor pairs need ghosting: at best to create a better partitioning, and at worst to make sure all the necessary pairs are written to the separate files.

@roystgnr (Contributor)

I've forgotten - what is MOOSE currently doing with PeriodicBoundary objects on a DistributedMesh? You're exactly right about the problem - DistributedMesh only knows how to delete remote elements and repartition/redistribute already-properly-ghosted elements; it doesn't yet know how to get improperly-deleted elements back if the required ghosting level is increased after the fact by something like the post facto addition of a new periodic boundary. But IIRC we hit this issue years ago, and although MOOSE was unable to make the obvious "add all periodic BCs before prepare_for_use()" fix, IIRC there was some workaround for it ... what was that workaround, and why would it not be working here too?

@lindsayad (Member)

When there are "late" geometric ghosting functors to be added like in the case of periodic boundaries, we tell the mesh to not delete any remote elements until the periodic boundaries have been added
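In libMesh terms, a minimal sketch of that workaround (assuming direct access to the MeshBase; the real timing and call sites in MOOSE are more involved than this):

#include "libmesh/ghosting_functor.h"
#include "libmesh/mesh_base.h"

using namespace libMesh;

void defer_distribution_until_periodic_bcs_added(MeshBase & mesh,
                                                 GhostingFunctor & periodic_ghosting)
{
  // Keep every element resident while the "late" ghosting requirements
  // (e.g. from periodic boundaries) are still unknown.
  mesh.allow_remote_element_removal(false);

  // ... mesh preparation happens here without deleting anything ...

  // Once the periodic pairs and their ghosting functor are registered,
  // remote elements can be deleted according to the complete requirements.
  mesh.add_ghosting_functor(periodic_ghosting);
  mesh.allow_remote_element_removal(true);
  mesh.delete_remote_elements();
}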

@lindsayad (Member)

So basically we keep the mesh serialized ... (although we probably don't do any checks for a pre-split mesh ...)

@roystgnr (Contributor)

That works (from a correctness standpoint, if not an efficiency one). So we'd just need to add another special case for this type of PeriodicBoundary adder?

@lindsayad (Member)

Yea although wouldn't this also use the AddPeriodicBCAction?

@ttruster (Contributor Author)

I quickly ran a test on the test/tests/bcs/periodic/periodic_bc_test.i file mentioned at https://mooseframework.inl.gov/source/actions/AddPeriodicBCAction.html, and with a split into two parts I get the same error message that "Periodic boundary neighbor not found", which is the same error from my tests of the interface kernels. So it appears that the existing periodic BC tests don't work with automatic mesh splitting either.

Would storing the periodic_boundaries data, or a warehouse of them, in the mesh rather than the DofMap allow for neighbors to be checked when the mesh is split? Like how a list of boundary objects is given in the mesh, but only some of them are assigned interface kernels or BCs or constraints? Similarly, you could have a list of periodic boundaries in the mesh, and only at analysis time decide whether to assign a variable to them (strong enforcement) or assign an interface kernel to them (weak enforcement).

@roystgnr (Contributor)

So it appears that the existing periodic BC tests don't work with automatic mesh splitting either.

You mean when the mesh is read back in from a CheckpointIO file?

I don't see anything disabling distributed or disabling restart testing on those periodic BC regression tests, and at least some of them are running transient and are large enough not to pull in periodic neighbors "by accident", so I'd expect to be failing exodiffs in the "Distributed mesh and recover" recipe if we had a problem there. Looking at checkpoint_io.C I see we're just writing all semilocal elements (which would include all periodic neighbors) in the distributed mesh case, and we're writing everything the ghosting functors tell us we need (which should also include all periodic neighbors) in the serialized/Replicated mesh case. And then at read time I wouldn't expect to need anything except for the same "disable deleting remote elements until we've definitely redefined 'remote' correctly" workaround.

@lindsayad (Member)

Yea some clarification on what you mean by "automatic mesh splitting" would be helpful

@ttruster (Contributor Author)

Using my "next" branch of compiled MOOSE with the test case above that I mention:

(moose_docs) ttruster@designstar: /_Prog_Repo/MOOSE/mooseCLMI/test/tests/bcs/periodic$ ../../../moose_test-opt -i periodic_bc_test.i --split-mesh 2 --split-file two Mesh/parallel_type=replicated
(moose_docs) ttruster@designstar: /_Prog_Repo/MOOSE/mooseCLMI/test/tests/bcs/periodic$ mpirun -np 2 ../../../moose_test-opt -i periodic_bc_test.i --use-split --split-file two

Then the error message I receive is:
*** ERROR ***
Periodic boundary neighbor not found
[0] ../src/base/periodic_boundaries.C, line 109, compiled Apr 20 2023 at 23:00:33

I receive the same message when calling moose_test-debug.

I don't see any tests in that folder "bcs/periodic" using distributed meshes, and I don't see any files in the "mesh/splitting" folder that use PBC. Is there another place to look?

@lindsayad (Member)

Yea this is the case I referred to in #24140 (comment). If you load a pre-split mesh, which did not have any ghosting functors from periodic bcs attached to it at the time that the mesh was prepared before writing out, then we don't do any communication to recover the elements that are needed. If you do single step with --distributed-mesh then everything works (and this is why we can do distributed recover in CIVET)

@ttruster (Contributor Author)

Okay, so I reran both periodic_bc_test.i and my test cases without doing splitting and just adding --distributed-mesh at the end of mpirun (note, I couldn't find that option anywhere on the MOOSE website documentation except on https://mooseframework.inl.gov/moose/application_usage/command_line_usage.html. It probably belongs on https://mooseframework.inl.gov/syntax/Mesh/index.html#replicated-and-distributed-mesh.)

They both run with opt version and can also be recovered.

So if we are not worried about enabling pre-split meshes and periodic BC together right now, then the current approach seems to pass tests. Namely we have addressed my 4th bullet at the top "What happens if the two sides of an "interface" live on two processors?"

For debug mode, the error message comes from libMesh in src/mesh/mesh_base.C at the line
libmesh_assert(!Threads::in_threads);
I am guessing that happens from my call to
*(_mesh.getMesh().sub_point_locator())
Is there another safer way to get a point locator?
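One pattern I could try (sketch only; I'm not sure where the right place in MOOSE would be to hold this) is to construct the point locator once on the main thread before the threaded loops start, and have the threaded code only use it:

#include <memory>
#include "libmesh/mesh_base.h"
#include "libmesh/point_locator_base.h"

using namespace libMesh;

struct SeparatedInterfaceHelper
{
  // Built once, on the main thread, before the threaded assembly loops begin.
  std::unique_ptr<PointLocatorBase> _locator;

  explicit SeparatedInterfaceHelper(const MeshBase & mesh)
    : _locator(mesh.sub_point_locator())
  {}

  // Threaded code only uses the pre-built locator; it never constructs one,
  // which is what the libmesh_assert(!Threads::in_threads) check guards against.
  const PointLocatorBase & locator() const { return *_locator; }
};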

@ttruster (Contributor Author)

What is the difference if I instead use _point_locator(_fe_problem.mesh().getPointLocator())?
I compiled the code with that option instead, but the error message is still the same.

@hugary1995 (Contributor)

I wasn't aware of the --distributed-mesh option either. Does that option automatically split and distribute the mesh according to the number of available processors? If so, then I guess that means we can use CZM with distributed mesh using Tim's fix, and of course we can happily accept the overhead from re-splitting the mesh every run.

@lindsayad (Member)

@hugary1995 you didn't know about --distributed-mesh?? That surprises me. --distributed-mesh makes it so that the mesh class is DistributedMesh as opposed to ReplicatedMesh. And delete_remote_elements() which is called from prepare_for_use() actually deletes elements (unless we've asked that elements not be deleted). In practice when you run with periodic BCs and you've asked for --distributed-mesh the mesh will not be "distributed" (remote elements deleted) until right before we run FEProblemBase::init which is about the last task before we start executing the executioner. So you will have serial mesh memory overhead until that point (but not at the point that you allocate all your GMRES vectors etc.)

@hugary1995 (Contributor)

I've been pre-splitting my large meshes since my day 1 of MOOSE :). As Tim said, we probably should mention the existence of that option in the split-mesh documentation page.

Anyways,

And delete_remote_elements() which is called from prepare_for_use() actually deletes elements

is exactly what I need to know. So yeah, I'm pretty sure for all of our problems, there will be enough memory to hold the serial mesh prior to solving the system.

@ttruster (Contributor Author)

So our semester is finally ending, and I was also working on interface kernels with coupled scalar variables with @ambehnam. That got me ready to post a wrap-up for this issue. The one-line request that I'm seeking an answer for is at the bottom.

With the question of distributed vs serialized meshes "addressed", I have collected the comments above and made a list of a dozen tasks to do for this development, and I am ready to open it as a pull request except to settle on one point I raised above:

Holding the data for the weak-enforced PBC within the dof-map as it currently is, versus defining some new container in Moose-mesh or libmesh called "paired-boundaries" that would be a pair of pointers to boundaries and a cached list of the topological neighbors.
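To make that second option concrete, here is roughly the kind of record I have in mind (names and layout are hypothetical):

#include <vector>
#include "libmesh/id_types.h"
#include "libmesh/point.h"

// Hypothetical "paired boundary" record that a generator such as
// BreakMeshByBlock could emit: the two boundary ids, the constant offset
// between them, and a cached list of the conforming element/side pairs.
struct PairedBoundary
{
  libMesh::boundary_id_type primary;
  libMesh::boundary_id_type secondary;
  libMesh::Point offset; // zero for interfaces broken in place

  struct FacePair
  {
    libMesh::dof_id_type elem_id;
    unsigned int side;
    libMesh::dof_id_type neighbor_id;
    unsigned int neighbor_side;
  };
  std::vector<FacePair> face_pairs;
};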

The benefits of having a paired-boundaries container handled up in the MeshGenerator portion of the code are that:

  • Paired boundaries can be defined at mesh generation time and used for various purposes at analysis time (e.g. passing the pair to the AddPeriodicBCAction or attaching it to an interface kernel or doing something else)
  • The caching of element-neighbor pairs would logically happen at this stage
  • During the BreakMeshByBlock call, it could generate all the paired-boundary objects that it is splitting
  • This increases the chances of allowing CZM interfaces as well as PBC to be functional on pre-split distributed meshes by having the ghost information

However, the drawbacks are:

  • I have no idea how much effort/time this would take to create and "stabilize or make robust" such a container in both serial and parallel environments across the large array of MOOSE apps in existence
  • I don't have much time on my own for learning how to add those connections
  • Currently, I don't think Exodus files can store a paired-boundary object, so it would need to be listed in the input file if the mesh is reloaded.

So if we agree that the drawbacks outweigh the benefits of adding a paired-boundary container, then I'll move forward with opening a pull-request to begin developments within the AddPeriodicBCAction and weak-enforcement approach.

@lindsayad (Member)

@roystgnr is more qualified to give his opinions on the "vs DofMap" piece.

  • Currently, I don't think Exodus files can store a paired-boundary object, so it would need to be listed in the input file if the mesh is reloaded

We would handle this using MOOSE restartable data if implemented at the MOOSE level

@roystgnr (Contributor)

I don't honestly see the drawbacks of keeping the pairing connections in DofMap. In DofMap, MOOSE can access it because MOOSE knows about libMesh, so MOOSE can do anything. In MOOSE, libMesh can't access it because libMesh doesn't know about MOOSE, so you'd better make sure you also define a custom GhostingFunctor if you want to be able to repartition a distributed mesh without losing connected elements.
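For anyone unfamiliar, the skeleton of such a functor looks roughly like this (skeleton only; the actual coupling logic for the paired faces would go in the body):

#include "libmesh/elem.h"
#include "libmesh/ghosting_functor.h"
#include "libmesh/libmesh_common.h"

using namespace libMesh;

// Sketch of a geometric ghosting functor that would keep the separated
// neighbors of interface elements available across repartitioning.
class PairedBoundaryGhosting : public GhostingFunctor
{
public:
  void operator() (const MeshBase::const_element_iterator & range_begin,
                   const MeshBase::const_element_iterator & range_end,
                   processor_id_type p,
                   map_type & coupled_elements) override
  {
    for (auto it = range_begin; it != range_end; ++it)
    {
      const Elem * elem = *it;
      // The real functor would look up elem's separated neighbor(s) from the
      // paired-boundary data and, for any neighbor not owned by processor p,
      // request it via coupled_elements.emplace(neighbor, nullptr);
      libmesh_ignore(elem, p, coupled_elements);
    }
  }
};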

@lindsayad (Member)

I'm all for putting this as far upstream as possible 👍

@ttruster (Contributor Author) commented May 16, 2023

Sounds good, we will keep with the DofMap approach as planned.

This is my first time doing this:
Since this requires libMesh changes in PR libMesh/libmesh#3525, can I open a MOOSE PR for this yet, or does that dependent PR need to be merged first and the change passed down into the MOOSE submodule? (It is not ready to be merged yet; I have issues to fix.)

As a related question, I had previously been able to change .gitmodules to point to my repo and execute update_and_rebuild_libmesh.sh --skip-submodule-update to avoid grabbing the framework standard one. However, the most recent time I tried this, I wasn't successful and lost my changes. So I have manually changed the source and include files in the libmesh directory of MOOSE. Is there a "proper" way to declare that different linkage to the submodule, and should that be done within the opened MOOSE PR so that the test engines will all run?

@lindsayad (Member)

It's easiest to get the libmesh PR merged first. Alternatively, if you really want to see MOOSE testing before the libMesh PR is merged, @roystgnr or I could potentially push your fork's branch to libmesh/libmesh, and then you could have that libmesh update hash in your MOOSE PR. It's possible you could keep it in your own fork and update .gitmodules in your MOOSE PR, but I'd have to check and see what we do in our civet recipes.

update_and_rebuild_libmesh.sh --skip-submodule-update should work ... maybe there was a typo? Usually when I'm developing in libMesh for MOOSE, I just

cd libmesh/build
make -j128
make install

and then carry on in MOOSE without running the full-blown update_and_rebuild script
