
Coupling bulk and surface processes #10037

Open
bangerth opened this issue May 5, 2020 · 23 comments
@bangerth
Member

bangerth commented May 5, 2020

On my long-term to do list (as part of the Integrated Earth project) is to implement algorithms and data structures to allow coupling bulk and surface processes. For sequential computations, this shouldn't be too difficult, but for parallel computations the fact that surface and bulk meshes are independently partitioned is going to lead to some headache when one needs to exchange the relevant data from one mesh to another.

This issue serves as a catch-all for patches related to this project.

@bangerth
Member Author

bangerth commented May 5, 2020

@starki0815 mentioned in #10036 (comment) that he's been doing something like this already. @starki0815 -- can you explain what you had to implement to make this work?

@sebastian-stark
Contributor

Well, then we have a common interest :)

Last year I wrote a library based on deal.II which automates most of the assembly work for standard Galerkin finite element formulations (basically, it allows one to define the weak form and then assembles automatically). One of the main goals was to allow for different domain portions with different physics and, also, for interfaces with unknowns living on them that couple to the domain unknowns (surfaces/boundaries are considered a special case of interfaces). If you are interested, the basic theoretical ideas are compiled here. So, I have implemented this type of coupling, both sequentially and in distributed parallel, in a reasonably general way.

My basic approach is to have separate interface and domain meshes, with the coarse interface mesh being defined based on faces of coarse cells of the domain mesh. Both meshes are linked together by a "TriangulationSystem" (which essentially keeps track of how interface cells relate to domain cell faces). In parallel, the problem is that the interface mesh partitioning must somehow be consistent with the domain partitioning, because otherwise the assembly of the interface terms would be extremely complicated (and require a lot of communication). For this, I basically use a DistributedTriangulationBase, which I partition manually based on the partitioning of the domain mesh. The bottom line is that upon any refinement of the domain mesh, the interface triangulation must be rebuilt; and solution transfer can only happen through the domain cells underlying the interface cells (that is a part I haven't implemented yet, but which should be possible). For DoF handling I have a "DoFHandlerSystem" which combines a domain and an interface DoF handler (in my case the hp versions). The details of all this are documented in the doxygen-generated documentation in the doc folder of the repository https://github.com/starki0815/GalerkinTools.

The following might however be more elegant: (1) have a TriangulationSystem which only keeps a domain triangulation and has data structures defining interfaces and boundaries in terms of the cell faces (one could also think of having line structures defined in terms of cell edges in 3d as in some applications there are integrals over lines in the weak form). (2) have a DoFHandler, which allows for distribution of dofs not only on cells, but which can also distribute dofs on a collection of domain cell faces.

I am planning to develop this further (possibly, as part of a funded project) - so I'd be happy to cooperate on this topic.

@bangerth
Member Author

bangerth commented May 5, 2020

Nice, thanks for providing some links already! This will have to wait until after my semester is over, but then I'll take a look and we can collaborate on it!

@jppelteret
Member

Cool stuff!

@dpeschka

@luca-heltai and I are working on a project where we solve a free boundary problem using an energetic-variational approach, in which some dofs couple to dofs defined solely on the boundary. While the implementation of the mixed FE problem relies on GridGenerator::extract_boundary_mesh, we still needed to define the mapping of dofs and some aspects of the quadrature "by hand". Having this included in/provided by deal.II might be useful to facilitate bulk-interface coupling. I guess the parallelization issue is more involved.

In any case, I'm happy to contribute (I guess Luca as well :))

@bangerth
Member Author

@jperryhouts is also working on this general issue. Just making sure he's also tied into this discussion.

@marcfehling
Member

I would like to present some ideas we had in mind to implement this feature as generally as possible. Since this will require many separate functionalities, I think the best approach is to introduce some kind of namespace VolumeToSurface that holds the whole functionality, which may also be part of the GridTools namespace.

So far, we think that this namespace needs to contain the following:

  • A function extract_boundary_mesh similar to the current GridTools::extract_surface_mesh function. This function will create a surface triangulation based on the boundary faces of the coarse volume mesh. The refinement level structure of the volume mesh will be applied to the surface mesh. The volume and surface meshes can be of different Triangulation types, e.g., a p::d::Triangulation as the volume mesh and a p::s::Triangulation as the surface mesh. The return type will be a mapping between volume and surface cells, see the next list item.
  • A CellMap class that allows one to find the surface cell pendant of a volume face and vice versa. For this, we need two std::map objects to form a kind of bidirectional map from locally owned cells to their pendants, which do not need to live on the same processor. In case one of the meshes is refined and/or repartitioned, this CellMap should be updated automatically, ideally by attaching corresponding member functions to pre/post-refinement signals for both the volume and surface triangulation objects. For this reason, we would like to use CellId objects as cell identifiers, from which we can easily construct child or parent CellId objects without knowing their current owner. This CellMap will come in handy once data needs to be transferred between both meshes, but for this we will also need to store the owning subdomain of such a cell (see last list item). The internal std::map structures should look like this:
                              locally owned       locally owned, ghost, artificial
                              -------------       --------------------------------

surface_to_volume:           surface_CellId  -->  volume_CellId, face_id, subdomain_id

volume_to_surface:   volume_CellId, face_id  -->  surface_CellId, subdomain_id
  • A function transfer_refinement_flags that allows one to transfer the refinement indicators from the surface of the volume mesh to the surface mesh, and/or vice versa. This is necessary to maintain the 1:1 mapping of cells between the two triangulations. For this, we still need to find a way to ensure that refinement will actually happen in the same way on both meshes, i.e., after calling prepare_for_coarsening_and_refinement. Furthermore, for p::d::Triangulation objects it might happen that p4est overwrites some refinement flags set in deal.II, see Discrepancies in refinement between p4est and Triangulation #8647
  • Functions transfer_data_from_volume_to_surface_mesh and vice versa to actually transfer any kind of cell-wise data stored in a container between the two meshes. They will take a CellMap object to determine the involved cells, as well as the sending and receiving processors.
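A minimal sketch of the two-map bookkeeping described above, in plain C++ (CellIdKey, FaceIndex, and SubdomainId are hypothetical stand-ins for dealii::CellId, face indices, and subdomain ids; the real class would additionally hook into the refinement signals as described):

```cpp
#include <cstdint>
#include <map>
#include <utility>

// Hypothetical stand-ins for deal.II types (assumption: CellId is usable
// as a map key, which it is via its operator<).
using CellIdKey   = std::uint64_t;
using FaceIndex   = unsigned int;
using SubdomainId = unsigned int;

// Two std::map objects forming the bidirectional association between
// locally owned surface cells and the (volume cell, face) pairs they were
// extracted from. The stored subdomain id records the current owner of the
// partner cell, which may live on another processor.
struct CellMap
{
  struct VolumeSide
  {
    CellIdKey   volume_cell;
    FaceIndex   face;
    SubdomainId owner;
  };

  struct SurfaceSide
  {
    CellIdKey   surface_cell;
    SubdomainId owner;
  };

  std::map<CellIdKey, VolumeSide>                        surface_to_volume;
  std::map<std::pair<CellIdKey, FaceIndex>, SurfaceSide> volume_to_surface;

  void
  insert(const CellIdKey   surface_cell,
         const CellIdKey   volume_cell,
         const FaceIndex   face,
         const SubdomainId surface_owner,
         const SubdomainId volume_owner)
  {
    surface_to_volume[surface_cell]        = {volume_cell, face, volume_owner};
    volume_to_surface[{volume_cell, face}] = {surface_cell, surface_owner};
  }
};
```

Both lookup directions then stay consistent by construction, since every association is entered through the single insert() call.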

Further, we need a more general functionality that we think would be best suited to be either part of the GridTools namespace or a Triangulation class member (discussion in #11366):

  • A function get_subdomain_ids that allows one to identify the owners of cells represented by CellId objects.

These should be all the tools necessary to implement this feature for the codim = 1 case. One could also think of further generalizing this approach to other codim cases, but I think for most applications this should be sufficient.

Do you agree to our approach? Do you have any further ideas and suggestions? Any kind of feedback is welcome :)

@tjhei
Member

tjhei commented Dec 22, 2020

Do you agree to our approach?

Sounds sensible so far.

Do you have any further ideas and suggestions?

This functionality might also be very useful to do volume to volume coupling for problems like fluid-structure interaction, where you solve different PDEs on two domains attached to each other. I thought a neat way to implement this is to define a (p::shared) "interface" Triangulation between the two distributed volume meshes to compute the coupling terms. Alternatively, one would couple the two volume meshes directly.
I think the first option would be doable with your approach if I can hook up two Triangulations to your "surface mesh".

Not sure if this is useful feedback at this point but it might be worth keeping this in mind when designing the functionality.

@marcfehling
Member

marcfehling commented Dec 22, 2020

I forgot to mention in my previous post that we also need to somehow store how faces are oriented relative to each other.


This functionality might also be very useful to do volume to volume coupling for problems like fluid-structure interaction, where you solve different PDEs on two domains attached to each other. I thought a neat way to implement this is to define a (p::shared) "interface" Triangulation between the two distributed volume meshes to compute the coupling terms. Alternatively, one would couple the two volume meshes directly.

Volume-to-volume and volume-to-surface coupling are conceptually similar, since we have a 1-codimensional interface in both cases. So in the former case we would map (volume1_CellId, face_id) --> (volume2_CellId, face_id) instead.

In addition to an extract_boundary_mesh function for volume-surface coupling, in which we create the surface triangulation from scratch, we could add a function intersect_meshes that checks whether the coarse grids of two meshes have physically identical boundary faces (by checking vertex coordinates) and creates a similar mapping between the corresponding cells.
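The vertex-coordinate check could, very roughly, work by keying each face on its sorted, tolerance-rounded vertex coordinates. A plain-C++ sketch with hypothetical Point3/Face types (a real implementation would work on deal.II face iterators and would need a more careful floating-point tolerance strategy):

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <map>
#include <utility>
#include <vector>

// Hypothetical types: a 3d point and a quadrilateral face given by its
// four vertex coordinates.
using Point3 = std::array<double, 3>;
using Face   = std::array<Point3, 4>;

// Build a comparison key for a face: round each coordinate to a tolerance
// grid and sort the vertices, so that two physically identical faces with
// different vertex orderings produce the same key.
std::array<std::array<long long, 3>, 4>
face_key(const Face &face, const double tol = 1e-10)
{
  std::array<std::array<long long, 3>, 4> key;
  for (unsigned int v = 0; v < 4; ++v)
    for (unsigned int d = 0; d < 3; ++d)
      key[v][d] = std::llround(face[v][d] / tol);
  std::sort(key.begin(), key.end());
  return key;
}

// Pair up faces of two coarse meshes that coincide physically. Returns
// index pairs (i, j) with faces1[i] matching faces2[j].
std::vector<std::pair<unsigned int, unsigned int>>
intersect_faces(const std::vector<Face> &faces1, const std::vector<Face> &faces2)
{
  std::map<std::array<std::array<long long, 3>, 4>, unsigned int> lookup;
  for (unsigned int i = 0; i < faces1.size(); ++i)
    lookup[face_key(faces1[i])] = i;

  std::vector<std::pair<unsigned int, unsigned int>> matches;
  for (unsigned int j = 0; j < faces2.size(); ++j)
    {
      const auto it = lookup.find(face_key(faces2[j]));
      if (it != lookup.end())
        matches.emplace_back(it->second, j);
    }
  return matches;
}
```

Because the key sorts the vertices, the match is found even when the two meshes store the shared face with different vertex orderings or orientations.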

I need to think if it's possible to generalize the CellMap class and its interface for both cases, or if we have to introduce a separate class for this.

Thank you for sharing your idea. I'll keep that in mind!

@peterrum
Member

@marcfehling Thank you for outlining your (long-term) goals! It helps to put your additions in context.

Let me begin by summarizing and rephrasing your goals:

  1. Given a volume mesh and a surface mesh, you would like to keep them consistent during refinement/coarsening.

  2. After refinement, you would like to exchange data between the meshes (to me it is not completely clear what kind of data you would like to exchange, but I assume data at some quadrature points or dofs - the difference is not relevant, though, since the two cases only differ in whether an interpolation is needed).

These are two distinct tasks which can be handled separately (or maybe with the same implementation, as outlined below).

As @kronbichler mentioned, we have been working on an FSI code (non-matching; both the solid and the fluid mesh are of type p:d:T), in which we worked on point 2. I never had the chance and time to integrate our implementation into deal.II due to other duties (simplex), but nevertheless we would be highly interested in a flexible, clean solution.

In our case, we need to apply forces on the structure due to the pressure in the fluid, for which we need to evaluate the pressure in the fluid at points given by the structure. For this purpose, we take the following steps:

  • collect the quadrature points on the surface of the structure mesh

  • determine which process owns the point (a first guess of the owner can be obtained via bounding boxes, as done in step-70; in a second step we refine the information via p2p communication with the help of ConsensusAlgorithms) and set up the communication pattern

  • evaluate a function (the pressure in our case) and send the data back to the requesting process
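The bounding-box first guess from the second step might be sketched like this in plain C++ (BoundingBox2 and guess_point_owners are hypothetical stand-ins, restricted to 2d for brevity; the deal.II machinery works with BoundingBox<spacedim> objects gathered from all ranks):

```cpp
#include <array>
#include <vector>

// Hypothetical 2d stand-ins for deal.II's Point and BoundingBox classes.
using Point2 = std::array<double, 2>;
struct BoundingBox2
{
  Point2 lower, upper;
};

// Whether a point lies inside (or on the boundary of) a box.
bool
box_contains(const BoundingBox2 &box, const Point2 &p)
{
  for (unsigned int d = 0; d < 2; ++d)
    if (p[d] < box.lower[d] || p[d] > box.upper[d])
      return false;
  return true;
}

// First-guess owner lookup: for each evaluation point, collect all ranks
// whose bounding box contains it. Ambiguities from overlapping boxes are
// resolved afterwards via point-to-point communication (e.g. with
// ConsensusAlgorithms).
std::vector<std::vector<unsigned int>>
guess_point_owners(const std::vector<BoundingBox2> &boxes_per_rank,
                   const std::vector<Point2> &      points)
{
  std::vector<std::vector<unsigned int>> candidates(points.size());
  for (unsigned int p = 0; p < points.size(); ++p)
    for (unsigned int rank = 0; rank < boxes_per_rank.size(); ++rank)
      if (box_contains(boxes_per_rank[rank], points[p]))
        candidates[p].push_back(rank);
  return candidates;
}
```

Points falling into several boxes get several candidate owners; narrowing those down is exactly what the subsequent p2p/consensus step does.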

You can imagine that the interface looks like this (we store the communication pattern, since it does not change in our case):

template <int dim, int spacedim, typename T>
std::vector<T>
evaluate_function_on_quadrature_points_of_triangulation(
  const Triangulation<dim, spacedim> &             tria,
  const std::vector<Point<spacedim>> &             points,
  const std::function<T(const Point<spacedim> &)> &fu);

In principle, this approach should work for volume-volume coupling, volume-surface coupling, non-matching grids, and arbitrary refinements. You could even have different communicators for each domain.

In my opinion, it is possible to reduce parts of point 1) to point 2) without the need to handle cell ids, if you imagine that the evaluation points in this context are the centers of the surface cells. The following pseudo code shows what I mean:

#include <deal.II/base/mpi.h>

#include <deal.II/distributed/tria.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_tools.h>

using namespace dealii;

namespace dealii
{
  namespace GridGenerator
  {
    template <int dim, int spacedim>
    void
    extract_surface_mesh(const Triangulation<dim, spacedim> &tria_volume,
                         Triangulation<dim - 1, spacedim> &  tria_surface)
    {
      AssertThrow(false, ExcNotImplemented());
      (void)tria_volume;
      (void)tria_surface;
    }
  } // namespace GridGenerator
} // namespace dealii


template <int dim, int spacedim, typename T>
std::vector<T>
evaluate_function_on_quadrature_points_of_triangulation(
  const Triangulation<dim, spacedim> &             tria,
  const std::vector<Point<spacedim>> &             points,
  const std::function<T(const Point<spacedim> &)> &fu)
{
  // This function evaluates the function @p fu at arbitrary - even non local -
  // @p points on the triangulation @p tria. To determine the owning processes
  // of the points, bounding boxes and the consensus algorithms can be used.

  AssertThrow(false, ExcNotImplemented());
  (void)tria;
  (void)points;
  (void)fu;

  return std::vector<T>();
}

template <int dim, int spacedim>
void
copy_refinement_flags(const Triangulation<dim, spacedim> &tria_volume,
                      Triangulation<dim - 1, spacedim> &  tria_surface)
{
  // collect center of cells of surface mesh
  std::vector<Point<spacedim>> evaluation_points;
  for (const auto &cell : tria_surface.active_cell_iterators())
    if (cell->is_locally_owned())
      evaluation_points.push_back(cell->center(true));

  // determine if the cell of the volume mesh has been marked for refinement,
  // coarsening, or none of that
  std::vector<unsigned int> flags =
    evaluate_function_on_quadrature_points_of_triangulation<dim,
                                                            spacedim,
                                                            unsigned int>(
      tria_volume,
      evaluation_points,
      [&](const Point<spacedim> &p) -> unsigned int {
        const auto cell =
          GridTools::find_active_cell_around_point(tria_volume, p);

        if (cell->refine_flag_set())
          return 1;

        if (cell->coarsen_flag_set())
          return 2;

        return 0;
      });

  // use the flags to set the refinement/coarsening flags of the surface mesh
  auto flags_iterator = flags.begin();
  for (const auto &cell : tria_surface.active_cell_iterators())
    if (cell->is_locally_owned())
      switch (*(flags_iterator++))
        {
          case 0:
            break;
          case 1:
            cell->set_refine_flag();
            break;
          case 2:
            cell->set_coarsen_flag();
            break;
          default:
            AssertThrow(false, ExcNotImplemented());
        }
}

template <int dim, int spacedim>
void
mark_cells_for_refinement(const Triangulation<dim, spacedim> &tria)
{
  AssertThrow(false, ExcNotImplemented());
  (void)tria;
}

template <int dim, int spacedim>
void
test(const MPI_Comm &comm)
{
  // create coarse grid of volume mesh
  parallel::distributed::Triangulation<dim, spacedim> tria_volume(comm);
  GridGenerator::hyper_ball(tria_volume);

  // extract surface of coarse grid of volume mesh
  parallel::distributed::Triangulation<dim - 1, spacedim> tria_surface(comm);
  GridGenerator::extract_surface_mesh(tria_volume, tria_surface);

  while (true)
    {
      // prepare coarse mesh for refinement
      mark_cells_for_refinement(tria_volume);
      tria_volume.prepare_coarsening_and_refinement();

      // copy flags to surface mesh
      copy_refinement_flags(tria_volume, tria_surface);

      // refine volume and surface mesh
      tria_volume.execute_coarsening_and_refinement();
      tria_surface.execute_coarsening_and_refinement();
    }
}

int
main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  test<2, 2>(MPI_COMM_WORLD);
}

Obviously, this code is not tested (I also skipped some details, like the mapping or using a cache during find_active_cell_around_point), and I am not sure if it handles mesh smoothing correctly, but it might nevertheless provide a general approach to tackle your problem without the need to write too much code.

Let's find a solution that is general enough for different coupling approaches!

@bangerth
Member Author

bangerth commented Dec 22, 2020 via email

@bangerth
Member Author

@peterrum -- I understand that one can do what @marcfehling wants to do by just searching for points (though searching for points on 2d surfaces in 3d space is difficult -- the points tend to not actually lie on the surface). But I don't understand your objections: Surely it is more efficient if we can build the map between volume and surface cells once and then never have to search again. In fact, once the map is built, the rest of all of the algorithms can be done using P2P communication.

So can I ask again what specifically the objection to a function

  virtual
  std::vector<types::subdomain_id>
  Triangulation::get_owning_processor (const std::vector<CellId> &cells) const;

actually is? You don't have to use the function yourself, but it clearly points to a very efficient implementation of the things @marcfehling and I want to come up with when coupling volume and surface meshes. I don't disagree that there are other use cases (non-matching meshes, volume-to-volume coupling, etc) for which other algorithms would be better suited, but for our case I think that relying on P2P communication and pre-computed communication patterns is clearly the way to go...
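To illustrate the point: once such a function has returned the owner of every partner cell, assembling the per-destination message buffers is a purely local operation. A serial sketch in plain C++ (CellIdKey and group_by_owner are hypothetical stand-ins; the actual exchange could then go through something like Utilities::MPI::some_to_some()):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Hypothetical stand-ins for dealii::CellId and subdomain ids.
using CellIdKey   = std::uint64_t;
using SubdomainId = unsigned int;

// Given the precomputed map CellId -> owning processor, group per-cell
// payloads by destination rank. Once this pattern is built, no renewed
// search is needed: each refinement/data cycle only repeats the grouping
// and the point-to-point exchange.
template <typename T>
std::map<SubdomainId, std::vector<std::pair<CellIdKey, T>>>
group_by_owner(const std::map<CellIdKey, SubdomainId> &owner_of,
               const std::map<CellIdKey, T> &          payload)
{
  std::map<SubdomainId, std::vector<std::pair<CellIdKey, T>>> buffers;
  for (const auto &[cell, data] : payload)
    buffers[owner_of.at(cell)].emplace_back(cell, data);
  return buffers;
}
```

The resulting rank-to-buffer map has roughly the shape that a some_to_some-style exchange expects, so the communication step reduces to one collective of point-to-point messages.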

@sebastian-stark
Contributor

@marcfehling
Building on this as a first step seems reasonable to me. The implementation I mentioned above is conceptually quite similar, but not as general in that it only allows for the case that the partitioning of the surface mesh is consistent with that of the underlying domain cells. If I understand you correctly, you intend to allow, e.g., for a p::d::Triangulation for both the surface and the domain mesh - this would of course be great, although I fear that the algorithms are going to be quite complex.

@jperryhouts
Contributor

@sebastian-stark I started working on something similar back in August, and this didn't seem to be as difficult as I expected. If I remember correctly, the Utilities::MPI::some_to_some() function does exactly that -- transfers data from/to MPI processes based on a map object.

@sebastian-stark
Contributor

@jperryhouts
I was thinking more about the difficulty of efficiently figuring out who has to communicate with whom for large problems, rather than about doing the actual communication. But thinking about it again now, this doesn't seem to be too difficult either. Probably there just wasn't enough pressure on me to make it work at the time when I implemented it, so I chose the simpler approach :)

@sebastian-stark
Contributor

Trying to recall the issues and questions I came across when implementing something like that, the following things also came into my mind:

(1) It would be nice to make this work as well for internal interfaces between two domains. In this case, a difficulty is that two volume cells are adjacent to each interface cell, and these volume cells may be refined differently. What I am doing in my implementation is the following: (a) I generally make sure that the refinement of the interface cells follows that of the finer volume cells, (b) the interface mesh is generally defined based on faces of coarse volume cells, and this implicitly defines a "minus" and a "plus" side of the interface together with a direction of the interface normal, (c) there is a container of objects of type InterfaceCellDomainCells, with each object storing an interface cell, the adjacent volume cells, the refinement case at the interface, and the relevant subface_id in case the neighboring volume cells are refined differently (see here and here); this container essentially corresponds to the map surface_CellId --> volume_CellId, face_id, subdomain_id suggested by @marcfehling. To stay as close as possible to the approach of @marcfehling, one could generally follow the convention that the volume cell linked to an interface cell is always the one refined to the same degree as the interface cell. The information about the neighbor can then be figured out with the neighbor(), neighbor_of_neighbor(), and neighbor_of_coarser_neighbor() functions without having to store it in the map. One thing, however, needs to be kept track of in the map, I guess: one needs to know on which side ("minus" or "plus") of the interface cell the volume cell is located. I.e., the map would look like interface_CellId --> volume_CellId, face_id, subdomain_id, interface_side. The inverse map might then look like volume_CellId, face_id --> interface_CellId, subdomain_id, interface_side.
Another option would be to hold all the information about the neighborship relation in the maps: have a map interface_CellId, interface_side --> volume_CellId, face_id, subface_id, subdomain_id and the inverse map volume_CellId, face_id, subface_id --> interface_CellId, interface_side, subdomain_id, with both maps involving all the cells adjacent to the interface and not just those which are refined more.
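For illustration, the second option could be written down as map types like the following (a plain-C++ sketch with hypothetical names, not a proposed interface):

```cpp
#include <cstdint>
#include <map>
#include <tuple>
#include <utility>

// Hypothetical stand-ins for deal.II identifiers.
using CellIdKey    = std::uint64_t;
using FaceIndex    = unsigned int;
using SubfaceIndex = unsigned int;
using SubdomainId  = unsigned int;

// Which side of the (oriented) interface a volume cell sits on.
enum class InterfaceSide
{
  minus,
  plus
};

// Full neighborship information in the maps: both volume cells adjacent
// to an internal interface appear, including the subface index for the
// case that they are refined differently.
struct VolumeEntry
{
  CellIdKey    volume_cell;
  FaceIndex    face;
  SubfaceIndex subface;
  SubdomainId  owner;
};

struct InterfaceEntry
{
  CellIdKey     interface_cell;
  InterfaceSide side;
  SubdomainId   owner;
};

// interface_CellId, interface_side --> volume_CellId, face_id, subface_id, subdomain_id
using InterfaceToVolume =
  std::map<std::pair<CellIdKey, InterfaceSide>, VolumeEntry>;

// volume_CellId, face_id, subface_id --> interface_CellId, interface_side, subdomain_id
using VolumeToInterface =
  std::map<std::tuple<CellIdKey, FaceIndex, SubfaceIndex>, InterfaceEntry>;
```

Keeping the side and subface in the keys makes both maps total over all cells touching the interface, at the cost of storing redundant neighborship data that neighbor_of_neighbor() could otherwise recover.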

(2) I personally never came across the need to explicitly store the inverse map between volume cells and interface cells. @marcfehling, I'm curious whether you already have something particular in mind that you would like to do with this map?

(3) I had a hard time thinking about how face (or rather cell) orientation factors into all this. What I finally did is the following: extract the interface mesh without doing any vertex flips (extract_boundary_mesh(), in contrast, flips vertices to get consistent normal vectors). This produces an interface mesh with potentially inconsistent interface normals between neighboring interface cells, but the advantage is that face and interface cell quadrature points remain consistent, as does the child numbering on faces and interface cells. The inconsistent normals can easily be fixed by determining the interface normal not from the interface cell, but rather from the underlying volume cell on the "minus" side. The latter may also be used to properly set the cell_orientation flag in order to also get consistent normals for the interface mesh (although I don't know whether this in turn interferes with the arrangement of quadrature points and child indexing). This is related to the discussion in #7474 (comment). Line orientation may also be an issue for some cases: #7474 (comment).

@sebastian-stark
Contributor

sebastian-stark commented Dec 23, 2020

... and I forgot (4): I think one somehow has to make sure that all the information (dofs, etc.) related to the volume cell(s) adjacent to a surface (interface) cell is actually available on the processor owning the surface (interface) cell, and possibly vice versa. The most obvious way to ensure this is to appropriately adjust the assignment of ghost cells. Would p4est allow for an explicit specification of additional ghost cells?

@peterrum
Member

Dear @bangerth,

some comments to begin with:

  • I hope you know that your statement "The issue of coupling two volume meshes is trivial if their cells have matching faces" simplifies the issue quite a bit and makes assumptions which are not fulfilled in the normal case.
  • "So can I ask again what specifically the objection to a function" I would like to point out that I am not the only one who is not in favor of having this function in the (serial) Triangulation class. Besides the fact that the notion of an "owning process" does not fit into a serial triangulation, I am generally reluctant to add new functions to the Triangulation classes. As it is now, the Triangulation classes are the most complex part of the library (hardly maintainable) and the files are considered by many as "don't touch" files (just ask around). Our goal should be to simplify the classes and make dependencies clearer than they are now.
  • "You don't have to use the function yourself, but": I hope you know that I am not reviewing and approving/disapproving PRs depending on whether I would like to use them or not.

Anyhow, back to the topic of coupling solutions living on different meshes. In my statement above, I described an approach which works well in our case for FSI (to make matters transparent, I have made our development repo public: https://github.com/peterrum/nonmatching-communication - it is not very clean, but it was the best I could come up with in a couple of hours in May). As far as I can see, the approach might be extendable to many more use cases, e.g., to the coupling of bulk and surface meshes. Maybe I am wrong and maybe some details are missing, but I would love to see a unified coupling approach which fulfills the needs of many different applications, instead of an ad-hoc implementation (only targeting a single use case).

Of course, this would mean some discussions (since we all have - slightly - different requirements) and work on our side and yours (to get everything under the hood): but I think a good coupling infrastructure would be extremely useful and is definitely a feature that is missing in deal.II. To get back to your comment "You don't have to use the function yourself": this would be a feature we would want to use (and probably many of our users would, too). Off the top of my head, I can come up with at least three use cases where I would like to use it: 1) in our FSI code, 2) to transfer solutions between independently refined meshes (on one mesh we might solve the Navier-Stokes equations, on the other the level set equation, which only has to be refined close to the interface - as we would like to do in adaflo and MeltPoolDG), and 3) to clean up step-70.

@kronbichler
Member

kronbichler commented Dec 24, 2020

My objection to get_owning_processor(std::vector<CellId>&) would be that it only seems to address a rather narrow problem, which does not warrant extending the big Triangulation class. Sure, the version proposed in #11366 seems very efficient, likely more efficient than the alternative @peterrum and I were thinking about in terms of absolute run time. What we are trying to say is that it probably won't matter much when using a more general approach without p4est specifics that could reside in GridTools, compared to all other steps one does in a finite element program (we're talking about something taking 0.1 seconds instead of 0.05 seconds, compared to something else taking hundreds of seconds). More importantly, I do not yet see whether this function or the approach described in #10037 (comment) is extensible to the non-matching interface case, or even to the fully distributed Triangulation, where I am not sure the identification via the cell id would work for a surface on the one hand and a volume on the other.

I am going to admit that I am likely biased by the problems that we have solved already, which might not cover what others are interested in. So let me try to get some clarity on the various requirements, and ask the others to add on that concept:

  • The coupling approach described by @peterrum above targets the so-called partitioned coupling approach where one exchanges information along some right hand side, i.e., residual. For finite element algorithms, this ultimately boils down to evaluate the coupling terms at some quadrature points on all locally owned cells. Let us assume field 1 wants to compute something, i.e., multiply its test functions with a contribution coming from field 2. What the approach outlined by @peterrum does is to (a) figure out for each point which remote processes in the partition of field 2 have the point in the locally owned subdomain, (b) let the remote process find the cell and reference coordinates on the cell for that particular point, (c) evaluate the solution field 2 using the reference coordinates (which could eventually use FEPointEvaluation from FEPointEvaluation: Evaluating shape functions on arbitrary points of a cell #11046 when you aggregate all points on the same cell), and (d) send the result of the evaluation back to the requester from field 1. These steps (a)-(d) are of course generic, so one can also switch fields 1 and 2 and follow the same procedure. The important part is that the processor working at the boundary of field 1 does not need to know the solution vector of field 2 directly and its distribution of cells. It can get the information as a packed data of point information. One can remember the result of steps (a) and (b) in case there are several communications of the same structure. To me, the application described by the original post in this issue seems to fit into this conceptually.
  • The posts by @marcfehling and @sebastian-stark seem to aim for a more low-level solution, as the identification of the cell and face iterators on the other field suggests. This appears to be caused by a need to set up an FEValues or FEFaceValues object to evaluate some integrals, so the aim is to perform the computation completely on one of the two processes at the boundary. For partitioned solution approaches, I would think that FEValues is not necessary in terms of performance, as long as we spend some effort in general-purpose evaluators. Things might be different for monolithic schemes where one wants to explicitly build the linearization with off-diagonal terms in the sparsity pattern. Here, the direct evaluation on the remote side is not possible, and one instead needs to send something like a global dof index and value for all basis functions on field 2, to stay in the language of my previous bullet point. This is complicated for sure, and one might think of a fall-back solution as outlined above. But let's not do this just because we did not think thoroughly about the alternatives.

One might think that the unstructured lookup is very expensive, but as an HPC person I would claim that it can be made pretty efficient. Sure, the owner lookup does some heavy lifting, but we have that in the library (consensus algorithms). Additionally, it must call find_active_cell_around_point (or the variant for all active cells around a point), which can again be expensive. However, if we really care, then we can make this very fast by using an appropriate cache and letting the search go through many nearby points at the same time with Mapping::transform_points_real_to_unit_cell. Taken together, this fully unstructured/non-matching case is not all that different from what we do for particles, where we have made huge improvements thanks to many people, spurred by increased functionality and by the new tutorial programs. All of those algorithms should be scalable and run on 10k or more MPI ranks, without the need for a fully shared interface in interface-coupled problems or anything else global, so let us not reduce this to "no communication is always better" or the benefits of P2P communication in general, as I believe I do understand those topics. Again as an HPC person, I cannot make absolute claims regarding performance, so this general approach could be inferior to using more of the problem structure as suggested by @marcfehling's interface. In the interest of scarce resources, I would however advocate not spending considerable resources on two separate approaches without a thorough understanding of the mutual requirements and possibilities. Or maybe we have spent considerable resources on either side already, but then we should aim for a good user interface - maybe we can unify both cases, so our users only have to look up a single interface. We all know the challenge of teaching people more than one interface.

@sebastian-stark
Contributor

@kronbichler My use case is to evaluate boundary/interface integrals, which involve unknowns associated with the boundary/interface as well as unknowns associated with the adjacent volume cell within a monolithic solution approach. So, ultimately, I want to go over each cell of the boundary/interface mesh and assemble the interface terms. The most straightforward approach to make this possible is, in my opinion, what @marcfehling and @bangerth have suggested. I can see that things can in principle be done the way you suggest; and it would certainly be great to have a solution that is as general as possible. But it seems to me that adapting your approach to my use case would be a far bigger challenge compared to what @marcfehling has suggested, and I'm unable to judge whether we would have the resources to make this happen. But let's see what the use cases of others are and what they have to say.

@luca-heltai
Member

Let me contribute to the discussion with something that we have used in the past. The context here is that there is a volumetric mesh with its dof handler (c0_dh) and a boundary mesh with its codimension one dof handler (c1_dh). The same type of FE space (i.e., FE_Q(degree)) is constructed on both grids.

We extract the boundary mesh from the volumetric mesh and construct a coupling between the dofs on the two objects. When we need to compute coupling terms, life is a bit more complicated, since the quadrature formulas don't have the same ordering (c0_cells and c1_faces are not oriented in the same way). We make sure this is taken into account with the snippet at the end.

All of this is inefficient/ugly/and serial. But doable.

#ifndef dealii_dof_tools_coupling_h
#define dealii_dof_tools_coupling_h

#include <deal.II/base/function.h>

#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/fe/mapping.h>

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/grid_out.h>
#include <deal.II/grid/grid_tools.h>


DEAL_II_NAMESPACE_OPEN

namespace DoFTools
{
  template <int dim, int spacedim>
  std::vector<types::global_dof_index>
  extract_boundary_dof_mapping(
    const std::map<typename Triangulation<dim - 1, spacedim>::cell_iterator,
                   typename Triangulation<dim, spacedim>::face_iterator>
      &                                  c1_to_c0,
    const DoFHandler<dim, spacedim> &    c0_dh,
    const DoFHandler<dim - 1, spacedim> &c1_dh)
  {
    const auto &c0_fe = c0_dh.get_fe();
    const auto &c1_fe = c1_dh.get_fe();

    AssertDimension(c0_fe.dofs_per_face, c1_fe.dofs_per_cell);

    std::vector<types::global_dof_index> dof_indices_c0(c0_fe.dofs_per_face);
    std::vector<types::global_dof_index> dof_indices_c1(c1_fe.dofs_per_cell);

    std::map<types::global_dof_index, Point<spacedim>> c0_points;
    std::map<types::global_dof_index, Point<spacedim>> c1_points;

    DoFTools::map_dofs_to_support_points(MappingQGeneric<dim, spacedim>(1),
                                         c0_dh,
                                         c0_points);

    DoFTools::map_dofs_to_support_points(MappingQGeneric<dim - 1, spacedim>(1),
                                         c1_dh,
                                         c1_points);

    std::vector<types::global_dof_index> c1_to_c0_dofs(c1_dh.n_dofs());
    for (const auto &c1_cell : c1_dh.active_cell_iterators())
      {
        typename DoFHandler<dim, spacedim>::face_iterator c0_face(
          *c1_to_c0.at(c1_cell), &c0_dh);
        c1_cell->get_dof_indices(dof_indices_c1);
        c0_face->get_dof_indices(dof_indices_c0);

        for (unsigned int i = 0; i < c1_fe.dofs_per_cell; ++i)
          {
            bool found = false;
            for (unsigned int j = 0; j < c1_fe.dofs_per_cell; ++j)
              {
                if (c0_fe.face_system_to_component_index(i).first ==
                    c1_fe.system_to_component_index(j).first)
                  if (c0_points[dof_indices_c0[i]].distance(
                        c1_points[dof_indices_c1[j]]) < 1e-10)
                    {
                      c1_to_c0_dofs[dof_indices_c1[j]] = dof_indices_c0[i];
                      found                            = true;
                      break;
                    }
              }
            (void)found;
            Assert(found, ExcMessage("Dof indices cannot be matched."));
          }
      }
    return c1_to_c0_dofs;
  }
} // namespace DoFTools
DEAL_II_NAMESPACE_CLOSE
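A hypothetical example of how the returned dof mapping could then be used (plain C++ with std::vector standing in for deal.II vector types; `restrict_to_surface` is not part of the snippet above): copying the matched entries of a serial volume solution vector into a surface-space vector.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical usage of the dof mapping above: given
// c1_to_c0_dofs[k] = index of the volume dof matching surface dof k,
// pull the corresponding values out of a (serial) volume solution vector.
std::vector<double>
restrict_to_surface(const std::vector<double> &     volume_solution,
                    const std::vector<std::size_t> &c1_to_c0_dofs)
{
  std::vector<double> surface_solution(c1_to_c0_dofs.size());
  for (std::size_t k = 0; k < c1_to_c0_dofs.size(); ++k)
    surface_solution[k] = volume_solution[c1_to_c0_dofs[k]];
  return surface_solution;
}
```

The reverse direction (scattering surface data into volume dofs) works the same way with the loop inverted.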

We have used this in the past to solve coupled bulk/interface problems on matching meshes (with @dpeschka).

We initialize this as

  c1_to_c0 = GridGenerator::extract_boundary_mesh(triangulation, c1_tria);  
  
  // map volume mesh face -> codimension 1 mesh cell
  for (auto c1_cell : c1_tria.active_cell_iterators())
    c0_to_c1[c1_to_c0[c1_cell]] = c1_cell;

  // generate a mapping that maps codimension-1 cells
  // to codimension-0 cells and faces
  for (auto cell : triangulation.active_cell_iterators())
    for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
      if (cell->face(f)->at_boundary())
        {
          auto c1_cell                      = c0_to_c1[cell->face(f)];
          c1_to_c0_cells_and_faces[c1_cell] = {cell, f};
        }

  c1_to_c0_dofs =
    DoFTools::extract_boundary_dof_mapping(c1_to_c0, c0_dh, c1_dh);

The biggest difficulty here is that the orientation of the cells in the codimension one mesh is not guaranteed to be the same as that of the faces of the codimension zero mesh. The following is an example of how to account for this when computing integrals on matching meshes of different codimension. After initializing the FEValues objects on the two dof handlers, we do (very inefficiently) the following:

    for (unsigned int q0 = 0; q0 < n_q_points; ++q0)
      {
        unsigned int q1    = 0;
        bool         found = false;
        for (; q1 < n_q_points; ++q1)
          if (c0_qpoints[q0].distance(c1_qpoints[q1]) < 1e-10)
            {
              found = true;
              break;
            }
        Assert(found, ExcInternalError());
        const double factor = normals0[q0] * normals1[q1]; // check normal orientation

        // Assemble using quadrature index q1 on the codimension one mesh,
        // and quadrature index q0 on the codimension zero mesh.
      }

This works in serial, but it is admittedly slow and inefficient.
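Since the meshes match, the inner search above only ever recovers a permutation of the quadrature indices, so for non-adaptive assembly one could compute that permutation once per cell pair and reuse it. A self-contained sketch of that idea (`match_quadrature_points` is a hypothetical name; plain 2D points stand in for Point<spacedim>):

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <vector>

// Compute perm such that a[q0] matches b[perm[q0]] within tol; entries for
// unmatched points stay at size_t(-1). This is the permutation implicitly
// recomputed in the assembly loop above; it could instead be cached per cell.
std::vector<std::size_t>
match_quadrature_points(const std::vector<std::array<double, 2>> &a,
                        const std::vector<std::array<double, 2>> &b,
                        const double                              tol)
{
  std::vector<std::size_t> perm(a.size(), static_cast<std::size_t>(-1));
  for (std::size_t q0 = 0; q0 < a.size(); ++q0)
    for (std::size_t q1 = 0; q1 < b.size(); ++q1)
      if (std::hypot(a[q0][0] - b[q1][0], a[q0][1] - b[q1][1]) < tol)
        {
          perm[q0] = q1;
          break;
        }
  return perm;
}
```

The quadratic search then runs once during setup rather than once per assembly sweep.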

@bangerth
Member Author

I've had a conversation with Michele Bucelli (@michelebucelli) at the workshop in Hannover last week. It turns out that the lifex project has code for this already. From his email:

Here's the link to the lifex core repository: https://gitlab.com/lifex/lifex

The interface handler class and functions are in source/numerics/interface_handler.{hpp,cpp}, and examples are found in tests/multidomain_laplace, examples/multidomain_*

We talked about how this code could be made faster by using bounding boxes and the consensus algorithm. I primarily wanted to make sure the link to the lifex code is recorded here.

@luca-heltai
Member

There is also an open pull request for bulk-surface coupling: #15773
