
[DRAFT] Towards Multimaterial Simulations #932

Draft · davschneller wants to merge 92 commits into master
Conversation

@davschneller (Contributor) commented Aug 29, 2023

This PR is meant as a suggestion and hopefully also as a tracking issue; it's definitely not ready for review yet. (The first drafts for it started rather shortly after #829 was completely done.) Hopefully, it will be done in some months. But if we don't merge it, that's fine as well.

It begins the work on multimaterial simulations, but it doesn't compile yet, and next to nothing has been tested. If it works even remotely, though, we may also get rid of re-compiling SeisSol for each new equation, as a side effect.

What it does, in short: each cell gets a configuration. Currently (this can still change), the configuration is determined by the group of an element, and the configurations are specified in a file of their own. Right now, that file is supposed to have the following YAML format:

- group: 0
  material: elastic
  precision: float                  # precision: either double or float, soon maybe also others?
  order: 4                          # polynomial order
  plasticity: true                  # right now, only true and false. Maybe different types of plasticity soon-ish?
  model: elastic-parameters.yaml    # each group may now have a material file on its own
- group: 1
  material: viscoelastic-3          # here, 3 is the number of mechanisms ... TODO: add centralFreq here?
  precision: double
  order: 6
  plasticity: false
  model: viscoelastic-parameters.yaml

And so on. The PR adds the following features (most of them untested so far, unfortunately):

  • The configurations need to be explicitly specified at compile time, as they succeed the current preprocessor variables (cf. the file src/Common/cellconfig.hpp). There's more template metaprogramming in src/Common by now. A configuration itself is an empty struct with a bunch of template arguments (i.e. material, precision (real type), order, and plasticity) that is only used to select the appropriate types, using std::visit or some auto-parametrized lambda. It's also shorter than writing out four different arguments for each class (or using a preprocessor define to hide them). Thus, we can always just write template<typename Config> class SomeKernel{...};, and then use typename Config::MaterialT, typename Config::RealT, Config::ConvergenceOrder, and Config::Plasticity to access the arguments (see the sketch after this list).
  • Also, everything needs to be templated with a single Config type.
  • Material classes now have a bit of meta information attached to them, including (but not limited to) the number of quantities, a textual name, etc.
  • The parameter DB has been rewritten to support querying multiple different materials at once.
  • The kernels in the TimeCluster class have been moved out of it and are now placed in the WaveProp folder; the TimeClusters retain all of their actor logic.
  • All symlinks have been removed from the kernels in the Equations folder.
  • The Yateto kernels are now templated. (The current solution is not ideal, however, as we run Yateto multiple times, once for each configuration.)
  • A preliminary (and rather untested) neighbor time-integral kernel has been added that supports different orders and precisions, though maybe it would be best to add that to Yateto? Right now, it's only a multiplication with a constant. The neighbor interaction currently works as follows (again, everything is open to change): we convert the neighboring material to our own; that conversion has to be defined somehow, i.e. we need to convert all material types into one another, including anisotropic<->viscoelastic etc. Then, during neighbor time integration, we convert the neighboring cell data to the format of the local cell, including the adjustment of quantities or order, as mentioned above. Then, we may apply the neighbor kernel without a change. Does that make sense?
  • The LTSTree has, for now, been encapsulated by a class named LTSForest which, in effect, spawns one LtsTree for each configuration and goes over them in a visitor-like pattern, i.e. we need a ton of lambdas with auto parameters (see the second sketch after this list). For some data (like CellLocalInformation), that may not be necessary... We'll see.
  • Right now, the plan is to have one TimeCluster per configuration; then all elements within a time cluster are still exactly identical in type. But maybe it's better to group them all together?
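To make the configuration mechanism from the first bullet point concrete, here is a minimal sketch; the actual definitions in src/Common/cellconfig.hpp may differ, and ElasticMaterial/SomeKernel are just placeholder names:

```cpp
// Minimal sketch of the compile-time configuration idea described above.
// The actual code in src/Common/cellconfig.hpp may differ.
#include <cstddef>

struct ElasticMaterial { /* placeholder material class */ };

template <typename MaterialP, typename RealP, std::size_t OrderP, bool PlasticityP>
struct CellConfig {
  using MaterialT = MaterialP;
  using RealT = RealP;
  static constexpr std::size_t ConvergenceOrder = OrderP;
  static constexpr bool Plasticity = PlasticityP;
};

// Every class takes only a single Config parameter instead of four:
template <typename Config>
class SomeKernel {
  using RealT = typename Config::RealT;
  RealT buffer[Config::ConvergenceOrder]; // example use of the parameters
};

// One such type per group in the YAML file above, e.g. group 0:
using ElasticOrder4Float = CellConfig<ElasticMaterial, float, 4, true>;
SomeKernel<ElasticOrder4Float> kernel;
```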
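And a sketch of the LTSForest visitor idea from the second-to-last bullet point; the real interface may look different, and LtsTree is reduced to a stub here:

```cpp
// Sketch of the LTSForest idea: one LtsTree per configuration, visited by a
// generic lambda. The real interface may look different.
#include <tuple>

template <typename Config>
struct LtsTree { /* per-configuration cell storage */ };

template <typename... Configs>
struct LtsForest {
  std::tuple<LtsTree<Configs>...> trees;

  // Calls the visitor once per configuration-specific tree.
  template <typename F>
  void visit(F&& visitor) {
    std::apply([&](auto&... tree) { (visitor(tree), ...); }, trees);
  }
};

// Usage: the auto parameter makes one lambda work for every configuration.
// forest.visit([](auto& tree) { /* configuration-specific work */ });
```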

The following is currently in progress:

  • Templating everything, and adding prototype definitions where needed. Currently, I use a (in my opinion) sub-optimal approach here: define a function which simply takes templated objects, i.e. create an empty function with Kernel<Config1>, Kernel<Config2>, ... etc. as arguments, so that each Kernel<Config> gets instantiated (see the sketch after this list). I'm not yet certain that it will work, and it's not too nice even if it does. Alternatively, we could maybe use std::visit.
  • Rewrite LtsLayout (and, partially, MemoryManager) to support the new structures. Heavily WIP, as it's kind of big. The LtsLut is currently being restructured to use a visitor pattern when calling the lookup function.
  • make everything compile... (which may take a bit)
  • cleanup
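A minimal sketch of the instantiation trick mentioned in the first bullet point; all names here (Kernel, Config1, Config2, instantiateAll) are placeholders:

```cpp
// Sketch of the "empty function" approach described above: the parameter
// types force the compiler to instantiate Kernel<Config1>, Kernel<Config2>.
template <typename Config>
class Kernel { /* ... */ };

struct Config1 {};
struct Config2 {};

void instantiateAll(Kernel<Config1>, Kernel<Config2>) {}

// Note: explicit instantiation would be the more conventional alternative,
// and it also instantiates all member functions:
template class Kernel<Config1>;
template class Kernel<Config2>;
```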

The following will need to be done:

  • Figure out how to do the memory management properly. Most likely, it would be best to have a layered memory storage, i.e. to give parameters like LayeredStorage<ConfigLayer, IndexedLayer, InteriorGhostCopyLayer>, and then to automatically generate types for all relevant configurations and their combinations with the ghost, copy, and interior layers (can mneme do that?), instead of relying on masking the respective parts; a rough sketch follows after this list. If we have that, we could also support different dynamic rupture models in a single simulation (again?), if that's needed. Generally, can mneme maybe help us in this direction?
  • Proper neighbor interaction—what is even needed here?
  • Maybe: basic support for more precisions than just double and float—would that make sense? (e.g. bfloat16, half, float128 etc.)
  • Test (which may involve writing many tests: everything should still work as before)
  • Discuss the parameter format, cf. Moving away from Fortran namelists - A proposal #433 . How to include the material groups?
  • IO and checkpointing (future issue?).
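For the layered-storage bullet point above, a very rough sketch of the type structure; nothing here is implemented, and all member names are purely illustrative:

```cpp
// Purely illustrative sketch of a layered storage: the cross product of
// configurations and ghost/copy/interior layers. Nothing here is implemented,
// and mneme may offer a better abstraction.
#include <array>
#include <tuple>
#include <vector>

enum class HaloLayer { Ghost = 0, Copy = 1, Interior = 2 };

template <typename Config>
struct ConfigStorage {
  // One buffer per halo layer; a real implementation would hold typed,
  // per-variable allocations instead of raw bytes.
  std::array<std::vector<unsigned char>, 3> layers;
};

template <typename... Configs>
struct LayeredStorage {
  std::tuple<ConfigStorage<Configs>...> perConfig;
};
```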

Anyways, I know this PR is far too big; so I offer to split it into multiple parts, or to open up a development branch which can also be used to work on the rewrite of the IO and checkpointing functionality in more PRs of their own.
Feel free to comment on and criticize everything, even if you think that it will not work out at all.

@sebwolf-de (Contributor)

Hi,
just a few remarks:

  • The configurations need to be explicitly specified at compile-time: Wouldn't it be good to have all possibilities in one binary? Otherwise users have to compile SeisSol again if they want to change from "Viscoelastic O6 coupled to Elastic O4" to "Viscoelastic O6 coupled to Elastic O6"?
  • I don't think the "convert neighboring material to our own" approach will always work. It might work for elastic-viscoelastic and elastic-anisotropic, but for acoustic and poroelastic it is definitely not straightforward, as you have to solve more complicated Riemann problems. See e.g. section 4.2.3 of Carsten's dissertation.
  • Add an "acoustic" material model.

so I offer to split it into multiple parts: My suggestion:

  • Include all orders/material-models in one binary (can be two PRs, one for material, one for order)
  • Add the simple couplings (viscoelastic-elastic, elastic-anisotropic). In the first case, the memory variables are not relevant for the Riemann problem, so you only have to deal with stresses and velocities at the interface. In the second case, elastic is just a specialization of anisotropic, so you can actually compute anisotropic on both sides.
  • Add "acoustic" material model.
  • Add acoustic-elastic interface.
  • Add poroelastic-x interface(s).

@krenzland (Contributor) commented Sep 6, 2023

The configurations need to be explicitly specified at compile-time: Wouldn't it be good to have all possibilities in one binary? Otherwise users have to compile SeisSol again if they want to change from "Viscoelastic O6 coupled to Elastic O4" to "Viscoelastic O6 coupled to Elastic O6"?

I'm afraid this is the only realistic option, considering the current speed of yateto. Even now, on some machines, the code generation takes more time than compiling the rest of SeisSol.
We can make this a bit better by parallelizing parts of yateto (which is relatively simple for the backend), but that probably won't be good enough to have all coupling methods enabled by default.

I don't think the "convert neighboring material to our own" approach will always work. It might work for elastic-viscoelastic and elastic-anisotropic, but for acoustic and poroelastic it is definitely not straightforward, as you have to solve more complicated Riemann problems. See e.g. section 4.2.3 of Carsten's dissertation.

Yup. Well, it works for elastic-acoustic if you modify the Riemann solver accordingly.

EDIT: You can also check my thesis draft, where I state the Godunov states explicitly for the coupling and explain what data is needed.

Add "acoustic" material model.

Note that this opens up another problem: output. Acoustic only has 4 unknowns (and a different sign convention). This means that we would need the capability of using two different output writers for different sections of the mesh.

@davschneller (Contributor, Author) commented Sep 6, 2023

The configurations need to be explicitly specified at compile-time: Wouldn't it be good to have all possibilities in one binary? Otherwise users have to compile SeisSol again if they want to change from "Viscoelastic O6 coupled to Elastic O4" to "Viscoelastic O6 coupled to Elastic O6"?

I'm afraid this is the only realistic option, considering the current speed of yateto. Even now, on some machines, the code generation takes more time than compiling the rest of SeisSol. We can make this a bit better by parallelizing parts of yateto (which is relatively simple for the backend), but that probably won't be good enough to have all coupling methods enabled by default.

See SeisSol/yateto#62 , though I've only tested it for four materials and one order at a time for now (the core still doesn't compile), and without subroutines. It is extremely slow, but it works. Maybe we can pre-run some optimizations in Yateto? Or still optimize something? Alternatively, we can run Yateto multiple times; CMake (i.e. make/ninja) should be able to parallelize that.

The plan would be to ultimately offer a single SeisSol binary.

I don't think the "convert neighboring material to our own" approach will always work. It might work for elastic-viscoelastic and elastic-anisotropic, but for acoustic and poroelastic it is definitely not straightforward, as you have to solve more complicated Riemann problems. See e.g. section 4.2.3 of Carsten's dissertation.

Yup. Well, it works for elastic-acoustic if you modify the Riemann solver accordingly.

EDIT: You can also check my thesis draft, where I state the Godunov states explicitly for the coupling and explain what data is needed.

Yup, that's where I noticed my fallacy... :/ This will need some re-design later. For now at least, I'll focus everything on just order/precision.

Add "acoustic" material model.

A temporary material definition exists in src/Equations/elastic/datastructures.hpp, but it's only a placeholder: no methods, no kernels, nothing else.

Note that this opens up another problem: output. Acoustic only has 4 unknowns (and a different sign convention). This means that we would need the capability of using two different output writers for different sections of the mesh.

I know; that's another reason why it would be good to have the output/checkpointing rewrite soon-ish. ;)

so I offer to split it into multiple parts: My suggestion:

  • Include all orders/material-models in one binary (can be two PRs, one for material, one for order)

Probably that will be done in one go. Right now, the biggest challenge is to re-do the core (LtsLayout).
We may also have to think about the memory a bit more. Right now, it's LtsTree plus templating mechanisms on top of it (called LtsForest, haha), but we could take or imitate something like mneme there if wanted. In the end, we just need some layered storage datastructure.

  • Add the simple couplings (viscoelastic-elastic, elastic-anisotropic). In the first case, the memory variables are not relevant for the Riemann problem, so you only have to deal with stresses and velocities at the interface. In the second case, elastic is just a specialization of anisotropic, so you can actually compute anisotropic on both sides.
  • Add "acoustic" material model.
  • Add acoustic-elastic interface.
  • Add poroelastic-x interface(s).

Ok, sounds good. Probably we can have a templated transposedGodunovState method or so, which accepts two different materials and which we then continue to extend. Or something like that.
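A rough sketch of what such a two-material interface could look like; the name transposedGodunovState is from the comment above, but the signature is purely illustrative:

```cpp
// Rough sketch of a two-material Godunov-state computation, as floated above.
// The signature is illustrative; the real method would be specialized (or
// overloaded) per material pairing.
template <typename LocalMaterial, typename NeighborMaterial, typename RealT>
void transposedGodunovState(const LocalMaterial& local,
                            const NeighborMaterial& neighbor,
                            RealT* qGodunovPlus,
                            RealT* qGodunovMinus) {
  // Solve the interface Riemann problem for this pairing of materials;
  // e.g. an elastic-acoustic overload would handle the reduced unknowns.
}
```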

@sebwolf-de (Contributor)

I'm afraid this is the only realistic option considering the current speed of yateto.

Parallelization should be easy, since the material/order combinations are independent. With 5 material models and 4 orders, you can generate the code for all 20 combinations in parallel on 20 cores in the same time as for one. Compilation time rises, though, since we already use the parallelism from make.

Note that this opens up another problem: Output

The same problem holds for poroelasticity with 13 quantities.

@davschneller (Contributor, Author)

Regarding output again: we should be able to at least get the receiver and energy output working without too many issues, I'd think.
