
PDE and PDAE interface #997

Open
termi-official opened this issue Nov 9, 2023 · 2 comments

termi-official commented Nov 9, 2023

With this issue I would like to start the discussion about a PDE/PDAE interface for the DifferentialEquations.jl ecosystem.

I should also note that I might be quite biased, because I am also helping with the development of Ferrite.jl, so I rely on input from other PDE solver developers, especially those using more specialized discretization techniques.

Problem specification

For completeness let me start by stating the abstract problem: For some "nice" domain $\Omega \times T$ (where $T$ might be empty) we want to approximate a solution $u(x,t)$ to the problem $F(x,t,u,\partial_t u, \partial_{x_1} u, \partial_{x_2} u, ..., \partial^n_{x_1} \partial^m_{x_2} u,...) = 0$ with a set of boundary conditions.

For the approximation of such problems we usually use two classes of approaches.

A. Direct methods (e.g. space-time finite element discretization for space-time problems, or just classical finite elements for time-independent problems) -> Transforms the problem into a (non)linear problem, which we could solve with (Non)linearSolve.jl
B. Partitioned techniques (e.g. semidiscretize only in space with finite elements, or semidiscretize only in time) -> Transforms the problem into an ODE/PDE/... which needs further treatment.

Note: The argument is analogous for e.g. SPDE problems.

Another relevant difficulty is the specific form of the PDE. E.g. strong form? Weak form? Ultra-weak form? Mixed form? Dual form? Other form?

"State of the art"

Currently many large frameworks provide suitable functions to evaluate their discretizations, which are passed down to DifferentialEquations.jl (e.g. into an ODEProblem; I have not seen NonlinearProblem used this way yet). I think this is one way to see it. However, in the long run it would be nice to have a common interface to e.g. simplify the comparison of different discretization methods for the same problem without fully implementing everything in different frameworks.
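
To make the current pattern concrete, here is a minimal sketch of the kind of semidiscretized system a framework hands over today; the assembled operator is just a toy 1D Laplacian standing in for whatever a real FEM/FVM code would produce, and all names are illustrative:

using LinearAlgebra, SparseArrays, OrdinaryDiffEq

# Toy stand-in for the framework's spatial discretization: 1D Laplacian plus load vector
n  = 50
h  = 1 / (n + 1)
K  = spdiagm(-1 => fill(1.0, n - 1), 0 => fill(-2.0, n), 1 => fill(1.0, n - 1)) / h^2
f  = ones(n)
u0 = zeros(n)

# The semidiscrete right-hand side du/dt = K*u + f, passed to DifferentialEquations.jl as-is
heat_rhs!(du, u, p, t) = (mul!(du, p.K, u); du .+= p.f; nothing)

prob = ODEProblem(heat_rhs!, u0, (0.0, 1.0), (K = K, f = f))
sol  = solve(prob, Rodas5P())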

EDIT 1: For visibility I put part of Chris's answer here, because I totally forgot about these when writing this up. Sorry!

Note that discussion is missing probably the most important detail which is that there already is a high-level discretization-independent PDE interface which is the PDESystem.

https://docs.sciml.ai/ModelingToolkit/stable/systems/PDESystem

This is how finite difference, finite volume, and physics-informed neural network discretizations all use the same interface.

https://docs.sciml.ai/MethodOfLines/stable/
https://docs.sciml.ai/NeuralPDE/dev/

How it's working was last recorded back in the ModelingToolkit workshop https://www.youtube.com/live/HEVOgSLBzWA?si=JRHE4qro1CKnx_O4&t=1970. I think it needs an updated video that goes into more detail, but it's at least a good overview of the core ideas behind it.
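
For concreteness, here is a minimal sketch of that PDESystem route for the 1D heat equation, discretized with MethodOfLines.jl roughly along the lines of the linked docs (grid spacing and time integrator are arbitrary choices):

using ModelingToolkit, MethodOfLines, DomainSets, OrdinaryDiffEq

@parameters t x
@variables u(..)
Dt = Differential(t)
Dxx = Differential(x)^2

# Heat equation with homogeneous Dirichlet BCs and a sine initial condition
eqs = [Dt(u(t, x)) ~ Dxx(u(t, x))]
bcs = [u(0, x) ~ sin(pi * x),
       u(t, 0) ~ 0.0,
       u(t, 1) ~ 0.0]
domains = [t ∈ Interval(0.0, 1.0),
           x ∈ Interval(0.0, 1.0)]

@named pdesys = PDESystem(eqs, bcs, domains, [t, x], [u(t, x)])

# The "discretizer": finite differences in x, t kept continuous -> ODEProblem
discretization = MOLFiniteDifference([x => 0.05], t)
prob = discretize(pdesys, discretization)
sol = solve(prob, Tsit5())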

Design considerations

For me the tricky part is always the separation of concerns. Let me try to summarize which concrete questions make this difficult for me:

  1. Should we try to squeeze all PDEs into a single PDEProblem? Or should we have multiple ones, but what is the granularity? (Related: What about PDAEs/SDEs/...?)
  2. What is the PDEFunction analogue to an ODEFunction?
  3. How to separate space-time and spatial problems properly? InitialBoundaryValueProblem would be maximally inconsistent as a concrete type. So maybe it should be some different types of FooBarPDEFunctions (not even sure how to name these now :))?
  4. Should the problem get some abstract domain or a concrete mesh and what is the interface?
  5. If we provide a semidiscretization interface, how do we convey information about e.g. variables which decouple spatially for the solvers (xref Nonlinear Solver Interface OrdinaryDiffEq.jl#1570)? And how do we convey boundary condition information (xref Improve pressure handling in incompressible Navier Stokes example Ferrite-FEM/Ferrite.jl#508 ).
  6. How do we check for compatibility between the (semi-)discretization and the given problem?
  7. How do we model initial and boundary conditions? And when to apply them? Just from a given domain and discretization method we may be unable to deduce even the solution vector size.
  8. What does a solution vector look like?

Also a bit further down the road:

  9. How do we support as many backends as possible, and what does the interface look like?

Example and starting point

I think we should start with time-independent problems (but we should not forget about the space-time case).

Let me try to sketch a workflow on a high level (no time dependence), to elaborate why I ended up with the high granularity:

  1. Describe the domain and place markers e.g. for subdomains and boundaries
  2. Generate a suitable discrete representation with the annotations above (e.g. a mesh or a point cloud)
  3. Get the PDEs in a suitable form for each subdomain and boundary (e.g. via MTK)
  4. Construct the PDEProblem with the information above + a chosen discretization method (e.g. something like GridAPGalerkinDiscretization(field1=>(LagrangePolynomials,GaussQuadrature)),...)
  5. solve the nonlinear problem as usual
  6. postprocess the data

More specifically, we might think about it like this in some hypothetical code for a basic steady-state heat problem:

# 1 & 2
mesh = load_mesh_from_file_with_annotations("foo.mesh") # E.g. a quadrilateral with 
# 3
model = WeakForm(SteadyStateHeatModel(diffusivity=1.0)) # could not figure out how to do this in MTK
boundary_conditions = [Neumann("top", 1.0), Dirichlet("bottom", 0.0)]
# 4.
discretization = FerriteGalerkinDiscretization(mesh, :u => (LagrangePolynomials(order=1), GaussQuadrature(order=2)))
problem = PDEProblem(model, discretization, boundary_conditions)
# 5.
u = solve(problem, UMFPack()) # Generate a LinearProblem internally and solve it with UMFPack()
# 6.
plot_heat_fluxes_at_quadrature_points(u, problem)

Implementing this should be straightforward. However, it is not clear what exactly model should be, because the specific model evaluation is highly framework dependent. We might be able to pass some MTK representation, but I am not sure how useful this is, because the frameworks still need to figure out how to discretize the specific form, which can be non-trivial (this is basically what e.g. FEniCS does, and even after big efforts it is still limited). We can also pass the complete symbolic representation of the problem around, but we might end up eliminating any framework usage (and performance).

Alternatively, we can pass e.g. "element routines" (i.e. what to do on a single finite element in a finite element routine). However, this design comes with its own limitations (e.g. it cannot handle element patches or interfaces). Furthermore, it is not compatible with other approaches such as generalized finite differences or some finite volume techniques.

Maybe a good starting point is to get a PDEProblem to solve the heat equation example with some frameworks?

https://ferrite-fem.github.io/Ferrite.jl/dev/tutorials/heat_equation/
https://gridap.github.io/Tutorials/dev/pages/t001_poisson/
https://j-fu.github.io/VoronoiFVM.jl/stable/examples/Example201_Laplace2D/

Ambiguities to consider

We already have https://github.com/SciML/BoundaryValueDiffEq.jl to solve boundary value problems, but it is designed around 1D problems. The PDE community, however, often uses the term "boundary value problem" synonymously with a PDE problem with given boundary conditions in any dimension (in contrast to initial value and initial-boundary value problems).

Relevant issues

References

@ChrisRackauckas (Member) commented

Note that discussion is missing probably the most important detail which is that there already is a high-level discretization-independent PDE interface which is the PDESystem.

https://docs.sciml.ai/ModelingToolkit/stable/systems/PDESystem

This is how finite difference, finite volume, and physics-informed neural network discretizations all use the same interface.

How it's working was last recorded back in the ModelingToolkit workshop https://www.youtube.com/live/HEVOgSLBzWA?si=JRHE4qro1CKnx_O4&t=1970. I think it needs an updated video that goes into more detail, but it's at least a good overview of the core ideas behind it.

One of the things we are looking to do in the near future (as is described in some of the seminar talks that I give) is to incorporate pseudospectral methods, Trixi, VoronoiFEM, and Ferrite into this same interface. @xtalax is on the path to handle pseudospectral, but it would be nice to get some help on getting the two others in.

The core issue right now is that we do need to be a bit more careful about domain specifications. PDESystem right now uses DomainSets.jl in order to specify the domains of integration, but since all of the solvers we have right now only support hypercube domains we haven't dealt too much with this aspect. I think the core of what we want is this two way street:

  • Specifications via DomainSets.jl, which for example allows you to say "the union between this circle and this square" and have that be a nice mathematical domain.
  • Unstructured domains specified by a common mesh format.

From this specification, a finite difference method can look at that and throw a high level error saying "I only support these domains", while a finite element method would look at a general DomainSet domain and turn it into a mesh before solving.

Now the purpose of the common mesh format is that the symbolic interface is supposed to be solver independent. However, when you have to specify the domain in some "discretized" way, that gets hairy. This is why a common mesh format would be required since at some level you need to be able to talk about such domains but then have it communicate to any solver. As discussed on Slack, the core point here is that it's likely that any solver will have its own internal formats anyways because it will specialize how it wants to do the compute, and so there will be a step in the symbolic interface where the PDE solver like Ferrite first interprets the symbolic expression and then converts the common mesh into the mesh it wants, and then solves on the converted mesh.

The big aspect then is the tie to the PDESolution interface (see for example https://docs.sciml.ai/MethodOfLines/stable/tutorials/brusselator/#Extracting-results) so that a user can symbolically interface with the solution, without having to know solver details. This is probably one of the hardest parts 😅.

(Note that there are also discussions of a different kind of PDESystem, ComponentPDESystem, which we can probably ignore for now since that's more relevant to industrial modeling and would lower to PDESystem tooling.)

Should we try to squeeze all PDEs into a single PDEProblem? Or should we have multiple ones, but what is the granularity? (Related: What about PDAEs/SDEs/...?)

What is the PDEFunction analogue to an ODEFunction?

The answer there is effectively: no, it's not possible. The reason is that the functional form can change: a reaction-diffusion-advection equation is described by three functions (MATLAB's pdepe), but then the KS equation needs the 3rd and 4th derivatives, and then you have some nonlinear cases that don't fit in, and ... It's not possible to have an interface that covers all of the possible functions that someone would have to provide on every space, which is why we went back to the drawing board and came back with the symbolic PDESystem interface. In the end, the only general specification is symbolic in some form since that is what could target different functional forms.
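
To illustrate why a fixed positional signature breaks down, compare the symbolic forms of two of the examples mentioned above; this is a sketch in plain MTK notation, not an existing interface:

using ModelingToolkit

@parameters t x
@variables u(..)
Dt = Differential(t)
Dx = Differential(x)

# Reaction-diffusion-advection: only first and second spatial derivatives appear
eq_rda = Dt(u(t, x)) ~ (Dx^2)(u(t, x)) - Dx(u(t, x)) + u(t, x) * (1 - u(t, x))

# Kuramoto-Sivashinsky: second and fourth derivatives plus a nonlinear transport term,
# so any fixed "provide these N functions" interface would already need a different shape
eq_ks = Dt(u(t, x)) ~ -(Dx^2)(u(t, x)) - (Dx^4)(u(t, x)) - u(t, x) * Dx(u(t, x))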

How do we check for compatibility between the (semi-)discretization and the given problem?

This is done in the symbolic_discretize / discretize dispatches.

Every PDE is a PDAE though BTW, with the boundary conditions being algebraic equations. Standard methods simply perform tearing on the algebraic equations in order to get an ODE, but with ModelingToolkit-style transformations you can naively discretize ignoring the BCs and then apply the tearing pass and automatically get the same ODE by reducing the algebraic equations. You can show, for example, that things like ghost points are exactly equivalent to this method. So using ModelingToolkit in this way makes the generalization of PDEs to PDAEs trivial, since the DAEs are just handled the same way as BCs and the BCs require no special handling.
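
A hand-rolled sketch of that idea on a five-point finite-difference grid for the heat equation (plain MTK, no PDESystem): the Dirichlet boundary conditions enter as algebraic equations, and the tearing pass in structural_simplify removes them again:

using ModelingToolkit

@variables t u1(t) u2(t) u3(t) u4(t) u5(t)
D = Differential(t)
h = 0.25

eqs = [
    0 ~ u1,                          # left Dirichlet BC as an algebraic equation
    D(u2) ~ (u1 - 2u2 + u3) / h^2,   # interior finite-difference rows
    D(u3) ~ (u2 - 2u3 + u4) / h^2,
    D(u4) ~ (u3 - 2u4 + u5) / h^2,
    0 ~ u5,                          # right Dirichlet BC as an algebraic equation
]

@named heat_pdae = ODESystem(eqs, t)
heat_ode = structural_simplify(heat_pdae)  # tearing eliminates the two BC equations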

For SPDEs, there are @brownian variables in MTK. Delays are representable as well.

But yes, there can be other ways of representing the PDE systems. PDESystem is effectively the strong form in most cases we've dealt with so far, but a compiler pass can be written to transform a PDESystem in the strong form to the weak form by multiplying by a set of test functions, etc. All that a discretization method would then have to do is recognize whether it can discretize the current PDESystem that it is given. Whether that needs a type tag on it (PDESystem vs. WeakFormPDESystem) is something I'm not certain about right now.

For the approximation of such problems we usually use two classes of approaches.

A. Direct methods (e.g. space-time finite element discretization for space-time problems, or just classical finite elements for time-independent problems) -> Transforms the problem into a (non)linear problem, which we could solve with (Non)linearSolve.jl
B. Partitioned techniques (e.g. semidiscretize only in space with finite elements, or semidiscretize only in time) -> Transforms the problem into an ODE/PDE/... which needs further treatment.

Note that's not quite right. These are "the same thing" in the sense of a PDESystem. The PDESystem interface is all about discretizing a high level PDE description into any of the computable mathematical forms. The computable mathematical forms are things like NonlinearProblem, ODEProblem, SDEProblem, OptimizationProblem, etc. So for example, in this higher level idea of "a discretization is a compiler transformation of PDESystem into XSystem", you have relations like:

  • A physics-informed neural network "discretization" is the transformation of a PDESystem into an OptimizationSystem
  • The "direct methods" are discretizations of PDESystem into a NonlinearSystem (or BVPSystem, it should actually be a BVP in a lot of cases, but that's a different discussion)
  • Feynman-Kac and things like that are "discretizations" of PDESystem into SDESystem
  • Method of Characteristics is a "discretization" of a PDESystem into an ODESystem

What is important here is that the PDESystem is transformed into a numerically solvable system type, with observable attributes so that sol[u(x,t)] can be interpreted back from that solution. So it really doesn't matter if it's solved by an ODE, SDE, optimization, nonlinear solve, etc., a "discretizer" is a map from a symbolic PDE to another system type which carries a reverse map of how to interpret the solution as a representation of the PDE's solution.

For PDESystem, there are then two things:

  • symbolic_discretize: represent this discretization in its symbolic form
  • discretize: give me the solvable object for this transformation

The reason why discretize is provided is that in some (right now many) cases, there are PDE discretizations which can be hard to formally specify in what we have in ModelingToolkit (and have it compile fully optimally). That is changing over time, but for example OptimizationSystem cannot handle the loss function specifications of physics-informed neural networks well, so for now its symbolic form is a bit wonky. But that's the general idea there.

So for example, if someone has a weird PDE discretization they want to build that "requires" lowering to a loop, in the parlance of this general system that's "just" a discretizer which takes in a PDESystem and spits out a DiscreteSystem or a DiscreteProblem. For debugging purposes, it's always good to have symbolic_discretize exist, though for some of these cases someone may want to do a direct codegen with discretize. But even then, the "here, let me write the loop" form has a representation in SciML: it's just a DiscreteProblem, and the same PDESolution can be wrapped over a DiscreteSolution etc. in this form.
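
Assuming a pdesys and a MOLFiniteDifference discretization like in the sketch further up, the two entry points look roughly like this:

# Symbolic form of the discretization -- useful for inspecting and debugging what was generated
sys = symbolic_discretize(pdesys, discretization)

# The numerically solvable object (an ODEProblem for a semidiscretization in space)
prob = discretize(pdesys, discretization)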

How to separate space-time and spatial problems properly? InitialBoundaryValueProblem would be maximally inconsistent as a concrete type. So maybe it should be some different types of FooBarPDEFunctions (not even sure how to name these now :))?

I think this is covered by the previous answer.

Construct the PDEProblem with the information above + a chosen discretization method (e.g. something like GridAPGalerkinDiscretization(field1=>(LagrangePolynomials,GaussQuadrature)),...)
solve the nonlinear problem as usual
postprocess the data

I'll just point to https://docs.sciml.ai/MethodOfLines/stable/tutorials/heatss/ as an example of doing exactly that in the high level interface.

If we provide a semidiscretization interface, how do we convey information about e.g. variables which decouple spatially for the solvers (xref SciML/OrdinaryDiffEq.jl#1570)? And how do we convey boundary condition information (xref Ferrite-FEM/Ferrite.jl#508 ).

If you take a look at MethodOfLines.jl in its specification, you have to choose which independent variables to discretize. If you choose to discretize all of the variables, you just get a NonlinearSystem instead of an ODESystem. So this kind of thing should be handled by the discretizer.
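
A rough sketch of that distinction in MethodOfLines notation, assuming x, y, t are symbolic independent variables (y is hypothetical here, standing in for a 2D steady-state setup like the heatss tutorial linked above):

# Keep t continuous: semidiscretization in space, discretize() yields an ODEProblem
disc_semi = MOLFiniteDifference([x => 0.1], t)

# Discretize every independent variable: nothing left continuous, discretize() yields a NonlinearProblem
disc_full = MOLFiniteDifference([x => 0.1, y => 0.1])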

How do we model initial and boundary conditions? And when to apply them? Just from a given domain and discretization method we may be unable to deduce even the solution vector size.

One disadvantage of going purely symbolic is that sometimes people want to specify these things with data (as a vector), but in order to be completely general (since you don't know what the discretizer will do) you do have to specify these things continuously. That's not the worst thing in the world, you simply have to put an interpolation over the data you wish to specify. An example of that is shown here: https://docs.sciml.ai/MethodOfLines/stable/tutorials/icbc_sampled/.
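
A small sketch of that interpolation trick, here with DataInterpolations.jl as one possible choice; the registration step keeps the interpolant as an opaque function inside the symbolic boundary condition:

using ModelingToolkit, DataInterpolations

# Hypothetical measured boundary data on a coarse time grid
t_data = collect(0.0:0.1:1.0)
g_data = sin.(2pi .* t_data)

g_itp = LinearInterpolation(g_data, t_data)  # continuous in time
g(t) = g_itp(t)
@register_symbolic g(t)                      # keep g opaque to the symbolic system

@parameters t x
@variables u(..)
bc_left = u(t, 0.0) ~ g(t)                   # data-driven, time-dependent Dirichlet BC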

What does a solution vector look like?

So then as you can see from the previous portions, what's required is that a re-interpretation map is provided with the solution. This is what the PDESolution does in SciML. You can effectively tag a generated problem with the information saying "this ODEProblem came from a PDE", and then after the solve it'll know to automatically wrap the ODESolution into a PDESolution where a dispatch of PDESolution(prob, pdesys, discretizer) needs to be defined on the discretizer. We still need to generalize this so that any SciML problem can do this (so for example, NonlinearProblem, OptimizationProblem, etc.), but that's how you can effectively allow the interface to be that the user gets an XProblem, chooses how to solve it, but no matter how they solve it all discretizations give back the same representation with a symbolic indexing interface on it (the actual internal structures on it will be different, but sol[u(x,t)] is the same between all of them).
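
For the MethodOfLines sketch further up, that symbolic indexing looks like the following, independent of how the discretizer laid out the state vector internally:

xs = sol[x]          # the spatial grid the discretizer chose
ts = sol[t]          # the time points of the ODE solve
usol = sol[u(t, x)]  # solution values addressed by the symbolic unknown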

Implementing this should be straightforward. However, it is not clear what exactly model should be, because the specific model evaluation is highly framework dependent. We might be able to pass some MTK representation, but I am not sure how useful this is, because the frameworks still need to figure out how to discretize the specific form, which can be non-trivial (this is basically what e.g. FEniCS does, and even after big efforts it is still limited). We can also pass the complete symbolic representation of the problem around, but we might end up eliminating any framework usage (and performance).

Right now it has seemed to be "fine", but yes as we grow the discretization set it may be difficult. I think one nice way to handle this would be for each discretizer package to have a system type. So for example, MethodOfLines should probably add an extra stage in here where it's PDESystem -> FiniteDifferencePDESystem -> ODEProblem. The intermediate is effectively done internally right now, where it parses the symbolic form into a form that's like "here's how I interpret this PDE", where it finds what each of the operators are and then discretizes each operator. But that means there's a different way in theory to interact with it here, which is "pass me a list of symbolic operators and a definition of how they connect", i.e. this lowered form.

If we let users target that lowered form, the connection there would not be discretizer independent, but could expose the full functionality of the discretizer.

Alternatively, we can pass e.g. "element routines" (i.e. what to do on a single finite element in a finite element routine). However, this design comes with its own limitations (e.g. it cannot handle element patches or interfaces). Furthermore, it is not compatible with other approaches such as generalized finite differences or some finite volume techniques.

Those are "just" different discretize dispatches.

So, what's next?

What I'm trying to say here is that, we have done a ton with PDESystem / PDESolution and it seems to be working. We wanted to get that working well enough before taking the next step, and I think we're at the right point to be doing that transition.

The next step of course is a giant leap, and that's to go beyond what we have with MethodOfLines.jl and NeuralPDE.jl and start incorporating Trixi.jl, Ferrite.jl, VoronoiFVM.jl, and start allowing discretize dispatches that support arbitrary meshes. That's going to be a major leap, but I've been talking with @xtalax about it already and we were going to start doing that very soon. In fact, we have a current grant in which we have to be doing this 😅 , so the process has already started. We would very much appreciate your help getting Ferrite into this system if you're willing to go along with the ride.

Maybe a good starting point is to get a PDEProblem to solve the heat equation example with some frameworks?

Yes, that's the next step here. What we're trying to support are the following interfaces, tested on the heat equation:

  • Finite difference (MethodOfLines.jl)
  • Pseudospectral, done by using ApproxFun internally (this is likely to be an extension in MethodOfLines.jl for how the operators are lowered)
  • Trixi.jl (discontinuous Galerkin)
  • VoronoiFVM.jl (finite volume)
  • Ferrite.jl (FEM)
  • Decapodes.jl (discrete exterior calculus)
  • Mesh-free solvers (NeuralPDE.jl)

Due to grant obligations, we're working on the Decapodes.jl one ASAP (that's already started), and that's one that requires handling unstructured meshes so that's why this process is already underway. So any help to get Ferrite into the same form could be tremendously helpful.

termi-official (Author) commented Nov 10, 2023

Thanks for such a detailed answer, Chris. I have added the first paragraph as a citation in my issue text above for visibility. I really forgot about these packages, sorry.

I will comment on some parts. Skipped parts are not ignored; I just agree with them. :)

The core issue right now is that we do need to be a bit more careful about domain specifications. PDESystem right now uses DomainSets.jl in order to specify the domains of integration, but since all of the solvers we have right now only support hypercube domains we haven't dealt too much with this aspect. I think the core of what we want is this two way street:

Specifications via DomainSets.jl, which for example allows you to say "the union between this circle and this square" and have that be a nice mathematical domain.
Unstructured domains specified by a common mesh format.

From this specification, a finite difference method can look at that and throw a high level error saying "I only support these domains", while a finite element method would look at a general DomainSet domain and turn it into a mesh before solving.

Now the purpose of the common mesh format is that the symbolic interface is supposed to be solver independent. However, when you have to specify the domain in some "discretized" way, that gets hairy. This is why a common mesh format would be required since at some level you need to be able to talk about such domains but then have it communicate to any solver. As discussed on Slack, the core point here is that it's likely that any solver will have its own internal formats anyways because it will specialize how it wants to do the compute, and so there will be a step in the symbolic interface where the PDE solver like Ferrite first interprets the symbolic expression and then converts the common mesh into the mesh it wants, and then solves on the converted mesh.

You are probably aware, but I should note it anyway. Discretizing the geometry can quickly become a really hard issue, especially in 3D, and if not done properly it will negatively impact the downstream performance. Unfortunately, we also lack infrastructure in Julia to do this (which is related to the issue that we don't have a nice common mesh interface). I think Gridap, Ferrite and Trixi can all handle Gmsh (which is commonly used for FEM in the groups I know), so maybe a starting point would be to see how we could interface DomainSets.jl with Gmsh. With the OpenCASCADE kernel we should be able to generate nice tetrahedral grids with at least moderate effort, but for this we need to update the Julia Gmsh package (update: works already). If we want to invest in Meshes.jl, we need someone who takes the step to put time into developing the 3D meshing algorithms. Some algorithms exist in Meshes.jl, but they construct discretizations which are not aimed at numerical methods. However, maybe passing a custom Meshes.jl mesh could be a compromise (custom because Meshes.jl does not have a data structure with annotations yet, see https://github.com/JuliaGeometry/Meshes.jl/tree/06eeda8d54540b585db17ac8ae930d3ea1884bba/src/mesh). And for completeness I also want to note that there is currently, to the best of my knowledge, no support for nonlinear geometries (e.g. quadratic Lagrangian tetrahedra).

However, for the first steps we can constrain ourselves to the geometries provided by DomainSets.jl, because every framework can easily deal with the discretization of these by itself. I think this is reasonable to get the discussion going and to work our way towards answering how to deal with this part of the pipeline.

The big aspect then is the tie to the PDESolution interface (see for example https://docs.sciml.ai/MethodOfLines/stable/tutorials/brusselator/#Extracting-results) so that a user can symbolically interface with the solution, without having to know solver details. This is probably one of the hardest parts 😅.

Not sure if this is really hard, but maybe I am missing something. Let me elaborate. Speaking at least for Ferrite, we have infrastructure to evaluate the solution at arbitrary points. I haven't implemented this part, but basically we build a KDTree and then search for N candidate elements. For each candidate you then have to solve either a linear or a nonlinear problem (depending on whether the geometry is linear or nonlinear), and then you can evaluate. This should not be hard to port to other frameworks if they don't provide something similar themselves. For the temporal interpolation we should be able to just use whatever the attached time integrator provides. It is not recommended to do visualization with this, though. Each framework has at least a basic visualizer, and I am actively working on unifying these (in Makie), but this will likely take some more months because of missing spare time and hands. Still, such an evaluation is very helpful for postprocessing and debugging purposes.
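
To make the candidate-search idea concrete, here is a rough, framework-independent sketch for bilinear quadrilaterals, using NearestNeighbors.jl for the KDTree; the mesh and dof containers (cells, node_coords, dofs) are hypothetical placeholders for whatever the framework provides:

using NearestNeighbors, LinearAlgebra

# Bilinear quad shape functions and their derivatives on the reference square [-1,1]^2
shape(ξ) = 0.25 .* [(1 - ξ[1]) * (1 - ξ[2]), (1 + ξ[1]) * (1 - ξ[2]),
                    (1 + ξ[1]) * (1 + ξ[2]), (1 - ξ[1]) * (1 + ξ[2])]
shape_grad(ξ) = 0.25 .* [ -(1 - ξ[2])  -(1 - ξ[1])
                           (1 - ξ[2])  -(1 + ξ[1])
                           (1 + ξ[2])   (1 + ξ[1])
                          -(1 + ξ[2])   (1 - ξ[1]) ]

# Newton iteration for the reference coordinate ξ with x(ξ) = p
# (a single linear solve suffices for affine geometries, a few Newton steps otherwise)
function find_local_coordinate(xcell, p; tol = 1e-10, maxiter = 20)
    ξ = zeros(2)
    for _ in 1:maxiter
        N = shape(ξ)
        r = sum(N[i] .* xcell[i] for i in 1:4) .- p   # residual x(ξ) - p
        norm(r) < tol && return ξ, true
        dN = shape_grad(ξ)
        J = sum(xcell[i] * dN[i, :]' for i in 1:4)    # 2×2 Jacobian ∂x/∂ξ
        ξ .-= J \ r
    end
    return ξ, false
end

# tree: KDTree over element centroids; cells[c]: node indices of cell c;
# node_coords[n]: coordinates of node n; dofs[c]: dof indices of cell c; u: solution vector
function evaluate_at_point(tree, cells, node_coords, dofs, u, p; ncandidates = 5)
    candidates, _ = knn(tree, p, ncandidates)
    for c in candidates
        xcell = [node_coords[n] for n in cells[c]]
        ξ, converged = find_local_coordinate(xcell, p)
        if converged && all(abs.(ξ) .<= 1 + 1e-8)     # p lies inside this element
            return dot(shape(ξ), u[dofs[c]])
        end
    end
    error("point $p not found in any candidate element")
end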

Should we try to squeeze all PDEs into a single PDEProblem? Or should we have multiple ones, but what is the granularity? (Related: What about PDAEs/SDEs/...?)

What is the PDEFunction analogue to an ODEFunction?

The answer there is effectively: no, it's not possible. The reason is that the functional form can change: a reaction-diffusion-advection equation is described by three functions (MATLAB's pdepe), but then the KS equation needs the 3rd and 4th derivatives, and then you have some nonlinear cases that don't fit in, and ... It's not possible to have an interface that covers all of the possible functions that someone would have to provide on every space, which is why we went back to the drawing board and came back with the symbolic PDESystem interface. In the end, the only general specification is symbolic in some form since that is what could target different functional forms.

I think I disagree with parts of the answer here, but this probably stems from me not being precise in the question. It should not be impossible. We could make the same argument for time-dependent problems (ODEs/SDEs/...), where we have a very nice classification and the trick of reducing them to first-order ODEs. For PDEs we can apply a similar trick to get systems of first-order PDEs. The evaluation function can then be handled analogously to how DAEs are handled in the ecosystem, i.e. by evaluating the residual $F(u, \partial u, \partial^2 u, ...)$. It is often (but not always) just better not to split into first-order PDEs.

On the other hand, I definitely agree that symbolic representations can be super useful!
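
As a small illustration of that first-order rewrite in MTK notation, here is the heat equation in mixed form with an auxiliary flux variable (a sketch, not an existing interface):

using ModelingToolkit

@parameters t x
@variables u(..) q(..)
Dt = Differential(t)
Dx = Differential(x)

# Mixed / first-order form of ∂ₜu = ∂ₓₓu: introduce the flux q = ∂ₓu
eqs_first_order = [
    q(t, x) ~ Dx(u(t, x)),       # flux definition (algebraic in q)
    Dt(u(t, x)) ~ Dx(q(t, x)),   # balance law with only first derivatives left
]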

Every PDE is a PDAE though BTW, with the boundary conditions being algebraic equations. [...]

I see, and it kind of makes sense to me. I treat them differently because for boundary conditions it is usually clear (in finite elements at least) what to do in the (semi-)discretization. Not disagreeing on this point, though.

For the approximation of such problems we usually use two classes of approaches.

A. Direct methods (e.g. space-time finite element discretization for space-time problems, or just classical finite elements for time-independent problems) -> Transforms the problem into a (non)linear problem, which we could solve with (Non)linearSolve.jl
B. Partitioned techniques (e.g. semidiscretize only in space with finite elements, or semidiscretize only in time) -> Transforms the problem into an ODE/PDE/... which needs further treatment.

Note that's not quite right. These are "the same thing" in the sense of a PDESystem. The PDESystem interface is all about discretizing a high level PDE description into any of the computable mathematical forms. The computable mathematical forms are things like NonlinearProblem, ODEProblem, SDEProblem, OptimizationProblem, etc. [...]

Thanks for the detailed view here. I think I get your point: the common (and unquestionably important) thing is that we need to track and convey information about the PDE into all subproblems, no matter how we (semi-)discretize. And I think we basically agree on the high level: we have to track the discretization information recursively. I have to think more about whether leaving some derivatives intact is really something special and how it may affect the internal treatment.

We would very much appreciate your help getting Ferrite into this system if you're willing to go along with the ride.

Sure, happy to help here. I just want to note transparently that my spare time is currently a bit limited because I have already agreed to help in several projects, but I think I should still be able to fit this in. I have already started working on a Symbolics.jl utility layer on top of Ferrite, but it is still super limited; I have only managed to get a simple 1D problem working so far. I haven't bothered to open issues for the problems I encountered, because they basically boil down to JuliaSymbolics/Symbolics.jl#973.

A copy-paste of my work is below in the spoiler. Maybe this can be helpful as a starter. My idea was basically to start in the inner loops and work my way "to the outside", but I never got very far.

What follows is a Poisson problem in 1D, which is a translation of the first tutorial, https://ferrite-fem.github.io/Ferrite.jl/dev/tutorials/heat_equation/. This example requires the master branch of Ferrite.

using Ferrite, ModelingToolkit, LinearAlgebra, SparseArrays, Test
import Symbolics: ClosedInterval
using SymbolicUtils

# TODO correct definitions.
function grad(var)
    Differential(x₁)(var)
end

# Element
@parameters x₁
Ωₑ = (x₁ ∈ ClosedInterval(-1,1)) # TODO replace with RefLine
∫ = Integral(Ωₑ)

extract_integrand = @rule ∫(~ex) => ~ex
recover_cdot = @rule ~x * ~y => ~x ⋅ ~y

# Bilinear form
a(u,v) = ∫(grad(u) ⋅ grad(v))
# Linear form
@parameters f
b(v) = ∫(1.0*v)

@variables u(..) δu(..)
# System
eqs_weak = [a(u(x₁),δu(x₁)) ~ b(δu(x₁))]

grid = generate_grid(Line, (20,));

ip = Lagrange{RefLine, 1}()
qr = QuadratureRule{RefLine}(2)
cellvalues = CellValues(qr, ip);

dh = DofHandler(grid)
add!(dh, :u, ip)
close!(dh);

K = create_sparsity_pattern(dh);

ch = ConstraintHandler(dh);
∂Ω = union(
    getfaceset(grid, "left"),
    getfaceset(grid, "right"),
);
dbc = Dirichlet(:u, ∂Ω, (x, t) -> 0)
add!(ch, dbc);
close!(ch)

function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues)
    n_basefuncs = getnbasefunctions(cellvalues)
    ## Reset to 0
    fill!(Ke, 0)
    fill!(fe, 0)
    ## Loop over quadrature points
    for q_point in 1:getnquadpoints(cellvalues)
        ## Get the quadrature weight
        dΩ = getdetJdV(cellvalues, q_point)
        ## Loop over test shape functions
        for i in 1:n_basefuncs
            δu  = shape_value(cellvalues, q_point, i)
            ∇δu = shape_gradient(cellvalues, q_point, i)
            ## Add contribution to fe
            fe[i] += δu * dΩ
            ## Loop over trial shape functions
            for j in 1:n_basefuncs
                ∇u = shape_gradient(cellvalues, q_point, j)
                ## Add contribution to Ke
                Ke[i, j] += (∇δu ⋅ ∇u) * dΩ
            end
        end
    end
    return Ke, fe
end

function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler)
    ## Allocate the element stiffness matrix and element force vector
    n_basefuncs = getnbasefunctions(cellvalues)
    Ke = zeros(n_basefuncs, n_basefuncs)
    fe = zeros(n_basefuncs)
    ## Allocate global force vector f
    f = zeros(ndofs(dh))
    ## Create an assembler
    assembler = start_assemble(K, f)
    ## Loop over all cells
    for cell in CellIterator(dh)
        ## Reinitialize cellvalues for this cell
        reinit!(cellvalues, cell)
        ## Compute element contribution
        assemble_element!(Ke, fe, cellvalues)
        ## Assemble Ke and fe into K and f
        assemble!(assembler, celldofs(cell), Ke, fe)
    end
    return K, f
end

function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues, eq::Equation)
    n_basefuncs = getnbasefunctions(cellvalues)
    ## Reset to 0
    fill!(Ke, 0)
    fill!(fe, 0)
    ## Loop over quadrature points
    for q_point in 1:getnquadpoints(cellvalues)
        ## Get the quadrature weight
        dΩ = getdetJdV(cellvalues, q_point)
        ## Loop over test shape functions
        for i in 1:n_basefuncs
            δuᵢ  = shape_value(cellvalues, q_point, i)
            ∇δuᵢ = shape_gradient(cellvalues, q_point, i)
            ## Add contribution to fe
            fe[i] += Symbolics.value(substitute(eq.rhs, Dict(δu(x₁) => δuᵢ))) * dΩ
            ## Loop over trial shape functions
            for j in 1:n_basefuncs
                ∇uᵢ = shape_gradient(cellvalues, q_point, j)
                ## Add contribution to Ke
                Ke[i, j] += Symbolics.value(substitute(eq.lhs, Dict(grad(u(x₁)) => ∇uᵢ[1], grad(δu(x₁)) => ∇δuᵢ[1]))) * dΩ
            end
        end
    end
    return Ke, fe
end

function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler, eq::Equation)
    eq_element = extract_integrand(eq.lhs) ~ extract_integrand(eq.rhs)
    ## Allocate the element stiffness matrix and element force vector
    n_basefuncs = getnbasefunctions(cellvalues)
    Ke = zeros(n_basefuncs, n_basefuncs)
    fe = zeros(n_basefuncs)
    ## Allocate global force vector f
    f = zeros(ndofs(dh))
    ## Create an assembler
    assembler = start_assemble(K, f)
    ## Loop over all cells
    for cell in CellIterator(dh)
        ## Reinitialize cellvalues for this cell
        reinit!(cellvalues, cell)
        ## Compute element contribution
        assemble_element!(Ke, fe, cellvalues, eq_element)
        ## Assemble Ke and fe into K and f
        assemble!(assembler, celldofs(cell), Ke, fe)
    end
    return K, f
end

K, f = assemble_global(cellvalues, K, dh);

apply!(K, f, ch)
u₁ = K \ f;

K, f = assemble_global(cellvalues, K, dh, eqs_weak[1]);
apply!(K, f, ch)
u₂ = K \ f;

@test u₁ ≈ u₂


eq_qp_lhs = extract_integrand(eqs_weak[1].lhs)
@variables ∇δuᵢ ∇uⱼ
eq_qp_lhs_2 = substitute(eq_qp_lhs, Dict(grad(u(x₁)) => ∇uⱼ, grad(δu(x₁)) => ∇δuᵢ))
bilinearform_evaluation = build_function(eq_qp_lhs_2, ∇δuᵢ, ∇uⱼ, expression = Val{false})

eq_qp_rhs = extract_integrand(eqs_weak[1].rhs)
@variables δuᵢ
eq_qp_rhs_2 = substitute(eq_qp_rhs, Dict(δu(x₁) => δuᵢ))
linearform_evaluation = build_function(eq_qp_rhs_2, δuᵢ, expression = Val{false})

function assemble_element!(Ke::Matrix, fe::Vector, cellvalues::CellValues, bifo::typeof(bilinearform_evaluation), lifo::typeof(linearform_evaluation))
    n_basefuncs = getnbasefunctions(cellvalues)
    ## Reset to 0
    fill!(Ke, 0)
    fill!(fe, 0)
    ## Loop over quadrature points
    for q_point in 1:getnquadpoints(cellvalues)
        ## Get the quadrature weight
        dΩ = getdetJdV(cellvalues, q_point)
        ## Loop over test shape functions
        for i in 1:n_basefuncs
            ## Linear form
            uᵢ  = shape_value(cellvalues, q_point, i)
            fe[i] += lifo(uᵢ) * dΩ
            ## Bilinear form
            ∇uᵢ = shape_gradient(cellvalues, q_point, i)
            ## Loop over trial shape functions
            for j in 1:n_basefuncs
                ∇uⱼ = shape_gradient(cellvalues, q_point, j)
                ## Add contribution to Ke
                Ke[i, j] += bifo(∇uᵢ[1], ∇uⱼ[1]) * dΩ
            end
        end
    end
    return Ke, fe
end

function assemble_global(cellvalues::CellValues, K::SparseMatrixCSC, dh::DofHandler, bifo::typeof(bilinearform_evaluation), lifo::typeof(linearform_evaluation))
    ## Allocate the element stiffness matrix and element force vector
    n_basefuncs = getnbasefunctions(cellvalues)
    Ke = zeros(n_basefuncs, n_basefuncs)
    fe = zeros(n_basefuncs)
    ## Allocate global force vector f
    f = zeros(ndofs(dh))
    ## Create an assembler
    assembler = start_assemble(K, f)
    ## Loop over all cells
    for cell in CellIterator(dh)
        ## Reinitialize cellvalues for this cell
        reinit!(cellvalues, cell)
        ## Compute element contribution
        assemble_element!(Ke, fe, cellvalues, bifo, lifo)
        ## Assemble Ke and fe into K and f
        assemble!(assembler, celldofs(cell), Ke, fe)
    end
    return K, f
end

K, f = assemble_global(cellvalues, K, dh, bilinearform_evaluation, linearform_evaluation);
apply!(K, f, ch)
u₃ = K \ f;

@test u₁ ≈ u₃
