This project consists of a library, solver_tools, and a collection of Python scripts.
The library aims at:
- solving "easy" 1D partial differential equations (PDEs),
- controlling the PDEs to reach a user-defined objective (e.g., suppression of oscillations),
- performing data assimilation (i.e., calibrating model parameters or initial conditions), for education and research.
Forecast models usually contain unknown parameters. The forecast may concern, e.g.:
- the weather,
- the rate of propagation of a disease,
- some economic activities.

Practically speaking, these parameters may be:
- the model's initial conditions ($u(x, t=0)$),
- the model's boundary conditions ($u(x_{\min}, t)$ and $u(x_{\max}, t)$),
- the model's parameters (e.g., the diffusivity $\nu$ in the heat equation).
The project consists of a library and a collection of scripts that illustrate it.
solver_tools.py
It contains:
- a function to integrate the equation forward in time,
- the spatial discretization of the operators (LUD for advection, central differences for diffusion; see below),
- `cfl` (compute and print the CFL number),
- functions to compute the adjoint of the PDEs, in order to compute the gradient of possibly complex cost functions,
- data assimilation (identify the initial conditions that allow the model to fit the available data),
- model assimilation (identify the model parameters that allow the model to fit the available data).

Other functions will be implemented, to derive reduced-order models (ROMs) of the equation, as well as to compute the adjoint variables (see later).
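As a point of reference for the `cfl` helper mentioned above, here is a minimal sketch of an advective CFL computation (the names and conventions are illustrative, not the library's actual API):

```python
import numpy as np

def cfl_number(c, dt, dx):
    """Advective CFL number max|c| * dt / dx.

    Explicit upwind-type schemes typically require CFL <= 1 for stability
    (illustrative convention; check the solver's own criterion)."""
    return np.max(np.abs(c)) * dt / dx

ratio = cfl_number(2.0, 0.01, 0.05)  # velocity 2.0, dt = 0.01, dx = 0.05 -> about 0.4
```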
The various scripts illustrate the use of the library:
- script_optimal_control.py: optimal control; identifies the forcing so that the solution matches a criterion,
- script_assimilation.py: data assimilation (model parameters and/or initial conditions),
- script_advection.py: advection equation,
- script_test_accuracy.py: tests the accuracy of the solver,
- script_unsteady_bc.py: illustrates unsteady boundary conditions,
- script_burgers_inviscid.py: inviscid Burgers' equation,
- script_viscous_burgers.py: viscous Burgers' equation,
- script_operators.py: general equation.
- PDEs 1D
- PDEs 2D/3D
- optimal control
- assimilation of parameters
- assimilation of initial conditions
- both
- Dirichlet conditions (even time-dependent)
- periodic boundary conditions
- Neumann conditions (implemented but not tested)
- add the adjoint
- add assimilation
- add optimal control
- Bug in optimal control: unstable with Dirichlet boundary conditions
- Unstable diffusivity in the adjoint (solved by smoothing...)
- stencils for the pre-written adjoint? That would significantly accelerate the code
- use solver_tools instead of solver_matrix_tools when needed?
- advection: instabilities in LUD when the problem is too stiff? Flux limiter?
- add $\partial_x^3$ / $\partial_x^4$ operators for the KdV or Kuramoto–Sivashinsky equations? (but possibly unstable, like the diffusivity)
- improve the presentation of this document
- Remove useless scripts and improve compatibility between solver_tools and solver_matrix_tools
LUD is the Linear Upwind Differencing scheme. It has significantly less dissipation than the regular upwind scheme. It is used to discretize, in space, the following advection-type operator:

$$c\,\partial_x u.$$

The velocity $c$ does not have to be a constant: it can actually be $u$ itself, as in Burgers' equation. A central difference scheme is used to discretize, in space, the following diffusion-type operator:

$$\nu\,\partial_{xx} u.$$
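For illustration, a possible NumPy sketch of the LUD stencil on a periodic grid (the library's actual implementation may differ, e.g., in its boundary handling):

```python
import numpy as np

def lud_advection(u, c, dx):
    """c * du/dx with the second-order Linear Upwind Differencing (LUD)
    stencil on a periodic grid (illustrative; the library may handle
    boundaries differently).

    For c > 0:  du/dx ~ ( 3 u[i] - 4 u[i-1] + u[i-2]) / (2 dx)
    For c < 0:  du/dx ~ (-3 u[i] + 4 u[i+1] - u[i+2]) / (2 dx)
    """
    um1, um2 = np.roll(u, 1), np.roll(u, 2)    # u[i-1], u[i-2] (periodic)
    up1, up2 = np.roll(u, -1), np.roll(u, -2)  # u[i+1], u[i+2] (periodic)
    dudx_pos = (3.0 * u - 4.0 * um1 + um2) / (2.0 * dx)
    dudx_neg = (-3.0 * u + 4.0 * up1 - up2) / (2.0 * dx)
    return np.where(np.asarray(c) > 0, c * dudx_pos, c * dudx_neg)
```

The stencil is biased toward the side the information comes from, which is what keeps the dissipation low compared with first-order upwinding.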
Let's use a simple example here, the viscous Burgers' equation:

$$\partial_t u + u\,\partial_x u = \nu\,\partial_{xx} u.$$

In the present approach, we choose to separate the time-derivative operator from the spatial operators:

$$\partial_t u = F_1(u) + F_2(u),$$

with $F_1(u) = -u\,\partial_x u$ and $F_2(u) = \nu\,\partial_{xx} u$. Clearly, these operators can be nonlinear. The list of RHS operators $[F_1, F_2]$ will be passed to `integration_forward`, in order to solve the PDE.
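As a sketch of this operator-list idea (assuming a periodic grid; the generic `integrate` loop below stands in for `integration_forward`, whose actual signature is defined in solver_tools.py):

```python
import numpy as np

# Viscous Burgers' setup on a periodic grid (illustrative parameters).
nu, dx, dt = 0.05, 0.02, 1.0e-4
x = np.arange(0.0, 1.0, dx)
u0 = np.sin(2.0 * np.pi * x)  # initial condition

def advection(u):
    """F1(u) = -u * du/dx (central differences here for brevity;
    the library uses LUD for this term)."""
    return -u * (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * dx)

def diffusion(u):
    """F2(u) = nu * d2u/dx2 (central differences)."""
    return nu * (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

rhs_operators = [advection, diffusion]  # the list handed to the integrator

def integrate(u, n_steps):
    """Stand-in for integration_forward: explicit Euler on the summed RHS."""
    for _ in range(n_steps):
        u = u + dt * sum(op(u) for op in rhs_operators)
    return u

u_final = integrate(u0, 100)  # viscosity slowly damps the initial wave
```

Keeping each spatial operator as a separate function is what later allows each one to be linearized and transposed independently for the adjoint.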
We want to look at the problem depending on some parameters $q$:

$$\min_q\, J(u, q) \quad \text{s.t. the physics: } \partial_t u = F(u, q).$$

The physics is solved using the function `integration_forward`.
For instance, considering that the parameters are the initial conditions, and the inviscid Burgers' equation, we have $F(u) = -u\,\partial_x u$.
If the cost function means fitting the model on the available data $u^{\mathrm{obs}}$, then one has:

$$J(u) = \frac{1}{2}\int_0^T \left\|u - u^{\mathrm{obs}}\right\|^2\,\mathrm{d}t.$$
We are also considering that the initial conditions are (potentially) related to the parameters $q$ with:

$$u|_{t=0} = g(q).$$

$F$ and $g$ do not have to be constructed explicitly. In practice, only their partial derivatives with respect to $u$ and $q$ will be needed. For instance, if the initial conditions are actually the parameter: $g(q) = q$, and $\partial g/\partial q$ is the identity.
We want to find the argmin of $J$. This formalism is useful in numerous situations, e.g.:
- to identify the parameters of a model so that it matches some data,
- to identify initial conditions that reproduce the provided data as well as possible,
- optimal control, so that $u$ reaches a given state or follows a given trajectory,
- more generally, any user-defined constraint expressed in the form of a cost function $J$.
Identifying the minimum of $J$ relies on the gradient of the functional with respect to the parameters: $\nabla_q J$. Estimating this gradient with finite differences is out of reach if the size of $q$ is large (if $n_q$ is the size of $q$, then $n_q + 1$ evaluations of the system are needed).
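The scaling argument can be checked with a toy cost function; `fd_gradient` below is purely illustrative and performs exactly the $n_q + 1$ evaluations mentioned above:

```python
import numpy as np

def fd_gradient(J, q, eps=1.0e-6):
    """Forward-difference gradient of J: requires size(q) + 1 evaluations."""
    J0 = J(q)                      # 1 reference evaluation
    grad = np.zeros_like(q)
    for i in range(q.size):        # + one evaluation per parameter
        q_pert = q.copy()
        q_pert[i] += eps
        grad[i] = (J(q_pert) - J0) / eps
    return grad

# Toy cost (each call to a real J would be a full PDE solve):
J_toy = lambda q: float(np.sum(q**2))
grad = fd_gradient(J_toy, np.array([1.0, 2.0, 3.0]))
```

For a discretized initial condition, $n_q$ is the number of grid points, so each gradient evaluation would cost hundreds or thousands of full PDE solves; this is what the adjoint approach avoids.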
For that, one can introduce the Lagrangian $\mathcal{L}$, associated with the two Lagrange parameters $\lambda$ and $\mu$ (variables are dropped for visibility):

$$\mathcal{L}(u, q, \lambda, \mu) = J(u, q) + \int_0^T \lambda^\dagger\left(\partial_t u - F(u, q)\right)\mathrm{d}t + \mu^\dagger\left(u|_{t=0} - g(q)\right),$$

where $\cdot^\dagger$ is the transpose operator. Naturally, both $u$ and $q$ are considered as independent variables, and the gradient follows from the first variation $\delta\mathcal{L}$ of the Lagrangian.
The term in $\partial_t\,\delta u$ cannot be easily estimated. An integration by parts gives:

$$\int_0^T \lambda^\dagger\,\partial_t\,\delta u\,\mathrm{d}t = \left[\lambda^\dagger\,\delta u\right]_0^T - \int_0^T \left(\partial_t\lambda\right)^\dagger \delta u\,\mathrm{d}t.$$

The term associated with $\partial_t\,\delta u$ can now be replaced. Ordering terms leads to:

$$\delta\mathcal{L} = \int_0^T \left(\frac{\partial J}{\partial u} - (\partial_t\lambda)^\dagger - \lambda^\dagger\,\frac{\partial F}{\partial u}\right)\delta u\,\mathrm{d}t + \int_0^T \left(\frac{\partial J}{\partial q} - \lambda^\dagger\,\frac{\partial F}{\partial q}\right)\delta q\,\mathrm{d}t - \mu^\dagger\,\frac{\partial g}{\partial q}\,\delta q + \lambda^\dagger(T)\,\delta u(T) + \left(\mu - \lambda(0)\right)^\dagger\,\delta u(0).$$

As $\partial_t u - F(u, q)$ and $u|_{t=0} - g(q)$ are null by construction, $\delta\mathcal{L} = \delta J$, and $\lambda$ and $\mu$ can be designed specifically to alleviate the computations. Indeed, proper choices for $\lambda$ and $\mu$ simplify the expression of $\delta J$. The choice of $\lambda(T) = 0$ nullifies the term $\lambda^\dagger(T)\,\delta u(T)$.
$\lambda$ can then be chosen as the solution of the so-called adjoint equation (see Le Dimet 1986, Talagrand 1997):

$$\partial_t\lambda = \left(\frac{\partial J}{\partial u}\right)^\dagger - \left(\frac{\partial F}{\partial u}\right)^\dagger\lambda, \qquad \lambda(T) = 0.$$

This equation is integrated backward in time using the function `integration_backward`. To do so, $F$ is linearized around the current solution $u$, which provides $\partial F/\partial u$. Finally, the Lagrange parameter $\mu$ is set so that it nullifies the component associated with $\delta u(0)$:

$$\mu = \lambda(0).$$
Then, computing $\delta\mathcal{L}$, hence the gradient $\nabla_q J$, is achieved by the integration of:

$$\nabla_q J = \int_0^T \left(\left(\frac{\partial J}{\partial q}\right)^\dagger - \left(\frac{\partial F}{\partial q}\right)^\dagger\lambda\right)\mathrm{d}t - \left(\frac{\partial g}{\partial q}\right)^\dagger\lambda(0).$$

This equation is solved using the function `gradient_q`.
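To make the whole machinery concrete, here is a self-contained sketch of a discrete adjoint on a toy linear model with $g(q) = q$. The functions mimic the roles of `integration_forward`, `integration_backward` and `gradient_q`, but are not the library's API, and the sign conventions follow the discrete scheme rather than the continuous derivation:

```python
import numpy as np

# Toy linear model du/dt = A u, with parameters q = initial conditions
# (i.e. g(q) = q). All names are illustrative, not the library's API.
rng = np.random.default_rng(0)
n, n_steps, dt = 4, 50, 0.01
A = -np.eye(n) + 0.1 * rng.standard_normal((n, n))
d = rng.standard_normal(n)  # synthetic "observations" at final time

def forward(q):
    """Role of integration_forward: explicit Euler, keeps all states."""
    states = [q]
    for _ in range(n_steps):
        states.append(states[-1] + dt * A @ states[-1])
    return states

def cost(q):
    """J = 0.5 * || u(T) - d ||^2 (misfit at the final time only)."""
    return 0.5 * float(np.sum((forward(q)[-1] - d) ** 2))

def gradient_adjoint(q):
    """Roles of integration_backward + gradient_q: one backward sweep with
    the transposed (adjoint) operator yields the exact discrete gradient."""
    states = forward(q)
    lam = states[-1] - d             # dJ/du at the final time
    for _ in range(n_steps):
        lam = lam + dt * A.T @ lam   # adjoint step, backward in time
    return lam                       # g(q) = q, so dJ/dq = lam at t = 0

q0 = rng.standard_normal(n)
g_adj = gradient_adjoint(q0)
```

The backward sweep costs about as much as one forward integration, independently of the number of parameters, which is the whole point compared with finite differences.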