The current release only supports the problem in __the rectangular domain__.
## Explicit Residual Method in a 1D Problem
### Problem Formulation
Let $u(x) \in H^1\left([0, 1]\right)$ be the solution of the problem

$$ \left( \alpha u' \right)' + f = 0, x \in \left[0, 1\right], $$

$$ u(0) = U_0, u(1) = U_1, $$

where $\alpha(x) \ge \alpha_0 > 0, f \in L^2\left([0, 1]\right)$. <br>

The code block _1D_ numerically verifies the inequality relating the error of the Galerkin approximation $u_h$ to the residual [[1]](#1)

$$ \alpha_0\Vert u' - u_h' \Vert^2 \le
\left(\frac{h}{\pi}\right)^2 \Vert (\alpha u_h')' + f \Vert^2, $$

where $\Vert \cdot \Vert$ is the norm in $L^2\left([0, 1]\right)$. <br>
<!-- TODO: uncomment when the report is published. -->
<!-- The essay [[2]](#2) also has details and a numerical example. -->
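
A minimal sketch of such a check, assuming $\alpha \equiv 1$, $f = \pi^2 \sin(\pi x)$ (so $u = \sin(\pi x)$, $U_0 = U_1 = 0$), a uniform mesh of linear elements and an arbitrary element count `n`; this is an independent illustration, not the repository's _1D_ code block.

```matlab
% Illustrative check of the residual bound (not the repository's 1D code block).
% Assumes alpha = 1, f = pi^2*sin(pi*x), exact solution u = sin(pi*x), U0 = U1 = 0.
n = 16;                                   % number of elements (arbitrary choice)
h = 1 / n;
x = linspace(0, 1, n + 1)';

f        = @(s) pi^2 * sin(pi * s);       % right-hand side
du_exact = @(s) pi * cos(pi * s);         % derivative of the exact solution

% Linear FEM for -u'' = f with homogeneous Dirichlet conditions:
% tridiagonal stiffness matrix over the n-1 interior nodes.
A = (1 / h) * (2 * eye(n - 1) - diag(ones(n - 2, 1), 1) - diag(ones(n - 2, 1), -1));
b = zeros(n - 1, 1);
for i = 1:n - 1
    hat  = @(s) max(0, 1 - abs(s - x(i + 1)) / h);        % hat function at node i+1
    b(i) = integral(@(s) f(s) .* hat(s), x(i), x(i + 2));
end
uh = [0; A \ b; 0];                       % Galerkin approximation at the nodes

% Left-hand side: ||u' - u_h'||^2, with u_h' constant on each element.
lhs = 0;
for i = 1:n
    s_i = (uh(i + 1) - uh(i)) / h;
    lhs = lhs + integral(@(s) (du_exact(s) - s_i).^2, x(i), x(i + 1));
end

% Right-hand side: (h/pi)^2 ||(alpha*u_h')' + f||^2; for linear elements and
% alpha = 1 the elementwise derivative (u_h')' vanishes, so only f remains.
rhs = (h / pi)^2 * integral(@(s) f(s).^2, 0, 1);

fprintf('||u'' - u_h''||^2 = %.3e <= (h/pi)^2 ||residual||^2 = %.3e\n', lhs, rhs);
```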

### Problem Formulation
The function $u \in \mathring{H}^1(\Omega)$ is the solution to the problem

$$ \Delta u + f = 0 \textrm{ in } \Omega \subset \mathbb{R}^2, $$

$$u \vert_{\partial \Omega} = 0 ,$$

where the function $f \in L^2(\Omega)$.

Let $u_h$ here also denote the Galerkin approximation, computed on a mesh of finite elements $\{ T_i \}_{i=1}^N$.

The error indicator is a field constructed from a vector function $\boldsymbol{y}$ and defined on each finite element $T_i$ as

$$ \Vert \nabla u_h - \boldsymbol{y} \Vert_i^2 $$
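
On a triangular mesh with a piecewise-linear $u_h$ this field can be assembled elementwise; the sketch below assumes `p`/`t` mesh arrays, nodal values `uh`, and centroid values `y` of $\boldsymbol{y}$ (the variable names are illustrative assumptions, not the repository's interface).

```matlab
% Illustrative evaluation of the per-element indicator ||grad u_h - y||_i^2.
% Assumed inputs (not the repository's interface):
%   p  : 2 x Nv node coordinates,  t : 3 x Nt triangle connectivity,
%   uh : Nv x 1 nodal values of the P1 approximation,
%   y  : 2 x Nt values of the vector field y at the element centroids.
Nt  = size(t, 2);
eta = zeros(Nt, 1);
for k = 1:Nt
    v    = t(1:3, k);                        % vertices of triangle k
    P    = p(:, v);
    B    = [P(:, 2) - P(:, 1), P(:, 3) - P(:, 1)];
    area = abs(det(B)) / 2;
    G    = (B') \ [-1 1 0; -1 0 1];          % gradients of the P1 basis functions
    gradUh = G * uh(v);                      % constant gradient of u_h on T_k
    d      = gradUh - y(:, k);
    eta(k) = area * (d' * d);                % one-point (centroid) quadrature
end
```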

Two indicators are described below.

### Minimizing the Majorant $M_+$
The approximation error estimate is [[1]](#1)

$$ \Vert \nabla (u - u_h) \Vert^2 \le M_+^2(u_h, \boldsymbol{y}, \beta), \quad \forall \boldsymbol{y} \in H(\Omega, \textrm{div}), \beta > 0, $$

where $\Vert \cdot \Vert$ is the norm in $L^2(\Omega, \mathbb{R}^2)$ and the majorant is defined as

$$ M_+^2(v, \boldsymbol{y}, \beta) = (1+\beta) \int\limits_{\Omega} \vert \nabla v - \boldsymbol{y} \vert^2 dx + \left(1+\frac{1}{\beta}\right) C_F^2 \int\limits_{\Omega} \vert \textrm{div} \boldsymbol{y} + f \vert^2 dx $$

(here $C_F$ is the Friedrichs constant of the domain $\Omega$).
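
The majorant itself can be evaluated elementwise in the same way as the indicator; the sketch below assumes a nodal (piecewise-linear) field $\boldsymbol{y}$ given by `y1`, `y2`, centroid values `fK` of $f$, one-point quadrature, and given `CF` and `beta` (all of this is an assumed data layout for illustration only).

```matlab
% Illustrative evaluation of M_+^2(u_h, y, beta) with one-point quadrature.
% Assumed inputs: p, t, uh as above; y1, y2 (Nv x 1) nodal components of y,
% fK (Nt x 1) values of f at the element centroids, CF and beta given.
term_dual = 0;                               % integral of |grad u_h - y|^2
term_res  = 0;                               % integral of |div y + f|^2
for k = 1:size(t, 2)
    v    = t(1:3, k);
    P    = p(:, v);
    B    = [P(:, 2) - P(:, 1), P(:, 3) - P(:, 1)];
    area = abs(det(B)) / 2;
    G    = (B') \ [-1 1 0; -1 0 1];          % P1 basis gradients (2 x 3)
    gradUh = G * uh(v);
    ymid   = [mean(y1(v)); mean(y2(v))];     % y at the centroid
    divY   = G(1, :) * y1(v) + G(2, :) * y2(v);   % constant per element for P1 y
    term_dual = term_dual + area * sum((gradUh - ymid).^2);
    term_res  = term_res  + area * (divY + fK(k))^2;
end
Mplus2 = (1 + beta) * term_dual + (1 + 1 / beta) * CF^2 * term_res;
```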

The vector function $\boldsymbol{y}^*$, which is used to construct the $M_+$-indicator, is

$$ \boldsymbol{y}^* = \underset{{\boldsymbol{y}}, \beta > 0}{\textrm{argmin }} M_+^2(u_h, \boldsymbol{y},\beta). $$
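
For a fixed $\boldsymbol{y}$ the minimization over $\beta$ has a closed form: with $a = \Vert \nabla u_h - \boldsymbol{y} \Vert^2$ and $b = C_F^2 \Vert \textrm{div} \boldsymbol{y} + f \Vert^2$, the function $(1+\beta)a + \left(1+\frac{1}{\beta}\right)b$ is minimized at

$$ \beta^* = \sqrt{\frac{b}{a}}, \qquad M_+^2(u_h, \boldsymbol{y}, \beta^*) = \left( \sqrt{a} + \sqrt{b} \right)^2, $$

which makes alternating minimization (a quadratic problem in $\boldsymbol{y}$ for fixed $\beta$, followed by this explicit update of $\beta$) a natural way to approach $\boldsymbol{y}^*$.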

### How to use
The user must load the parameters of their problem into the workspace (they can be exported from __pdeModeler__) and pass them to the constructor of Indicator. <br>
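
A hypothetical call, only to illustrate the workflow: the variable names `p`, `e`, `t`, `u` follow the usual pdeModeler export conventions (mesh and solution), while the actual signature of the Indicator constructor is defined in this repository and may differ.

```matlab
% Hypothetical usage sketch: the Indicator constructor arguments shown here
% are an assumption; consult the repository for the actual signature.
% In pdeModeler, "Export Mesh" gives p, e, t and "Export Solution" gives u.
ind = Indicator(p, e, t, u);     % hypothetical constructor call
```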