Postprocessing the solution to get better accuracy #2711

Open
bangerth opened this issue Nov 27, 2018 · 0 comments

@bangerth
Contributor

As discussed in the telecon this morning, there are ways to postprocess a solution to make it more accurate. The idea is to take the computed solution and apply some (local) postprocessing that yields a more accurate approximation -- typically in a higher-order polynomial space.

The classical example for this is the ZZ (Zienkiewicz-Zhu) "gradient recovery" method, though I am quite sure there are other "recovery" methods for the values of the solution (instead of its gradient) as well. The ZZ recovery method provides a better approximation g_h to the gradient of the solution than simply evaluating grad u_h at individual points (a minimal 1d sketch of the idea is included below). I don't know the literature in this area well, but a starting point for a deeper search may be this publication: https://pdfs.semanticscholar.org/8d94/8025f2a7628c11dd45877f8525d8957d869d.pdf
Apparently, there is an excellent survey paper by my PhD adviser Rolf Rannacher:

Rannacher, Rolf. "Extrapolation techniques in the finite element method: A survey." In: Proceedings of the Summer School in Numerical Analysis at Helsinki 1987, Helsinki University of Technology, pp. 80-113 (1988).

Unfortunately, I can't seem to find an online source for it. It would probably be useful to find this somewhere.
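
To make the recovery idea above concrete, here is a minimal, self-contained 1d sketch (illustration only -- this is not ASPECT or deal.II code, and the uniform mesh and the choice u(x) = sin(x) are just assumptions for the example): the raw gradient of a piecewise linear u_h is only first-order accurate at the nodes, while averaging the element gradients over the patch around each node recovers a second-order accurate value.

```c++
// 1d sketch of ZZ-style gradient recovery (illustration only).
// u_h is the piecewise linear interpolant of u(x) = sin(x) on a uniform
// mesh, so grad u_h is piecewise constant. The recovered gradient g_h
// averages the element gradients adjacent to each node, which at interior
// nodes reproduces the superconvergent centered difference.
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
  const double pi      = std::acos(-1.0);
  const int    n_cells = 32;
  const double h       = pi / n_cells;

  // Nodal values of u(x) = sin(x).
  std::vector<double> u(n_cells + 1);
  for (int i = 0; i <= n_cells; ++i)
    u[i] = std::sin(i * h);

  // Element-wise (piecewise constant) gradients of u_h.
  std::vector<double> grad_uh(n_cells);
  for (int K = 0; K < n_cells; ++K)
    grad_uh[K] = (u[K + 1] - u[K]) / h;

  // ZZ-style recovery: nodal gradient = average over the adjacent elements
  // (one-sided at the two boundary nodes).
  std::vector<double> g(n_cells + 1);
  g[0]       = grad_uh[0];
  g[n_cells] = grad_uh[n_cells - 1];
  for (int i = 1; i < n_cells; ++i)
    g[i] = 0.5 * (grad_uh[i - 1] + grad_uh[i]);

  // Compare both against the exact derivative u'(x) = cos(x) at the node
  // x = pi/4: the raw gradient is O(h) accurate there, the recovered one O(h^2).
  const int i = n_cells / 4;
  std::printf("raw grad u_h error at node:  %g\n",
              std::fabs(grad_uh[i] - std::cos(i * h)));
  std::printf("recovered g_h error at node: %g\n",
              std::fabs(g[i] - std::cos(i * h)));
}
```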

There are other methods, such as Richardson extrapolation. An example is here (by another one of my early mentors, Wolfgang Wendland): http://publications.lib.chalmers.se/records/fulltext/103183/local_103183.pdf, or this: https://www.worldscientific.com/doi/abs/10.1142/9789812792686_0013, or this for Stokes: https://link.springer.com/article/10.1007/s10444-004-1089-0. There are many others that are about the mechanics of Richardson extrapolation. The problem with these is that they require the solution on a hierarchy of successively refined meshes: one extrapolates from the solutions on refinement levels N-1 and N to what one would get on refinement level N+1 (without actually computing on that level, because it is too expensive). This does not fit well into the workflow of ASPECT, at least until we have a working multigrid implementation.
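
Here is a minimal sketch of the extrapolation step itself (illustration only, not ASPECT code; the midpoint-rule integral is just a stand-in for any computed quantity with a known convergence order p): if Q_h = Q + C h^p + o(h^p), then the combination (2^p Q_{h/2} - Q_h) / (2^p - 1) cancels the leading error term.

```c++
// Richardson extrapolation sketch (illustration only). Q_h is the
// midpoint-rule approximation (known order p = 2) of the integral of
// exp(x) over [0,1], whose exact value is e - 1.
#include <cmath>
#include <cstdio>

// Midpoint rule on n equal subintervals of [0,1].
double midpoint_rule(const int n)
{
  const double h   = 1.0 / n;
  double       sum = 0.0;
  for (int i = 0; i < n; ++i)
    sum += std::exp((i + 0.5) * h);
  return sum * h;
}

int main()
{
  const double exact = std::exp(1.0) - 1.0;
  const int    p     = 2;                  // known convergence order
  const double Q_h   = midpoint_rule(16);  // coarser "level N-1"
  const double Q_h2  = midpoint_rule(32);  // finer "level N"

  // Extrapolate to what a finer computation would give, without doing it.
  const double Q_extrapolated =
    (std::pow(2.0, p) * Q_h2 - Q_h) / (std::pow(2.0, p) - 1.0);

  std::printf("error on mesh size h:      %g\n", std::fabs(Q_h - exact));
  std::printf("error on mesh size h/2:    %g\n", std::fabs(Q_h2 - exact));
  std::printf("error after extrapolation: %g\n", std::fabs(Q_extrapolated - exact));
}
```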

All of these techniques fundamentally require that the solution is sufficiently smooth -- say, H^2 instead of just H^1. Otherwise, the local error expansion in terms of the mesh size won't work. They also require that the mesh size is "sufficiently small" so that the higher-order terms of the error expansion become negligible. Given that we are perpetually under-resolved and that we have jumping coefficients due to the material nonlinearity, I don't know that either of these two preconditions is satisfied in practice. But it would probably be a useful exercise to have a student make this (a part of) their PhD thesis, for example, and really investigate whether these techniques can be useful for our purposes.
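
For reference, the structure these requirements refer to is, written schematically (this is an illustration of the form of such expansions, not a precise theorem for any particular discretization):

```latex
% Schematic pointwise error expansion underlying recovery/extrapolation.
% The coefficient C(x) involves higher derivatives of u, which is why more
% regularity than the bare energy estimate is needed (H^2 rather than just
% H^1 in the simplest case), and the remainder is only negligible once the
% mesh size h is "sufficiently small".
u_h(x) = u(x) + C(x)\, h^{p} + o(h^{p}), \qquad h \to 0 .
```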
