Some time integrators do not work as expected #12347
Comments
Just wanted to note that a modified version of @YaqiWang's input file, which solves the same problem without involving the "stateful" AuxKernel, behaves as expected. So I think the issue is definitely traceable to that AuxKernel.
I added a print statement at the top of the relevant code.
I am now looking more closely at this class.
I think this class might be the problem. After putting a print statement in this class, I see:
every time. This corresponds to the dt I'm setting in the input file, but I don't think it's correct for multistage integrators, whose intermediate stages should see only a fraction of the step.
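To illustrate the point (a hypothetical sketch, not MOOSE code; the scheme and numbers here are assumptions): in a two-stage scheme such as explicit midpoint, the elapsed time between successive residual evaluations is a fraction of dt, so an object that reads the full step size at every evaluation is mis-scaled.

```python
# Hypothetical sketch (not MOOSE code): explicit midpoint evaluates the
# residual at t_n and t_n + dt/2 in each step, so the elapsed time
# between successive evaluations is dt/2, never the full step size that
# a naive "_dt" lookup would report.

dt = 0.1
c = [0.0, 0.5]          # stage abscissae for explicit midpoint (assumed)

eval_times = []
t_n = 0.0
for step in range(3):   # three time steps
    for ci in c:
        eval_times.append(t_n + ci * dt)
    t_n += dt

increments = [b - a for a, b in zip(eval_times, eval_times[1:])]
print(increments)       # every entry is ~0.05 == dt/2; none is dt
```

The same bookkeeping applies to any tableau: the stage-to-stage increments are differences of c_i * dt, not dt itself.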
I tried to save the old-time mass matrix in the Crank-Nicolson integrator in the above commit, hoping to restore convergence even when we have time-dependent quantities in the time kernel, but I failed to make it work with mesh adaptation. I must be missing something; I think the idea should be OK. Can anyone see any obvious issue in my commit?
The idea is based on @jwpeterson's thesis.
Hmm... this probably isn't surprising. The matrix you saved is now the wrong size if you subsequently adapted the mesh. The matrix would need to be "projected" onto the new mesh, similarly to the way vectors are projected, but projecting matrices isn't really a thing...
I think when the mesh is adapted, equation system reinitialization will reinit all the matrices, including the one I added. It also projects the solution onto the new mesh, so I can call computeJacobian to evaluate the Jacobian on the new mesh. But somehow the newly calculated Jacobian, which for this particular problem should be the same as the Jacobian on the new mesh in the new time step, does not give the time residual when multiplied with udot.
Oh, that may be right, if you added it to the system that way. Before you get too far down this path, though, remember that it still won't necessarily give you 2nd-order convergence (Eq. B-2.8) unless the dependence of the mass matrix on time is fairly weak (Eq. B-2.9).
Oh. @jwpeterson, do you have any experience handling a time-dependent stabilization parameter in SUPG? I guess I will just stay with the Euler scheme for an easy life...
Ok, I will stop working on this. I cannot figure out why assembling the time matrix and multiplying it with udot does not give the same time residual when the mesh is adapted. I updated my commit to make the issue clearer.
You mean with regard to verifying second-order accuracy in time? I have not looked at it in detail in MOOSE-based codes, as first-order convergence or just time-stepping to steady state was usually good enough for my purposes. In traditional libMesh codes or the FEMSystem stuff, it is possible to assemble the different terms at different points in time, and I believe I used that to implement the version of Crank-Nicolson in (B-2.16), but that was back during my dissertation work...
Yes, I will just use Euler or our current Crank-Nicolson. MOOSE's Crank-Nicolson is not wrong, though; it just could potentially lose convergence order. Thanks.
Just to summarize my findings regarding the issues with combining the stateful AuxKernel with the multistage time integrators: the implicit and explicit cases fail for different reasons, but both come down to how the kernel's accumulated state interacts with the extra residual evaluations these schemes perform within each step.
I'm not really sure how to fix all these issues, but I think it's clear the stateful AuxKernel is implicated. One idea is to split its functionality into two classes, one of which can compute \int_{t_n}^{t_alpha} f(t) dt for any t_alpha within the current step, but doesn't keep a running sum and so doesn't get messed up by the repeated stage evaluations.
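A minimal sketch of that split, with invented class names and a simple midpoint quadrature standing in for the real per-step integral (an illustration of the idea, not a proposed MOOSE API):

```python
# Hypothetical sketch of the proposed split (class names are invented).
# StepIncrement is stateless: it can evaluate the partial-step integral
# \int_{t_n}^{t_alpha} f(t) dt on demand, as often as a stage likes.
# RunningSum is stateful but is committed exactly once per step, so
# repeated stage evaluations cannot corrupt it.

class StepIncrement:
    """Stateless partial-step integral, recomputed on every call."""
    def __init__(self, f):
        self.f = f

    def integral(self, t_n, t_alpha, n_quad=16):
        # simple midpoint quadrature over [t_n, t_alpha]
        h = (t_alpha - t_n) / n_quad
        return h * sum(self.f(t_n + (i + 0.5) * h) for i in range(n_quad))


class RunningSum:
    """Stateful accumulator, advanced once at the end of each step."""
    def __init__(self):
        self.total = 0.0

    def commit(self, increment):
        self.total += increment


inc = StepIncrement(lambda t: t)   # exact \int_0^1 t dt = 0.5
acc = RunningSum()

n_steps = 100
dt = 1.0 / n_steps
for i in range(n_steps):
    # stages may call inc.integral(t, t + c*dt) freely; only the
    # final full-step increment is committed to the running sum
    acc.commit(inc.integral(i * dt, (i + 1) * dt))

print(acc.total)   # ~0.5
```

Because the increment object holds no running state, it is also immune to the advanceState() timing problems discussed above.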
Looks like the root cause is the mismatch between the independent time integration in aux kernels, which users are responsible for, and the time integrator itself. I am not even sure it is possible to make the time integration in an aux kernel consistent with the time integrator.
Hi @YaqiWang, yes, we were discussing this over here as well, and I agree with you. Our current idea is that one probably has to use the full "backup & restore" capability (like what is used in MultiApps) to get the multistage integrators working with stateful objects. In the meantime, I will be adding a note to the affected classes.
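A toy sketch of the backup & restore idea (the class, method names, and step structure here are all invented for illustration; they are not the MultiApps implementation):

```python
# Hypothetical sketch (all names invented) of backup & restore around a
# multistage step: snapshot the stateful object's data before the
# intermediate stage evaluations, roll it back afterwards, and advance
# the state exactly once for the completed step.
import copy


class StatefulAux:
    def __init__(self):
        self.old = 0.0       # state from the previous step
        self.current = 0.0   # state being built this step

    def evaluate(self, contribution):
        self.current = self.old + contribution

    def advance_state(self):
        self.old = self.current


def take_step(aux, stage_contributions, final_contribution):
    backup = copy.deepcopy(aux.__dict__)  # snapshot before the stages
    for c in stage_contributions:
        aux.evaluate(c)                   # trial stage evaluations
    aux.__dict__.update(backup)           # roll back stage side effects
    aux.evaluate(final_contribution)      # the real end-of-step update
    aux.advance_state()                   # advance exactly once per step


aux = StatefulAux()
take_step(aux, [0.3, 0.7], 0.1)
take_step(aux, [0.2, 0.4], 0.1)
print(aux.old)   # 0.2: only the per-step contributions accumulated
```

The point is that the stage evaluations can mutate whatever they like; the snapshot guarantees the committed state reflects exactly one update per step.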
The sad part is that in Rattlesnake, we often use aux kernels to evaluate delayed neutron precursors, since they do not have diffusion or convection terms. Also, we often treat the temperature equation without conduction (diffusion) and thus use an aux kernel to evaluate the temperature. Things can get more complicated in multiphysics. I remember I once did coupled tensor mechanics and radiation transport, where I was using MOOSE time integrators and using Newmark for the tensor mechanics time kernels. We possibly just need to disable that combination.
These TimeIntegrators are known not to work with Materials, AuxKernels, etc. that have state (e.g. VariableTimeIntegrationAux) because they either:

* Call advanceState() internally, causing the state to be accumulated too frequently, or
* Don't call advanceState() internally, causing "old" and "older" and things like "_dt" to not be reliable.

The following is a current list of multistage TimeIntegrators:

* LStableDirk{2,3,4}
* AStableDirk4
* ExplicitRK2 subclasses (Heun, ExplicitMidpoint, Ralston)
* ExplicitTVDRK2
* ImplicitMidpoint

See also the discussion in idaholab#12347.
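The two failure modes can be mimicked with a toy model (a sketch under assumed names and structure, not the real classes): a running sum meant to accumulate f(t)*dt once per time step, driven by a scheme that evaluates the residual twice per step.

```python
# Hypothetical toy model of the failure modes above (not MOOSE code).
# A running-sum variable should accumulate f(t)*dt once per time step;
# a two-stage scheme evaluates the residual twice per step. The exact
# answer for f(t) = t over [0, 1] is 0.5.

def run(mode, n=1000):
    f = lambda t: t
    dt = 1.0 / n
    old = current = 0.0
    for i in range(n):
        t = i * dt
        for stage_t in (t, t + dt):          # two residual evaluations
            current = old + f(stage_t) * dt
            if mode == "advance_every_stage":
                old = current                 # state accumulated too often
        if mode == "advance_once_per_step":
            old = current                     # the intended behavior
        # mode == "never_advance": "old" is never updated at all
    return current

print(run("advance_every_stage"))    # ~1.0   (double-counted)
print(run("never_advance"))          # ~0.001 (sum never accumulates)
print(run("advance_once_per_step"))  # ~0.5   (correct to first order)
```

Advancing per stage double-counts every step's contribution; never advancing means only the last evaluation survives; only the once-per-step variant approximates the integral.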
Maybe we should have an aux kernel built specifically for this purpose.
Rationale
I am able to create a simple test showing the unexpected behavior of the time integrators. The test is small, so I will just copy it here:
Solution of u is t*t/2. No spatial discretization error should be seen. Thus the exact intu is 0.5.

Description
* Backward Euler: error is dt/2, first order;
* Forward Euler: error is dt/2, first order;
* BDF2: error is about 3/4*dt^2, second order;
* Crank-Nicolson: error is zero;
* Explicit-midpoint: converges to wrong value 1;
* DIRK: error is dt/2, first order;
* Explicit-tvd-rk-2: converges to wrong value 1.
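The first-order errors and the exact Crank-Nicolson result can be reproduced with plain quadrature analogues of the accumulation (a sketch assuming the aux variable is a running time integral of a linear-in-time quantity, as in the test above, not the actual MOOSE input):

```python
# Sketch (assumed quadrature analogues, not the actual MOOSE test) of
# why the reported errors look the way they do: the exact integral of
# f(t) = t over [0, 1] is 0.5.

def right_endpoint(n):          # backward-Euler-style accumulation
    dt = 1.0 / n
    return sum((i + 1) * dt * dt for i in range(n))

def trapezoid(n):               # Crank-Nicolson-style accumulation
    dt = 1.0 / n
    return sum(((i * dt) + ((i + 1) * dt)) / 2.0 * dt for i in range(n))

for n in (10, 20, 40):
    # right-endpoint error is exactly dt/2 and halves with dt
    # (first order); trapezoid is exact for the linear f(t) = t
    print(n, right_endpoint(n) - 0.5, trapezoid(n) - 0.5)
```

Halving dt halves the right-endpoint error, matching the observed dt/2 first-order behavior, while the trapezoid sum is error-free for a linear integrand, matching Crank-Nicolson's zero error.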
Looks like BE, FE, BDF2, and CN work as expected, but not the rest. I think these integrators cannot handle stateful aux variables properly. I did not try stateful material properties, but I guess we would see a similar issue.
Impact
Fix buggy time integrators.