
Adding an adaptive, arbitrary order, Adams-Bashforth solver #1846

Merged: 23 commits into next on Jun 5, 2020

Conversation

@d7919 (Member) commented Nov 22, 2019

Currently performs similarly to other adaptive explicit solvers such as rkgeneric. It has the advantage that, in certain cases, it needs the minimum possible number of rhs calls per step.

Could possibly be modified to add adaptive order as well as adaptive timestep. This may help improve performance in cases where solver stability can be limiting.

The code may well need some restructuring, but I thought I'd create the PR now so we can have the discussion here.

This is a candidate for backporting and, as it is essentially just new files (and changes to makefiles), it was recommended that I branch from master; hence my PR is currently targeting ~~master~~ v4.4.0-alpha. This should be easy to apply to other branches.
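For readers new to the scheme, below is a minimal standalone sketch of the general idea (not this PR's implementation): a variable-step second-order Adams-Bashforth integrator for a scalar test problem, using the embedded first-order (Euler) result as the error estimate and a standard step-size controller. The problem, tolerance, and controller constants are all illustrative assumptions.

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>

using Rhs = std::function<double(double, double)>;

int main() {
  // Illustrative problem: dy/dt = -y, y(0) = 1, integrated to t = 1.
  const Rhs f = [](double /*t*/, double y) { return -y; };
  const double atol = 1.0e-6;

  double t = 0.0;
  double y = 1.0;
  double dt = 1.0e-2;
  double dt_prev = 1.0e-2;

  // Bootstrap the history with a single Euler step so AB2 has two f values.
  double f_prev = f(t, y);
  y += dt_prev * f_prev;
  t += dt_prev;

  while (t < 1.0) {
    const double f_now = f(t, y);

    // Variable-step AB2: integrate the linear interpolant through
    // (t - dt_prev, f_prev) and (t, f_now) over [t, t + dt].
    const double r = dt / (2.0 * dt_prev);
    const double y_ab2 = y + dt * ((1.0 + r) * f_now - r * f_prev);

    // Embedded first-order (Euler) result gives an error estimate for free.
    const double y_ab1 = y + dt * f_now;
    const double err = std::abs(y_ab2 - y_ab1);

    if (err <= atol) {
      // Accept the step and advance the history.
      dt_prev = dt;
      f_prev = f_now;
      y = y_ab2;
      t += dt;
    }
    // Simple controller: the estimate scales like dt^2, so adjust dt by
    // sqrt(atol/err), clipped to avoid wild jumps. A failed step simply
    // retries with the smaller dt.
    const double factor = 0.9 * std::sqrt(atol / (err + 1.0e-300));
    dt *= std::clamp(factor, 0.1, 2.0);
  }

  std::printf("y(%g) = %g (exact %g)\n", t, y, std::exp(-t));
  return 0;
}
```

The actual solver in this PR works on the full BOUT++ state vector and supports arbitrary order; the sketch only shows the shape of the step / error-estimate / controller loop.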

@d7919 added the proposal (A code/feature outline proposal) and backport candidate (Does not break backward compatibility, so can be back-ported to v4.4) labels on Nov 22, 2019
@d7919 (Member, Author) commented Nov 22, 2019

The Newton-Cotes integration routines are very repetitive, so I'd hope we could make the code a bit more generic and just pass in the parameters, but my initial attempts at doing this didn't work when I wanted to make everything constexpr.
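To illustrate the kind of deduplication being discussed, here is one possible shape for a constexpr Newton-Cotes table plus a single templated integrator (assuming C++17; the names `NewtonCotes` and `integrate_nc` are illustrative, not from this branch):

```cpp
#include <array>
#include <cstddef>

// Closed Newton-Cotes rules:
// integral over [x0, x0 + N*h] ~= prefactor * h * sum_i weights[i] * f(x_i).
template <std::size_t N>
struct NewtonCotes;

template <>
struct NewtonCotes<1> { // trapezoidal rule
  static constexpr double prefactor = 1.0 / 2.0;
  static constexpr std::array<double, 2> weights{1.0, 1.0};
};

template <>
struct NewtonCotes<2> { // Simpson's rule
  static constexpr double prefactor = 1.0 / 3.0;
  static constexpr std::array<double, 3> weights{1.0, 4.0, 1.0};
};

template <>
struct NewtonCotes<3> { // Simpson's 3/8 rule
  static constexpr double prefactor = 3.0 / 8.0;
  static constexpr std::array<double, 4> weights{1.0, 3.0, 3.0, 1.0};
};

// Integrate N+1 equally spaced samples with spacing h. N must be given
// explicitly, e.g. integrate_nc<2>(samples, h) for Simpson's rule.
template <std::size_t N>
constexpr double integrate_nc(const std::array<double, N + 1>& f, double h) {
  double sum = 0.0;
  for (std::size_t i = 0; i <= N; ++i) {
    sum += NewtonCotes<N>::weights[i] * f[i];
  }
  return NewtonCotes<N>::prefactor * h * sum;
}
```

Whether this meshes with the step history the adaptive scheme actually needs is a separate question; the sketch only targets the repetition mentioned above.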

@ZedThree (Member) commented:
I've pushed v4.4.0-alpha, which this can go into instead of master

Fairly crude approx that costs one extra rhs per successful internal
step. For the test-drift-instability test case enabling this option
leads to a factor-of-two reduction in runtime for the longest running
case (but this still remains an order of magnitude longer than with pvode).
@d7919 changed the base branch from master to v4.4.0-alpha on November 25, 2019 19:38
@d7919 (Member, Author) commented Nov 25, 2019

Changed target to v4.4.0-alpha.

I've also added an initial attempt at adaptive order - it could probably be more efficient but still seems to help for some test cases.

@d7919 (Member, Author) commented Nov 25, 2019

Currently the adaptive order has unintended behaviour: as soon as one internal timestep flags that the order should be lowered, we do not increase the order again for the remainder of the output step. On fixing this I saw that the total time actually increased; if I limited the maximum order to 2 (essentially disabling the adaptive order code) I got about the same time as with this "broken" version, so I think the majority of the performance gain was essentially just coming from limiting the order to 2.

With the fixed version I do still find a decent saving for the test case when using a maximum order of 5 and enabling adaptive order (72 s vs 45 s; the "broken" version is about 36 s). Looking into this a bit more, it seems that with the fixed adaptive order approach there are more failed internal steps, likely due to the smaller regions of stability as the order increases (and we spend more time at higher order in the fixed version). This explains some of the increase in runtime but not all of it (maybe 10-20% of it). The rest must come from an increase in the number of right-hand-side calls due to a smaller average dt. We could perhaps be more aggressive in how we pick the timestep, but in practice this seems to generally make things worse.

We currently unconditionally increase the order up to the maximum order unless we've just decided to use the lower order method; perhaps we should be more conservative about this and only increase the order when the timestep gets too small with the current order (or similar).
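One hedged sketch of that more conservative heuristic, with all names and thresholds invented for illustration rather than taken from the branch:

```cpp
#include <algorithm>

// Pick the order for the next internal step. Rather than unconditionally
// climbing back up to max_order, only escalate when the controller keeps
// requesting very small timesteps at the current order.
int choose_next_order(int order, int max_order, bool lower_order_requested,
                      double dt_next, double dt_small_threshold) {
  if (lower_order_requested) {
    // The error check preferred the lower-order result: back off.
    return std::max(1, order - 1);
  }
  if (dt_next < dt_small_threshold) {
    // The current order is forcing tiny steps: try a higher order.
    return std::min(max_order, order + 1);
  }
  // Otherwise stay put instead of always pushing back up to max_order.
  return order;
}
```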

@d7919 (Member, Author) commented Nov 26, 2019

Testing on the interchange test case, the current AB solver seems to fare much worse than rkgeneric with the cash-karp scheme. With AB the test case fails due to exceeding MXSTEP, whilst rkgeneric passes the test with just 7 rhs calls per output time.

One area of difference is the definition of the error: for rkgeneric it's `err = Average<abs(accurate - approx) / (abs(accurate) + abs(approx) + atol)>`, whilst for AB it's `err = Max<abs(accurate - approx)>`. Changing the AB definition (Max -> Avg and the internal calculation) tweaks where the AB case fails, but it still consistently fails. Trying AB non-adaptive I can get the test case to complete, but even with a timestep 1/100th of that used by rkgeneric the test still doesn't pass. Interestingly, the reported test result does not depend on the order used if the timestep is small (1.0), but it does if the step is bigger (100.0, and interestingly the reported error is smaller).
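For concreteness, the two error measures compared above look roughly like this (state arrays reduced to `std::vector<double>` for illustration; the actual solvers work on the BOUT++ state vector):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// rkgeneric-style error: relative difference, averaged over the state.
double err_relative_average(const std::vector<double>& accurate,
                            const std::vector<double>& approx, double atol) {
  double sum = 0.0;
  for (std::size_t i = 0; i < accurate.size(); ++i) {
    sum += std::abs(accurate[i] - approx[i])
           / (std::abs(accurate[i]) + std::abs(approx[i]) + atol);
  }
  return sum / static_cast<double>(accurate.size());
}

// AB-style error: maximum absolute difference over the state.
double err_absolute_max(const std::vector<double>& accurate,
                        const std::vector<double>& approx) {
  double err = 0.0;
  for (std::size_t i = 0; i < accurate.size(); ++i) {
    err = std::max(err, std::abs(accurate[i] - approx[i]));
  }
  return err;
}
```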

@d7919 (Member, Author) commented Nov 26, 2019

Ah, it seems there was a fairly significant bug -- the state vector was only being put into the fields at the end of output steps and inside the adaptive algorithm. This means that for adaptive runs the derivative used at the current time was actually lagging behind by half a time step and for non-adaptive runs it was lagging behind by a whole output step. Fixing this makes the interchange test pass and speeds up the drift test case (down to 28s, c.f. 64s for Cash-Karp) whilst maintaining accuracy. I guess this also highlights that our current tests aren't that sensitive to solvers doing exactly the correct thing.

Also be a little more aggressive with our timestep changes
Now we don't actually calculate the rhs again at half time step point
with the lower order. This saves one rhs call per successful internal
timestep when adaptive_order is true.
@d7919 (Member, Author) commented Nov 27, 2019

Wishlist for future work:

  1. Adaptive order for "fixed" timestep
  2. Adaptive order for failed internal timesteps (currently we only consider changing the order if the internal timestep is successful).
  3. More user control of timestep changes
  4. Tidy up the Newton-Cotes integration to reduce duplication.
  5. More available diagnostics to track order changes, failed steps etc.
  6. Explore adaptive adaptivity -- the primary cost is typically in the rhs, which we have to call once per internal timestep plus once more for each attempt at the internal timestep as part of the error check. We may be able to get away with only calculating the error every other step or similar. Alternatively we may be able to find other ways of doing the error calculation (say interpolating the full step state array to the halfway point and comparing there instead); a rough sketch of that idea is below.
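A rough sketch of the interpolation idea in item 6, purely as an assumption about how it could look (linear interpolation to the midpoint; the comparison point, interpolation order, and norm would all be design choices):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the step error without an extra rhs call: interpolate the
// full-step result back to the midpoint and compare with the state the
// solver actually computed at the halfway point.
double midpoint_interpolation_error(const std::vector<double>& state_start,
                                    const std::vector<double>& state_full,
                                    const std::vector<double>& state_half) {
  double err = 0.0;
  for (std::size_t i = 0; i < state_half.size(); ++i) {
    // Linear interpolation of the full step to the halfway point.
    const double interpolated = 0.5 * (state_start[i] + state_full[i]);
    err = std::max(err, std::abs(interpolated - state_half[i]));
  }
  return err;
}
```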

* next: (953 commits)
  Add paragraph on setting staggered location for Laplacian solvers
  Update info on allowed staggering in derivatives
  Fix reference to sec-iterating
  Add MUMPS removal to list of breaking changes
  Typofix
  fix typo
  Add dash as travis job
  Avoid bashism += in configure
  Fix formatting of 'Obtaining BOUT++' section heading
  Remove MUMPS
  prefer PETSc over petsc with pkg-config
  Find PETSc with pkg-conf using configure
  ensure quiet script isn't to verbose on error
  Don't use old-style-cast
  Adjust field compat messages for fmt
  Better debug messages for fields compatibility
  fix requires in options-netcdf test
  improve error messages
  Ensure ffts are OpenMP thread safe
  Improve CPU detection in run_wrapper
  ...
Avoids else-after-break and reduces indentation
- "implicit bools"
- extra semicolons
- sign comparison
- braced conditionals
@ZedThree changed the base branch from v4.4.0-alpha to next on May 28, 2020 15:40
@ZedThree previously approved these changes May 28, 2020

@ZedThree (Member) commented:
Sorry, I had already merged in next before I realised this was targeting v4.4

@d7919 (Member, Author) commented Jun 5, 2020

Changes look good to me, thanks for tidying this.

@ZedThree merged commit e5f57f4 into next on Jun 5, 2020
@ZedThree deleted the feature_add_adapative_adams_bashforth_solver branch on June 5, 2020 15:49
@ZedThree (Member) commented Jun 5, 2020

This is nice and self-contained and could be back-ported
