Algebraic multigrid #1803
Conversation
Added a new MMS test for 2d Laplacian solvers, but on a 3d circular-cross-section-tokamak grid.
…ntegrated-test_next-merge
Copied from test-multigrid_laplace with laplace:type changed.
Major change is to correct the use of 'x' as poloidal flux (starting from the inner edge of the grid) vs. [0,1]. Fix the calculation of dx/dy/dz: we were previously, incorrectly, using Lx as the box length in all directions, whereas it should be the grid width psiwidth in the x-direction and 2pi in the y- and z-directions. Add a psiN0 argument to SimpleTokamak to choose where in minor radius to start the grid.
This is currently not working in mms_alternate.py because the symbolic integration to calculate zShift fails.
Useful to reduce size of z-direction
Makes grid closer to square, easier for iterative Laplace solvers.
This option may help to normalize matrix coefficients in iterative Laplacian solvers, aiding convergence (at least in simple tests).
Normalize length scales to make gradients closer to order unity. The change makes q and the shear small (q=.5+.1*psiN instead of q=2+psiN**2). The test now passes when using the iterative multigrid solver.
…gebraic_multigrid
This reverts commit 5451af4.
Was applying petscamg matrix and BOUT++ operators to a field which did not have a Dirichlet boundary correctly set numerically. This was causing a discrepancy in the last grid cell.
Can change this to change the size of the radial domain
tests/MMS/laplace3d/runtest now (i) takes command line argument to specify the maximum number of processors to use and (ii) saves plots as pdf instead of showing interactively.
If Christoffel symbols and G1/G2/G3 are present in the input (grid file or [mesh] section of BOUT.inp) then use these instead of calculating from the metric with finite differences.
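To make that fallback concrete, here is a generic, self-contained sketch (not the actual Coordinates/Mesh code in BOUT++; names like `getOrCompute` and `finiteDifferenceEstimate` are made up for illustration) of preferring values supplied in the input and computing them only when they are absent:

```cpp
// Generic sketch of the fallback described above: prefer values read from the
// input (grid file or [mesh] section), and only compute an estimate with
// finite differences when they are missing. Not BOUT++'s actual implementation.
#include <iostream>
#include <map>
#include <string>

double finiteDifferenceEstimate() { return 0.0; } // placeholder calculation

double getOrCompute(const std::map<std::string, double>& input,
                    const std::string& name) {
  auto it = input.find(name);
  return (it != input.end()) ? it->second : finiteDifferenceEstimate();
}

int main() {
  std::map<std::string, double> mesh_input{{"G1", 0.1}}; // e.g. values from the input
  std::cout << "G1 = " << getOrCompute(mesh_input, "G1") << "\n"; // taken from input
  std::cout << "G2 = " << getOrCompute(mesh_input, "G2") << "\n"; // computed fallback
  return 0;
}
```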
Assuming the Travis backlog clears overnight, the last two split PRs will go into next tomorrow, then I'll merge next in here and fix the conflicts. @johnomotani I can't see …
👍 @ZedThree I think …
* next: (254 commits)
  Return current position from MsgStack::push for OpenMP fix
  Remove unused macro in msg_stack header
  Temporary fix for MsgStack thread safety: only one thread
  Fix typo in example/performance/communications
  Make local variables const and reduce scope in XZ interpolations
  Add region arguments for methods of Interpolation classes
  Make build_and_log build the test for CMake
  Pass method and region arguments to D2DXDY in Laplace(Field2D)
  Simplify boundary loops in LaplaceXY::precon()
  Copy parallel slices in Field3D copy ctor
  Add Field3D move assignment operator
  Fix infinite recursion in identity transform calcParallelSlices
  fix for running unit-test as root
  mention that *_ROOT is optional
  Don't increase iterations before GMRES restart in test-laplacexy*
  Fix indexing in LaplaceXY preconditioner
  Better comments in LaplaceXY::solveFiniteDifference()
  Keep corner boundary cells from solution for finite-difference LaplaceXY
  Correct comments in finite-difference LaplaceXY preallocation
  Remove communications of corner cells in LaplaceXY
  ...
@johnomotani Dealing with the conflicts went pretty smoothly -- just keep the version from …

EDIT: I can reproduce the failing test on my machine now, so hopefully tomorrow I'll get a bit closer to working out what's causing it.
This reverts commit 0589ce6.
Keep the one from …
Thanks to the magic of ASan, I think I've found the issue:
This is essentially because PETSc is not thread safe. A minimal fix is:

```diff
@@ -275,7 +279,7 @@ void LaplacePetsc3dAmg::updateMatrix3D() {

   // Set up the matrix for the internal points on the grid.
   // Boundary conditions were set in the constructor.
-  BOUT_FOR(l, indexer->getRegionNobndry()) {
+  BOUT_FOR_SERIAL(l, indexer->getRegionNobndry()) {

     // Index is called l for "location". It is not called i so as to
     // avoid confusing it with the x-index.
@@ -352,7 +356,7 @@ void LaplacePetsc3dAmg::updateMatrix3D() {

   // Must add these (rather than assign) so that elements used in
   // interpolation don't overwrite each other.
-  BOUT_FOR(l, indexer->getRegionNobndry()) {
+  BOUT_FOR_SERIAL(l, indexer->getRegionNobndry()) {
     BoutReal C_df_dy = (coords->G2[l] - dJ_dy[l]/coords->J[l]);

     if (issetD) {
       C_df_dy *= D[l];
```

but really, we should only be using …

@johnomotani @cmacmackin How important is OpenMP to the performance here? There's a second issue here that …
@ZedThree we haven't tested performance at all yet, but as PETSc isn't thread-safe I would not expect to be using OpenMP with this solver. I think the aim was just to make the BOUT++ part thread-safe, and then wrap the PETSc calls in …

Would it be enough to just put a …
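As a rough, self-contained illustration of the pattern being discussed (not code from this PR: `insert_coefficient` and `matrix_diagonal` are stand-ins for a non-thread-safe library call such as PETSc's MatSetValues), the per-point work can stay threaded while the insertion itself is serialised with a critical section:

```cpp
// Minimal sketch of the "critical section" idea: keep the loop threaded, but
// let only one thread at a time call into the non-thread-safe library.
// Compile with e.g.: g++ -fopenmp -o sketch sketch.cpp
#include <cstdio>
#include <vector>

std::vector<double> matrix_diagonal(100, 0.0);

void insert_coefficient(int row, double value) {
  // Stand-in for a non-thread-safe call: concurrent calls could race
  matrix_diagonal[row] += value;
}

int main() {
#pragma omp parallel for
  for (int l = 0; l < 100; ++l) {
    const double coefficient = 2.0 * l;  // per-point work remains parallel
#pragma omp critical
    insert_coefficient(l, coefficient);  // serialised insertion
  }
  std::printf("matrix_diagonal[10] = %g\n", matrix_diagonal[10]);
  return 0;
}
```

The trade-off is that a critical section around every insertion can easily cost more than the parallelism gains, which is why simply using the serial loop may be the more sensible option here.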
I think even accesses to …

The tests are passing now so I'm going to go read through the changes again, apply some polish where needed, and then it's ready to merge.
Switching … However, it then requires an explicit call to …
I've got a bunch of changes for this, but they're stacking up. It might be best to just merge this now, and I'll open separate PRs for my changes. @johnomotani thoughts?
```
g_13 = 0.
g_23 = Bt*hthe*Rxy/Bp

\int_theta0^theta{nu dtheta}
```
Should this be a comment?
Comment?
I think that's a good idea. The solver does work and pass tests as-is, and it'll be easier to review changes in separate PRs.
🎉 Thanks @cmacmackin @johnomotani !
This is my work on the algebraic multigrid solver for the fully-3D Laplace inversion. It is currently passing all unit tests, but these only cover simple Cartesian metrics on a single processor. I am awaiting integration tests to try it in more complex scenarios. There is a little bit of simple optimisation still required prior to merging, and possibly some tidying up here and there.
There is also the implementation of an AMG solver for the 2D Laplace inversion, written by a previous developer. I have not been involved with this code and cannot vouch for its behaviour.
I have implemented a new interface to PETSc (see issue #1771) which allows vectors and arrays to be set using the local BOUT++ index objects for that processor. The wrapper will automatically handle conversions to global indexing for PETSc. It also handles the interpolation weights needed when calculating along-field values in the y-direction. It has been extensively unit tested. The unit tests for the index conversion ability required some modest refactoring of the Mesh class, making some methods virtual which weren't previously and adding methods which wrap some MPI calls (so that they can be overridden for testing purposes). This allows the parallel behaviour of the indexing to be tested while running the unit tests in serial.
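As a loose illustration of what the local-to-global conversion means (not the PR's actual wrapper; the class and method names below are invented for the sketch), here is a self-contained example for a 1D domain split evenly across processors:

```cpp
// Illustrative sketch only: converting a processor-local point index to the
// global row index a distributed matrix library such as PETSc expects. The
// real wrapper described above also handles guard cells and the interpolation
// weights for along-field (y-direction) values.
#include <cassert>

struct LocalToGlobal {
  int rank;            // this processor's MPI rank
  int points_per_rank; // number of locally owned points
  int toGlobal(int local) const { return rank * points_per_rank + local; }
};

int main() {
  LocalToGlobal map{2, 16};               // e.g. rank 2, 16 points per rank
  assert(map.toGlobal(5) == 2 * 16 + 5);  // local index 5 -> global row 37
  return 0;
}
```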
The way @johnomotani changed D2DXDY made my unit tests of the Laplace solver fail, as the forward version of the Laplace operator I had defined could no longer work. See my email to John below:

…

He replied:

…
Ultimately, the resolution which I settled on was to add a `dfdy_boundary_condition` argument to the Laplace routines. I also added an argument to D2DXDY called `dfdy_region`, defining what region to take the DDY on. It defaults to an empty string, which indicates that this should be the same region as the D2DXDY operation as a whole. I added the `dfdy_dy_region` argument to the Laplace routines, which can then be passed to D2DXDY.

None of this changes any default behaviour. However, it allows me to make my call to the Laplace operator in my tests with the form `Laplace_perp(f, CELL_DEFAULT, "free", "RGN_NOY")`, which ensures I get the behaviour I need. I am of course willing to reconsider any of these changes to the codebase if people object to them.
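As a small self-contained illustration of the empty-string convention just described (not the PR's code; `resolveRegion` is a made-up helper), an empty `dfdy_region` simply falls back to the region of the enclosing operation:

```cpp
// Sketch of the default-region convention: an empty dfdy_region string means
// "use the same region as the D2DXDY operation as a whole".
#include <iostream>
#include <string>

std::string resolveRegion(const std::string& dfdy_region,
                          const std::string& outer_region) {
  return dfdy_region.empty() ? outer_region : dfdy_region;
}

int main() {
  std::cout << resolveRegion("", "RGN_NOBNDRY") << "\n";        // prints RGN_NOBNDRY
  std::cout << resolveRegion("RGN_NOY", "RGN_NOBNDRY") << "\n"; // prints RGN_NOY
  return 0;
}
```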