Implement Poisson solver based on integrated Green functions #4648
Conversation
All complex types are equal, but some complex types are more equal than others. -- George Orwell
Awesome 🚀 ✨
@aeriforme Are you able to reproduce the build of the Sphinx documentation on your local computer? I wonder why it fails here and not on other PRs.
Found it 🎉 Just a small typo: the bib file has invalid syntax. Will fix.
I updated the checksum. It was a negligible change in …
#if (defined(WARPX_USE_PSATD) && defined(WARPX_DIM_3D))
// Use the Integrated Green Function solver (FFT) on the coarsest level if it was selected
Small update I would suggest:
- If FFTs were not compiled in, but `is_solver_multigrid=false` is passed to this function, then this silently falls back to an MLMG solve for the lowest level. This should raise a runtime error, because we cannot fulfill the API/user request (see the sketch after this list).
- An alternative, throwing a warning and a "loud" fallback to MLMG, is not ideal in practice, because boundaries/padding & resolution would be quite different.
- I realized the `is_solver_multigrid` name is a bit confusing, because we do the IGF only on level 0 and an MLMG on all others in MR. Maybe we find a good alternative name, e.g., `is_solver_igf_on_lev0` (most explicit/good?) or `is_mlmg_all_levels` (kinda like now).
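For illustration only, here is a minimal sketch of the kind of runtime guard meant in the first point above. The function name and signature are placeholders; only the `WARPX_USE_PSATD`/`WARPX_DIM_3D` macros and the `is_solver_multigrid` flag come from this PR.

```cpp
#include <stdexcept>

void computePhiIGF (/* ...charge density, potential, geometry... */
                    bool is_solver_multigrid)
{
#if !(defined(WARPX_USE_PSATD) && defined(WARPX_DIM_3D))
    if (!is_solver_multigrid) {
        // The caller requested the IGF (FFT-based) solver, but FFT support was
        // not compiled in: fail loudly instead of silently falling back to an
        // MLMG solve with different boundaries/padding and resolution.
        throw std::runtime_error(
            "Poisson solver: the integrated Green function (FFT) solver was "
            "requested, but the code was compiled without FFT support "
            "(requires -DWarpX_PSATD=ON and 3D geometry).");
    }
#endif
    // ... existing MLMG / IGF solve path ...
}
```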
Thanks! Yes, we need the check in ABLASTR directly because that is a library that we also call from ImpactX 🙏
This uses the method from:
https://journals.aps.org/prab/abstract/10.1103/PhysRevSTAB.9.044204
https://journals.aps.org/prab/pdf/10.1103/PhysRevSTAB.10.129901
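In brief (my paraphrase of the linked references, not wording from this PR): the free-space potential is written as a discrete convolution of the charge density with the Green function integrated over each source cell,

$$
\phi_{i,j,k} \;=\; \frac{1}{4\pi\epsilon_0} \sum_{i',j',k'} \rho_{i',j',k'}\,\tilde G_{i-i',\,j-j',\,k-k'},
\qquad
\tilde G_{i-i',\,j-j',\,k-k'} \;=\; \int_{\text{cell}\,(i',j',k')} \frac{\mathrm{d}^3x'}{\left|\mathbf{x}_{i,j,k}-\mathbf{x}'\right|},
$$

where the cell integral has a closed-form expression. Evaluating this convolution with zero-padded FFTs (Hockney's method) is what provides the open (free-space) boundary conditions.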
Because this uses the same FFT infrastructure as PSATD, right now, the code needs to be compiled with `-DWarpX_PSATD=ON`.
Also: since the solver uses a global FFT, right now only one MPI rank participates in the FFT solve (but this MPI rank can still use a full GPU); this could be improved later by using heFFTe. The rest of the PIC operations (charge deposition, gather, etc.) are still performed by the different MPI ranks in parallel.
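To make the single-rank FFT point concrete, here is a rough sketch of the gather/solve/scatter pattern described above, using AMReX's `ParallelCopy`. This is not the PR's actual code; the helper name and variables are made up, and the FFT solve itself is elided.

```cpp
#include <AMReX_BoxArray.H>
#include <AMReX_DistributionMapping.H>
#include <AMReX_MultiFab.H>

// Hypothetical helper: gather rho onto one rank, FFT-solve there, scatter phi back.
void solveOpenBCPoissonOnOneRank (amrex::MultiFab const& rho, amrex::MultiFab& phi)
{
    // A single box covering the whole domain is owned by a single MPI rank.
    amrex::BoxArray ba_global( rho.boxArray().minimalBox() );
    amrex::DistributionMapping dm_global( ba_global );

    // Gather the distributed charge density onto that rank.
    amrex::MultiFab rho_global(ba_global, dm_global, 1, 0);
    rho_global.ParallelCopy(rho);

    // On the owning rank: zero-pad, forward FFT, multiply by the integrated
    // Green function, inverse FFT (Hockney's method). Elided in this sketch.
    amrex::MultiFab phi_global(ba_global, dm_global, 1, 0);
    phi_global.setVal(0.); // placeholder for the result of the FFT solve

    // Scatter the potential back to the normal parallel decomposition;
    // the rest of the PIC loop stays fully parallel.
    phi.ParallelCopy(phi_global);
}
```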
Comparison with the Bassetti-Erskine formula (see the test script in this PR):
TODO
- … (`ParallelCopy`)
- `grid_type = collocated` (this could be done with the regular Poisson solver instead of the relativistic one, so that we can output `phi`)
- `warpx.poisson_solver` that can be set to `multigrid`, `fft_based`, etc. + how to handle compatibility with boundary conditions?
  - if electromagnetic simulation with PML + `initialize_self_fields`, then use IGF, which implies open boundaries
  - IGF only compatible with `open` and vice versa
  - multigrid not compatible with `open`
  - more
  - more
- ~~Abort if someone tries to use mesh refinement~~ If mesh refinement is on and the user has selected the IGF solver, then use IGF only on the coarsest level (`lev = 0`) and MLMG in the refined patches
- Output `phi` outside of the global domain (useful for `grid_type = collocated`)
- ~~Save the FFT plans in a permanent object, so that we do not need to recompute them.~~ Probably not needed: from the timers, it seems that the creation and destruction of the FFT plans does not take any significant time.
- `IntegratedGreenFunctionSolver.H`, in `ablastr`
- By calling the function in `VectorPoissonSolver.H` (for the magnetostatic solver)
- Fix the `ParallelCopy` error that appears in `DEBUG` mode: