libsharp discontinued, alternatives for MPI smoothing? #92
PySM relies on libsharp to smooth maps over MPI. Libsharp is not maintained anymore: https://gitlab.mpcdf.mpg.de/mtr/libsharp

Is anyone using PySM over MPI? Should we simplify the code and remove MPI support, or find an alternative to libsharp? (@mreineck, suggestions?) @NicolettaK @giuspugl @bthorne93

Comments
We are doing foreground modelling with PySM at N_side 8192; a single-component map in memory in double precision is 6.5 GB, and the most demanding model has 6 layers, each with 1 IQU map and 2 auxiliary maps, so the inputs alone are ~200 GB. That is a single model; a complete simulation would have 4 galactic models, 3 extragalactic components, and 2 or 3 CMB components. It's not doable yet on standard compute nodes. And we might need N_side 16384.
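For reference, these figures follow directly from the HEALPix pixel count, npix = 12 N_side²; a quick arithmetic check, assuming double precision throughout (nothing here is PySM-specific):

```python
# HEALPix map sizes in double precision (float64, 8 bytes per pixel).
nside = 8192
npix = 12 * nside**2                    # 805,306,368 pixels
gb_per_component = npix * 8 / 1e9       # ~6.4 GB per map component

# Most demanding model: 6 layers x (1 IQU map = 3 components + 2 auxiliary).
n_components = 6 * (3 + 2)
print(f"{gb_per_component:.1f} GB per component")
print(f"{n_components * gb_per_component:.0f} GB for the model inputs")  # ~193 GB
```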
Thanks for the data, that is very helpful!
This should be the fastest way of carrying out this operation. Of course the additional communication makes things more complicated, so I can perfectly understand if this is not your preferred solution. However, doing an SHT, even at nside=8192, lmax=16384, with hybrid MPI/OpenMP (or even worse with pure MPI) parallelization over >100 threads is quite inefficient. If you have the chance to do several SHTs simultaneously on fewer threads each, that would be preferable.
PS. Out of curiosity, if you go to map space for intermediate computations (not for the final data product), have you considered using grids other than HEALPix, e.g. Gauss-Legendre? Depending on the operations you have to carry out, this could be advantageous, since the SHTs are potentially significantly faster and also exact.
@mreineck I don't know much about Gauss-Legendre, do you have a reference I could look into?
If your band limit is lmax, the minimal corresponding Gauss-Legendre grid has lmax+1 iso-latitude rings with 2*lmax+1 pixels each, and quadrature on this grid is exact for band-limited functions.
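To put rough numbers on this, an illustrative comparison (not from the thread), using the common lmax = 2*nside convention:

```python
# Pixel counts: minimal Gauss-Legendre grid vs. HEALPix at comparable
# resolution (assuming the common convention lmax = 2 * nside).
lmax = 16384
nside = lmax // 2                          # 8192
gl_pixels = (lmax + 1) * (2 * lmax + 1)    # lmax+1 rings, 2*lmax+1 pixels each
healpix_pixels = 12 * nside**2
print(f"GL: {gl_pixels:.3e}, HEALPix: {healpix_pixels:.3e}")
# GL: ~5.4e8 pixels vs. HEALPix: ~8.1e8, and GL analysis is exact
```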
thanks @mreineck, do you have an example of using https://mtr.pages.mpcdf.de/ducc/sht.html#ducc0.sht.experimental.synthesis to transform a set of IQU alm into a map in GL pixels and then using analysis to go back to alm? It's really hard to understand how to use it from the API reference.
The good thing about the GL grid is that it is "regular", i.e. every ring has the same number of pixels, so a map can be stored as a plain 2D array.
Its usage is demonstrated, e.g., in https://gitlab.mpcdf.mpg.de/mtr/ducc/-/blob/ducc0/python/demos/sht_demo.py. |
Here it is. (Sorry, Github doesn't appear to let me attach it.)
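The attached script did not survive the export. As a substitute, here is a minimal sketch of the requested round trip (IQU alm → GL map → alm). It assumes ducc0's 2D convenience interface, synthesis_2d/analysis_2d with geometry="GL" (under ducc0.sht.experimental at the time of this thread) and healpy-style triangular alm ordering; lmax and nthreads are arbitrary illustrative choices, and this is not the original demo.

```python
import numpy as np
import ducc0

lmax = 255
ntheta, nphi = lmax + 1, 2 * lmax + 1    # minimal GL grid for this band limit
nalm = (lmax + 1) * (lmax + 2) // 2      # triangular alm layout, mmax = lmax

rng = np.random.default_rng(42)

def random_alm(ncomp, spin):
    """Random band-limited coefficients in healpy ordering."""
    alm = rng.normal(size=(ncomp, nalm)) + 1j * rng.normal(size=(ncomp, nalm))
    alm[:, : lmax + 1].imag = 0.0        # m = 0 entries must be real
    if spin == 2:
        # modes with l < spin must vanish: (l=0,m=0), (l=1,m=0), (l=1,m=1)
        alm[:, [0, 1, lmax + 1]] = 0.0
    return alm

sht = ducc0.sht.experimental

# Temperature (I) is a spin-0 field; Q and U come from the spin-2 transform
# of the (E, B) coefficients, so the two transforms are done separately.
alm_I = random_alm(1, spin=0)
alm_EB = random_alm(2, spin=2)

map_I = sht.synthesis_2d(alm=alm_I, ntheta=ntheta, nphi=nphi,
                         lmax=lmax, spin=0, geometry="GL", nthreads=4)
map_QU = sht.synthesis_2d(alm=alm_EB, ntheta=ntheta, nphi=nphi,
                          lmax=lmax, spin=2, geometry="GL", nthreads=4)

# Going back: analysis on a GL grid is exact for band-limited input,
# so the input coefficients should be recovered to machine precision.
alm_I_back = sht.analysis_2d(map=map_I, lmax=lmax, spin=0,
                             geometry="GL", nthreads=4)
alm_EB_back = sht.analysis_2d(map=map_QU, lmax=lmax, spin=2,
                              geometry="GL", nthreads=4)

print("max |I residual|:", np.max(np.abs(alm_I_back - alm_I)))
print("max |E/B residual|:", np.max(np.abs(alm_EB_back - alm_EB)))
```

Because GL quadrature is exact up to the band limit, the printed residuals should sit at machine-precision level, which is the simplest sanity check that the round trip is set up correctly.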
Discussion about libsharp is complete; discussion about Gauss-Legendre pixelization continues in #91.