
Implement 3D model of dust emission #38

Closed · zonca opened this issue Jan 31, 2020 · 35 comments
@zonca (Member) commented Jan 31, 2020

I got in touch with Ginés Martínez-Solaeche and Jacques Delabrouille about implementing the model "A 3-D model of polarised dust emission in the Milky Way", published at https://arxiv.org/abs/1706.04162

I would first like to complete and merge #37, which we could use as a reference.

It would be nice to know:

  • is there already a Python implementation of the model?
  • would a frequency-based interpolation of existing maps be a suitable implementation?
  • does the model have any specific requirements, e.g. does it need to load huge input datasets?
@delabrou commented

There is no Python implementation yet.
A frequency-based interpolation of existing maps would be a suitable implementation. I produced (hundreds of) maps at frequencies ranging from 1 GHz to 1 THz last year for CMB-S4, with a small Delta_nu, and uploaded them to NERSC, but I cannot find them now. One could interpolate those. They are at nside=512 if I remember correctly.
Alternatively, we could provide the model itself, i.e. the output of the code, which is six polarised maps of dust emission in the six layers at 353 GHz, six maps of spectral indices, and six maps of temperature. Then the code would have to implement the MBB laws to compute the emission at any frequency and add up the six contributions.
There is no specific requirement that I foresee could cause problems.
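The layered model described here maps naturally onto a short routine: sum six modified-blackbody (MBB) components, each with its own amplitude at the 353 GHz reference frequency, spectral index, and temperature. A minimal sketch in plain NumPy, where the function names and array shapes are my own assumptions rather than any PySM API:

```python
import numpy as np

# SI constants
h = 6.62607015e-34   # Planck constant [J s]
k_B = 1.380649e-23   # Boltzmann constant [J / K]

def blackbody_ratio(nu_ghz, nu_ref_ghz, T):
    """B_nu(nu, T) / B_nu(nu_ref, T) for the Planck function."""
    x = h * nu_ghz * 1e9 / (k_B * T)
    x_ref = h * nu_ref_ghz * 1e9 / (k_B * T)
    return (nu_ghz / nu_ref_ghz) ** 3 * np.expm1(x_ref) / np.expm1(x)

def mkd_emission(freq_ghz, ampl, beta, T, freq_ref_ghz=353.0):
    """Sum the six modified-blackbody dust layers, working in MJy/sr.

    ampl, beta, T: arrays of shape (n_layers, npix) holding the per-layer
    amplitude at the reference frequency, spectral index, and temperature.
    """
    total = np.zeros(ampl.shape[1])
    for layer in range(ampl.shape[0]):
        # MBB scaling for intensity units: nu^beta power law times Planck ratio
        scaling = (freq_ghz / freq_ref_ghz) ** beta[layer] * blackbody_ratio(
            freq_ghz, freq_ref_ghz, T[layer]
        )
        total += ampl[layer] * scaling
    return total
```

With the single-pixel values quoted further down the thread (0.06112886 MJy/sr at 353 GHz, beta = 1.640, T = 17.666 K), this reproduces the hand-checked 0.6513 MJy/sr at 857 GHz.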

@zonca (Member, Author) commented Mar 24, 2020

Thanks @delabrou,
I would rather skip the interpolation, also because we want nside 8192.

We have these models for dust:
https://github.com/healpy/pysm/blob/master/pysm/models/dust.py

Do you think it would be feasible to modify one of the classes in dust.py to implement your model?
We would want inputs both at nside 512 and nside 8192.

@delabrou commented Apr 8, 2020

I think it might be doable to modify one of the classes. What we need is just a sum of six blackbodies, i.e. six dust blackbody components.
Inputs at nside 512 should be no problem. We can just give maps of amplitude (IQU), temperature and spectral index for each of the six dust components, corresponding to one realisation of the model. 8192 is challenging. For now I can do 2048 and then we upgrade? Anyway, at very small scales point-source polarisation will dominate...

@zonca (Member, Author) commented Apr 8, 2020

If you can only do nside 2048, no problem, we can use those; PySM will ud_grade them to the requested nside.

So you could start working on a pull request to this repository which adds a class to dust.py (or, better, creates another file), and then add documentation and a link to the paper in https://github.com/healpy/pysm/blob/master/docs/models.rst

@zonca (Member, Author) commented Apr 29, 2020

@delabrou any updates on this?

@delabrou commented

I got back to this recently, I found a bug in my code when Nside is different from 512. I am fixing it now and checking everything. More later.

@zonca (Member, Author) commented Jul 12, 2020

> I got back to this recently, I found a bug in my code when Nside is different from 512. I am fixing it now and checking everything. More later.

@delabrou any progress?

@zonca (Member, Author) commented Apr 3, 2021

from @delabrou:

https://www.apc.univ-paris7.fr/Downloads/Planck/PSM-Data/MKD-MODEL-PySM-2048/
The components/thermaldust/ sub-directory contains maps of IQU, Temperature, and spectral index in the various layers, with straightforward names. In the skyinbands/HFI folder I have observations of the model in three significant Planck frequency bands, for reference and checks.
This specific realization of the model can then easily be implemented in PySM3 by simply summing up the contributions of all six layers, each scaled with an MBB using the corresponding parameters.

There are also nside 256 versions at https://www.apc.univ-paris7.fr/Downloads/Planck/PSM-Data/MKD-MODEL-PanExGal/

@brandonshensley (Contributor) commented

In case it is useful, here is a super simple notebook for evaluating the MKD model at any frequency using the files @delabrou provided.

@delabrou commented Sep 22, 2021 via email

zonca self-assigned this Oct 5, 2021
@zonca (Member, Author) commented Oct 5, 2021

I will create a single model, d12, that uses the nside 2048 maps and the standard MBB scaling as in @brandonshensley's notebook.

@zonca (Member, Author) commented Oct 6, 2021

ok, started to work on this in #87

@zonca (Member, Author) commented Oct 6, 2021

@brandonshensley can you please run your notebook starting from the maps at nside 2048, but ud_grading all of them to nside 8, then evaluate the emission at 100, 353 and 857 GHz? I'll use the results as test cases for the model. Are the templates in uK_RJ and T_dust in u.K?

@zonca (Member, Author) commented Oct 6, 2021

copied templates to NERSC: https://portal.nersc.gov/project/cmb/pysm-data/mkd_dust/

@zonca (Member, Author) commented Oct 7, 2021

@delabrou we discussed this model on the call today.

I was wondering if it would be possible for you to run your model again at 2048, but extract all the parameters necessary to fill in the small scales; then we could have the small scales generated inside PySM on the fly at the required resolution.

For example, this is what we want to implement for one of the GNILC models: see https://gist.github.com/zonca/fa6e78df72230028ac027cf0e338aa74

I save:

  • Large scale Alms
  • modulation maps in amplitude and polarization
  • The C_ell of the small scales (up to ell of 2*8192; we just need to extrapolate the small-scale power spectrum you already have; this also includes suppression of the large scales)

Then use those inputs to generate small scales on the fly.

Do you think we could do something similar?

@delabrou commented Oct 7, 2021 via email

@brandonshensley (Contributor) commented

Hi @zonca, I updated my notebook to include an NSIDE=8 test. All units are MJy/sr (which are the units of the input amplitude maps from @delabrou). Happy to make changes and provide my copies of the output maps if needed. Let me know if this is what you had in mind.

@zonca (Member, Author) commented Oct 13, 2021

yes, @brandonshensley, can you please share your nside8 maps?

@zonca (Member, Author) commented Oct 13, 2021

> yes, @brandonshensley, can you please share your nside8 maps?

nevermind, I just ran your notebook

@zonca (Member, Author) commented Oct 13, 2021

@delabrou @brandonshensley while debugging the implementation I found something strange: the last 3 layers of the maps at nside 2048 are zero:

>>> for i in range(1, nlayers + 1):
...     print(i, hp.read_map('thermaldust/thermaldust_ampl%d.fits' % i)[:5])
1 [0.05139871 0.05129357 0.05620123 0.05712916 0.05806479]
2 [0.01265912 0.01180437 0.01253842 0.01286273 0.01333025]
3 [0.00046295 0.00057229 0.00038549 0.00088876 0.00075269]
4 [0. 0. 0. 0. 0.]
5 [0. 0. 0. 0. 0.]
6 [0. 0. 0. 0. 0.]

@delabrou commented Oct 13, 2021 via email

@zonca (Member, Author) commented Oct 13, 2021

Thanks @delabrou, now I understand: the maps are fine. Sorry, it seemed like a bug to have all those identical zeros.

@zonca (Member, Author) commented Oct 13, 2021

I fixed some issues, but I still get a discrepancy of a few percent, too big to be due to rounding errors. I am worried about the unit conversions.

So in @brandonshensley's notebook, if I consider pixel 0 of the first layer and scale it to 857 GHz:

353 GHz: 0.06112886862365485 MJy/sr
the MBB scaling factor is 10.402229249394816
857 GHz: 0.6358765051793955 MJy/sr

In my code:

First, the input is converted to uK_RJ (I checked this conversion and it is fine):

15.967064316226011 uK_RJ

(freq / freq_ref) ** (mbb_index[i_layer][0] - 2.0)
0.7269558731393342

blackbody_ratio(freq, freq_ref, mbb_temperature[i_layer])[0]
2.4865919382728716

(Pdb) p freq, freq_ref
(857.0, 353.0)

the output of just the first layer is 28.862746358454327 uK_RJ

bandpass_unit_conversion(freq, weights, self.output_unit)
<Quantity [0.02256491] MJy / (sr uK_RJ)>

and the final result is

<Quantity [0.65128514] MJy / sr>

it is about 4% off.
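As a sanity check on the two unit conventions: scaling an intensity in MJy/sr uses a nu^beta power law, while scaling a Rayleigh-Jeans brightness temperature uses nu^(beta-2), because T_RJ = c^2 I_nu / (2 k nu^2); converting the uK_RJ result back to intensity restores the nu^2 factor, so the two routes must agree exactly, and a percent-level difference points to a bug rather than the unit choice. A quick numerical check with the pixel values quoted above, using my own standalone helper rather than the PySM code:

```python
import numpy as np

h = 6.62607015e-34    # Planck constant [J s]
k_B = 1.380649e-23    # Boltzmann constant [J / K]

def bb_ratio(nu, nu_ref, T):
    """B_nu(nu, T) / B_nu(nu_ref, T) for the Planck function."""
    x, x_ref = h * nu / (k_B * T), h * nu_ref / (k_B * T)
    return (nu / nu_ref) ** 3 * np.expm1(x_ref) / np.expm1(x)

nu, nu_ref = 857e9, 353e9
beta, T = 1.6404731832553807, 17.66577373725886

# Intensity (MJy/sr) route: nu^beta power law times Planck ratio
scale_intensity = (nu / nu_ref) ** beta * bb_ratio(nu, nu_ref, T)

# Brightness-temperature (uK_RJ) route: power-law exponent reduced by 2
scale_rj = (nu / nu_ref) ** (beta - 2.0) * bb_ratio(nu, nu_ref, T)

# Converting uK_RJ back to intensity multiplies by (nu / nu_ref)^2,
# so the two routes coincide to machine precision
assert np.isclose(scale_intensity, scale_rj * (nu / nu_ref) ** 2)
```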

@brandonshensley (Contributor) commented

Hi @zonca, trying to get on the same page. For my NSIDE=8 maps, I have:

print(I_nside8[0,0], beta_nside8[0,0], tdust_nside8[0,0])
>> 0.06103307592456986 1.8037480867642444 18.507827861700207

which doesn't quite match the numbers you quote above? Am I looking at the right thing?

@zonca (Member, Author) commented Oct 13, 2021

@brandonshensley I am loading the nside 2048 maps and ud_grading them to nside 8

@zonca (Member, Author) commented Oct 13, 2021

T = 17.66577373725886
beta = 1.6404731832553807

@brandonshensley (Contributor) commented

Ah, OK, I see now that I am working with the NSIDE=512 template maps. Let me try the numerics by hand. =)

@brandonshensley (Contributor) commented

Using that temperature, I recover your MBB factor of 2.4865919382728716 from WolframAlpha to seven digits to the right of the decimal. Sticking to MJy/sr and a 353 GHz flux density of 0.06112886, I arrive at 0.651285 MJy/sr at 857 GHz, which is what you quote above. Not sure if that helps.

@zonca (Member, Author) commented Oct 13, 2021

OK, so doing the scaling in uK_RJ is 4% different from using MJy/sr? Is that reasonable?

@brandonshensley (Contributor) commented

No, they really should be identical. 0.651285 MJy/sr at 857 GHz is what I get doing it by hand, so I believe that is correct. Is the 0.635876 MJy/sr you quote above using my code? If so, there must be something amiss in my implementation (hard-coded units?).

@zonca (Member, Author) commented Oct 13, 2021 via email

@brandonshensley (Contributor) commented

Mea culpa, I swapped nu and T in my B_nu function:

B_nu(temp, nu)/B_nu(temp,nu_ref)

instead of

B_nu(nu, temp)/B_nu(nu_ref, temp)

That should fix it.
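The swap is easy to reproduce numerically. With the arguments reversed, both Planck-function calls sit deep in the Rayleigh-Jeans regime (the "frequency" is ~18 Hz and the "temperature" is ~10^11 K), so expm1(x) ≈ x, the nu^3 prefactors cancel, and the swapped ratio collapses to simply nu/nu_ref ≈ 2.428 instead of the correct 2.487, which is exactly the discrepancy seen above. A hypothetical standalone B_nu illustrating both (keyword arguments would make this class of bug impossible):

```python
import numpy as np

h = 6.62607015e-34    # Planck constant [J s]
k_B = 1.380649e-23    # Boltzmann constant [J / K]
c = 2.99792458e8      # speed of light [m / s]

def B_nu(nu, T):
    """Planck function [W m^-2 Hz^-1 sr^-1]; argument order is (frequency, temperature)."""
    return 2 * h * nu**3 / c**2 / np.expm1(h * nu / (k_B * T))

nu, nu_ref, T = 857e9, 353e9, 17.66577373725886

correct = B_nu(nu, T) / B_nu(nu_ref, T)   # the intended Planck ratio
swapped = B_nu(T, nu) / B_nu(T, nu_ref)   # the bug: arguments reversed

# swapped: the nu^3 prefactors cancel and expm1(x) ~ x for tiny x,
# leaving the ratio of the misused "temperatures": 857 / 353
```

Multiplying by the (nu/nu_ref)^beta power law turns the swapped ratio into the buggy scaling factor 10.40 quoted above, versus 10.65 for the correct one.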

@zonca (Member, Author) commented Oct 14, 2021

OK, now the tests pass; the model at 2048 is ready. #87

@zonca (Member, Author) commented Oct 14, 2021

d12 preset definition is in dc939dc

@zonca (Member, Author) commented Feb 23, 2022

completed in #87

zonca closed this as completed Feb 23, 2022