Implement HpxNDMap.convolve() #3323
Conversation
Codecov Report
@@ Coverage Diff @@
## master #3323 +/- ##
==========================================
- Coverage 93.83% 93.78% -0.06%
==========================================
Files 144 144
Lines 18050 18198 +148
==========================================
+ Hits 16938 17067 +129
- Misses 1112 1131 +19
Continue to review full report at Codecov.
Thanks a lot @LauraOlivera, I have left some first comments. I'll check out the functionality locally now...
gammapy/maps/hpxnd.py
Outdated
    map : `HpxNDMap`
        Convolved map.
    """
    if self.geom.width < 10 * u.deg:
I think the number "10 deg" should be introduced as a class-level variable, like:
class HpxNDMap:
    convolve_wcs_max_width
This allows users in general to override it, in case they know what they are doing...
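The override pattern suggested above could look like the following minimal sketch. This is plain Python with no gammapy dependency; the class body and the `convolve_wcs` stub are placeholders, and only the attribute name `convolve_wcs_max_width` comes from the comment:

```python
# Minimal sketch of a class-level threshold that users can override.
# `HpxNDMap` here is a stand-in for the real gammapy class; only the
# override pattern is illustrated.

class HpxNDMap:
    # Maximum partial-map width (deg) for which the WCS-projection
    # convolution is considered accurate. Class-level, so users can
    # override it globally or per instance.
    convolve_wcs_max_width = 10.0

    def __init__(self, width):
        self.width = width

    def convolve_wcs(self, kernel=None):
        if self.width >= self.convolve_wcs_max_width:
            raise ValueError(
                f"Map width {self.width} deg exceeds "
                f"convolve_wcs_max_width={self.convolve_wcs_max_width}"
            )
        return "convolved"


# Users who know what they are doing can relax the limit:
HpxNDMap.convolve_wcs_max_width = 20.0
```

Because the threshold is looked up on the class at call time, changing it once affects all maps, while an instance attribute would override it for a single map only.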
Or maybe even better: alternatively, we could just say all partial maps are supported and make a statement in the docstring on the accuracy for larger maps (>~10 deg).
I agree. My idea was that ideally we would have this method (fine for small maps) and then a full-sky one that involves stacking a partial map into a full-sky map if needed (which I expect to be slower, but if it isn't then I need to rethink things). If you try to use convolve_wcs
on a partial map that is too large, you would just get a warning suggesting that the other one is more suited, or something like that. But it needs some more thought/work.
gammapy/maps/hpxnd.py
Outdated
""" | ||
if self.geom.width < 10*u.deg: | ||
# Project to WCS and convolve | ||
wcs_map = self.to_wcs(fill_nan = False) |
I think .to_wcs() takes a pre-computed mapping as well. There might be a performance improvement if the HpxToWcsMapping is created outside and re-used for both cases, i.e. .to_wcs() as well as the later inverse operation.
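The reuse described above can be sketched generically with pure numpy (no gammapy): build the pixel index lookup once, then use the same table both to fill the WCS image from HEALPix data and to fill the HEALPix map back afterwards. The toy lookup table `hpx_index` stands in for the real `HpxToWcsMapping`:

```python
import numpy as np

# Toy stand-in for HpxToWcsMapping: a flat lookup table giving, for
# every WCS pixel, the index of the HEALPix pixel that covers it.
# In gammapy this mapping would be built once and passed both to
# .to_wcs() and to the inverse fill, instead of being computed twice.
rng = np.random.default_rng(0)
n_hpx, n_wcs = 16, 64
hpx_index = rng.integers(0, n_hpx, size=n_wcs)   # computed ONCE

hpx_data = rng.random(n_hpx)

# Forward fill: WCS image from HEALPix data
# (cf. fill_wcs_map_from_hpx_data).
wcs_data = hpx_data[hpx_index]

# ... the convolution would act on wcs_data here ...

# Inverse fill: average the WCS pixels that map to each HEALPix pixel
# (cf. the new fill_hpx_map_from_wcs_data), reusing the SAME table.
sums = np.bincount(hpx_index, weights=wcs_data, minlength=n_hpx)
counts = np.bincount(hpx_index, minlength=n_hpx)
hpx_back = np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```

Without a convolution in between, the round trip reproduces the original HEALPix values wherever at least one WCS pixel maps back, which is a convenient sanity check for the inverse fill.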
I think this even offers the possibility to adjust the WCS geometry used for the conversion and convolution to the WCS geometry of the kernel that is passed in. This eliminates the need to compute a matching kernel outside. I haven't thought this through completely, but I think it works...
Oh, you are right! That is a very good point! I'll try that out.
It works, but I am seeing a lot more issues with artifacts, which only go away if the wcs_geom binning is very fine. The nice thing about using .to_wcs() is that it guaranteed the oversampling... I guess it could be fixed by requiring a minimum ratio of WCS pixel size to HEALPix pixel size...
%%time
# Imports added for completeness, assuming the current gammapy layout:
import astropy.units as u
from gammapy.maps import HpxGeom, Map, MapAxis, WcsGeom
from gammapy.irf import PSFKernel

energy = MapAxis.from_bounds(1, 100, unit="TeV", nbin=2, name="energy")

nside = 1024
hpx_geom = HpxGeom.create(nside=nside, axes=[energy], region="DISK(0,0,2.5)")
hpx_map = Map.from_geom(hpx_geom)
hpx_map.set_by_coord((0, 0, [2, 90]), 1)

wcs_geom = WcsGeom.create(width=5, binsz=0.05, axes=[energy])
kernel = PSFKernel.from_gauss(wcs_geom, 0.5 * u.deg)

convolved_map = hpx_map.convolve_wcs(kernel)
convolved_map.plot_interactive(add_cbar=True)
Thanks for checking! Indeed we should require the oversampling to avoid the artefacts. As an educated guess, I'd say an oversampling of >2 should be sufficient. So +1 to just including the check and documenting it.
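The check could be as simple as comparing the mean HEALPix pixel size with the WCS bin size. A pure-numpy sketch follows; the function names are hypothetical, and the factor 2 is the educated guess from the comment above:

```python
import numpy as np

def hpx_oversampling(nside, wcs_binsz_deg):
    """Ratio of HEALPix pixel size to WCS pixel size.

    The mean HEALPix pixel solid angle is 4*pi / (12 * nside**2) sr;
    its square root gives an effective pixel width in radians.
    """
    hpx_size_deg = np.degrees(np.sqrt(4 * np.pi / (12 * nside**2)))
    return hpx_size_deg / wcs_binsz_deg

def check_oversampling(nside, wcs_binsz_deg, min_ratio=2.0):
    """Require the WCS grid to oversample the HEALPix grid by > min_ratio."""
    if hpx_oversampling(nside, wcs_binsz_deg) < min_ratio:
        raise ValueError(
            f"WCS bin size {wcs_binsz_deg} deg undersamples the "
            f"HEALPix grid (nside={nside}); use a finer binning."
        )
```

For nside=1024 the effective HEALPix pixel width works out to roughly 0.057 deg, so a 0.05 deg WCS binning (as in the example above) would fail a >2 check, consistent with the artifacts seen.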
I added another method. Because the convolution happens in a full-sky map, the performance is worse if the pixelization is fine. That is why, even if the input map is "small", for a usual analysis pixel size (nside = 1024 is the standard for HAWC, which corresponds to a 0.025 deg pixel size), this method is much slower.
So my suggestion is to implement the first method for now. I also compared the maps produced above with one another.
Thanks a lot for investigating @LauraOlivera! From your results it seems there is hardly any situation where the full-sky method would be preferred.
Thanks @LauraOlivera, for now I just left two minor comments. Let's discuss in the next co-working meeting on the details...
@LauraOlivera To keep the scope of this PR limited, maybe just implement convolve_wcs() for now.
Thanks a lot @LauraOlivera! I think the code could use some polishing, however it's not a priority right now. Let's merge the PR and I'll prepare a follow-up PR to do the clean-up, while you and @vikasj78 can continue to work on the npred computation for HEALPix...
This pull request implements several methods required for the convolution of a HealPix map with a kernel. There are several options to do this: the kernel being a WCS or HPX map, and the convolution happening in a small cutout or in the full-sky map.

For now, I've implemented convolve_wcs(), which is the "simplest" of these approaches in my opinion. For a "small map" (I set a limit of 10 degree width, but we can also refine that), it uses the HpxToWcsMapping to project the initial HpxNDMap into a WcsNDMap, convolves with a kernel using the existing WcsNDMap.convolve(), and makes use of the mapping to go back to the initial HpxGeom. To make this work, I implemented a method HpxToWcsMapping.fill_hpx_map_from_wcs_data(), which just does the inverse of the existing HpxToWcsMapping.fill_wcs_map_from_hpx_data(). Testing for this is not implemented yet. (TO DO)

The caveat is that the geometry of the kernel has to be compatible with the intermediate WcsGeom, and so the kernel has to be extracted using hpx_geom.to_wcs_geom(), where hpx_geom is the initial map geometry. That means that if this were used for PSF convolution in a MapDataset with HealPix geometry, the geometry of the PSFKernel (and Map?) would need to be different from the rest of the components.

TO DO: The next step would be, starting from an initial kernel in an unspecified geometry, to extract the radial profile of the kernel in angular space (assumed symmetric), convert it to a beam function using healpy.sphtfunc.beam2bl(), and then make use of healpy.sphtfunc.smoothing(). This happens in a full-sky map, which would require a cutout to be stacked into a full-sky map as an initial step. I suspect this will be much slower than the case above, but I have to check. For a kernel also in an HpxMap, there might be faster options.
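The first step of that TO DO, extracting an azimuthally averaged radial profile from a symmetric 2D kernel, could be sketched like this. This is pure numpy and the function name is hypothetical; in the real implementation, the resulting angles and profile would then be fed to healpy.sphtfunc.beam2bl() and healpy.sphtfunc.smoothing(), which are not shown here:

```python
import numpy as np

def radial_profile(kernel, binsz_deg):
    """Azimuthally averaged radial profile of a symmetric 2D kernel.

    Returns (theta_deg, profile): bin-center angles measured from the
    kernel center, and the mean kernel value in each radial bin.
    """
    ny, nx = kernel.shape
    y, x = np.indices((ny, nx))
    # Angular distance of each pixel from the kernel center, in deg
    r = np.hypot(x - (nx - 1) / 2, y - (ny - 1) / 2) * binsz_deg
    nbins = max(nx, ny) // 2
    edges = np.linspace(0, r.max(), nbins + 1)
    idx = np.clip(np.digitize(r.ravel(), edges) - 1, 0, nbins - 1)
    sums = np.bincount(idx, weights=kernel.ravel(), minlength=nbins)
    counts = np.bincount(idx, minlength=nbins)
    profile = sums / np.maximum(counts, 1)
    theta = 0.5 * (edges[:-1] + edges[1:])
    return theta, profile

# Example: the profile of a symmetric Gaussian kernel should fall off
# monotonically with angle.
binsz = 0.05
ax = (np.arange(101) - 50) * binsz
xx, yy = np.meshgrid(ax, ax)
gauss = np.exp(-(xx**2 + yy**2) / (2 * 0.5**2))
theta, prof = radial_profile(gauss, binsz)
```

The symmetry assumption matters: any azimuthal structure in the kernel is averaged away at this step, which is exactly the approximation the beam-function approach makes.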