
Fix SkyPointSource evaluation #2367

Merged
merged 2 commits into from Sep 21, 2019

Conversation

@cdeil (Member) commented Sep 20, 2019

In #2366 I removed the lon wrapping in the sky model init, and noticed that there's one place in the Gammapy code where we do use lon wrapping that might break with that change: SkyPointSource evaluation:

import numpy as np
import astropy.units as u
from astropy.coordinates import Angle

def evaluate(lon, lat, lon_0, lat_0):
    """Evaluate model."""
    wrapval = lon_0 + 180 * u.deg
    lon = Angle(lon).wrap_at(wrapval)
    _, grad_lon = np.gradient(lon)
    grad_lat, _ = np.gradient(lat)
    lon_diff = np.abs((lon - lon_0) / grad_lon)
    lat_diff = np.abs((lat - lat_0) / grad_lat)
    lon_val = np.select([lon_diff < 1], [1 - lon_diff], 0) / np.abs(grad_lon)
    lat_val = np.select([lat_diff < 1], [1 - lat_diff], 0) / np.abs(grad_lat)
    return lon_val * lat_val

I don't fully understand the implementation, but presumably it's some spherical variant of this:
https://docs.gammapy.org/0.6/_modules/gammapy/image/models/models.html#Delta2D.evaluate
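My reading (not a confirmed description) is that each of the lon / lat factors is a 1-D triangular kernel of one-pixel width around the source position, normalized by the pixel size so that the total flux integrates to one. A minimal 1-D sketch of that scheme:

```python
import numpy as np

def triangular_weights(x, x0, dx):
    """Triangular kernel of width dx around x0, normalized to unit integral.

    Sketch of what the lon_val / lat_val factors appear to compute
    (an assumption, not the actual Gammapy code).
    """
    diff = np.abs((x - x0) / dx)
    return np.where(diff < 1, 1 - diff, 0) / dx

x = np.arange(6.0)  # pixel centers with spacing dx = 1
w = triangular_weights(x, 2.3, 1.0)
# the two bracketing pixels get weights 0.7 and 0.3, and w.sum() * dx == 1
```

If that reading is right, the 2-D value is just the product of the two 1-D kernels, which is why the test below can assert that the map sums to 1.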

The only test we have is this:

def test_sky_point_source():
    model = SkyPointSource(lon_0="2.5 deg", lat_0="2.5 deg")
    lat, lon = np.mgrid[0:6, 0:6] * u.deg
    val = model(lon, lat)
    assert val.unit == "deg-2"
    assert_allclose(val.sum().value, 1)
    radius = model.evaluation_radius
    assert radius.unit == "deg"
    assert_allclose(radius.value, 0)
    assert model.frame == "galactic"
    assert_allclose(model.position.l.deg, 2.5)
    assert_allclose(model.position.b.deg, 2.5)

@adonath or anyone that wants to work on this:

  • Probably this current implementation leads to incorrect results for maps where the solid angle isn't given by the lon / lat diff (see #2276)
  • Is the implementation correct for all positions of the source within a pixel, like center, edge, corner, or 1% or 0.01% inside? Or are there positions where numerical problems arise?
  • Is all flux put in one pixel? Or is it distributed to preserve the centroid?
  • Should we keep and improve this algorithm, or choose a completely new one, like finding the closest pixel and then distributing flux in a centroid-preserving way into that pixel, one to the side, and one up or down?
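To illustrate the "centroid-preserving" option in the last bullet, here's a hedged 1-D sketch (an assumed scheme, not an agreed design): split unit flux between the two nearest pixel centers with linear weights, so the flux-weighted mean position equals the source position exactly.

```python
import numpy as np

def centroid_preserving_split(x0, pixel_centers):
    """Split unit flux between the two pixel centers bracketing x0.

    Linear weights guarantee sum(w) == 1 and sum(w * centers) == x0.
    (Illustrative sketch only; assumes x0 lies strictly inside the grid.)
    """
    i = np.searchsorted(pixel_centers, x0)       # first center >= x0
    left, right = pixel_centers[i - 1], pixel_centers[i]
    f = (x0 - left) / (right - left)
    weights = np.zeros_like(pixel_centers)
    weights[i - 1], weights[i] = 1 - f, f
    return weights

centers = np.arange(6.0)
w = centroid_preserving_split(2.3, centers)
# centroid check: (w * centers).sum() recovers 2.3 exactly
```

In 2-D this becomes the product of a lon split and a lat split, i.e. flux in up to four pixels around the source.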

I think in any case, the test for this complex and special model should be improved, e.g. to also put the source at lon = 0 or 180 deg, at a pixel center or corner, or on a WCS where the lon / lat pixel diff doesn't give the correct solid angle (or do we have that kind of test already and I just missed it?). About half of all analyses use a point source, so it's really important that we give correct results in those cases.

If we stick with the current implementation, we can probably improve it a bit, e.g. use the axis option of numpy.gradient, or call np.where instead of np.select to save some [...] and visual clutter on those lines.
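A sketch of those two micro-improvements on a plain 2-D grid, assuming they are drop-in replacements (I haven't checked against the real model):

```python
import numpy as np

# plain 6x6 grid in deg; lon varies along axis 1, lat along axis 0
lat, lon = np.mgrid[0:6, 0:6].astype(float)

# axis= option instead of unpacking the full gradient tuple
grad_lon = np.gradient(lon, axis=1)
grad_lat = np.gradient(lat, axis=0)

# np.where instead of np.select([cond], [val], default)
lon_diff = np.abs((lon - 2.5) / grad_lon)
lon_val = np.where(lon_diff < 1, 1 - lon_diff, 0) / np.abs(grad_lon)
# each row distributes weights 0.5 / 0.5 to the two pixels bracketing 2.5
```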

I don't have the answer here, and I don't know how other frameworks handle the point source case. Possibly dealing with this in a good way requires changing the spatial model interface completely to evaluate_on_geom, because only then is the relevant pixel information present?
If that's the case - let's do that?

@cdeil cdeil added the bug label Sep 15, 2019
@cdeil cdeil added this to the 0.14 milestone Sep 15, 2019
@cdeil cdeil added this to To do in Modeling via automation Sep 15, 2019
@cdeil (Member Author) commented Sep 16, 2019

Here's a notebook where I did some checks:
https://nbviewer.jupyter.org/gist/cdeil/3d721e33df65f507a46c1a395b3164b0

I think the incorrect fluxes at high latitude are a real issue that we have to resolve?

I was expecting that the lon_diff < 1 and lat_diff < 1 checks in the current implementation, combined with the gradients, could for some sub-pixel positions lead to completely incorrect results where the point source "falls through" and one gets zero or half the correct flux, in thin stripes at pixel edges, because the gradient is slightly different from one pixel to the next. I didn't find this severe bug yet; not sure if it exists or not.

Still, my recommendation and strong preference would be that we change sky models to work in the local WCS approximation at the source center, and evaluate using pixel coordinates. It's simpler, faster, consistent with how regions are implemented, and consistent with the fact that we already do PSF convolution on a cartesian grid, and that will remain. Yes, we wouldn't get correct results in images where the WCS is extremely distorted, but that's the case anyway, because PSF convolution doesn't work properly there (and never will).

@adonath @registerrier - Thoughts?

@cdeil (Member Author) commented Sep 16, 2019

@lmohrmann - Did you use SkyPointSource for the HESS validation paper?

I think with the current implementation, point source fluxes should be incorrect, underestimated by a factor cos(latitude). (Is this correct, or am I misunderstanding what's going on?)

For Crab at DEC=22deg that's only 7%, and for PKS at DEC=-30deg only 13%, so the error was by chance within HESS systematics. At higher DEC the effect would be stronger, e.g. 30% at DEC=45deg, so possibly other AGN analyses, or also Galactic analyses if someone used RA/DEC maps, could have been incorrect.
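The percentages quoted above follow from the 1 - cos(DEC) factor; a quick check:

```python
import numpy as np

# relative flux underestimation if the cos(latitude) factor is missing
for dec in [22, 30, 45]:
    err = 1 - np.cos(np.radians(dec))
    print(f"DEC={dec} deg: {err:.1%}")
# gives roughly 7%, 13% and 29% for DEC = 22, 30 and 45 deg
```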

There's also inter-pixel variations in the results that I don't fully understand yet, see e.g. the image in cell 22 here: https://nbviewer.jupyter.org/gist/cdeil/c4f978a86f53aef8c49b583d3e830540

It could be that for Gammapy v0.12 the bug wasn't present, or was smaller, because the pixel solid angle was computed incorrectly in the Map.solid_angle method (fixed in #2276 and shipped in Gammapy v0.13) as well as in the SkyPointSource.evaluate method, and the effects either completely or mostly cancelled.

I discussed this with @adonath at lunch today - the idea is to fix SkyPointSource evaluation this week, so that it's corrected in v0.14 next week. Not sure yet how to change the spatial model evaluation exactly. The options discussed so far are to special-case SkyPointSource evaluation in MapEvaluator, and / or to change the interface for all spatial models to evaluate_on_geom, which would allow better evaluation / integration, because pixel edge & solid angle info is then available (and can be cached on the geom object, available in an efficient way). Either way, the current SkyPointSource.evaluate with the buggy division by an incorrect pixel solid angle wouldn't be present any more.
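A hypothetical sketch of why an evaluate_on_geom-style interface helps (names and scheme are my assumptions, not the merged code): with the geometry in hand, the model can normalize by the true per-pixel solid angle instead of deriving it from lon / lat gradients.

```python
import numpy as np

class SimpleGeom:
    """Toy stand-in for a map geometry exposing pixel centers and
    per-pixel solid angle - the information evaluate_on_geom would need."""
    def __init__(self, lon, lat, solid_angle):
        self.lon, self.lat, self.solid_angle = lon, lat, solid_angle

def point_source_on_geom(geom, lon_0, lat_0):
    # simplest variant: put all flux in the nearest pixel,
    # normalized by that pixel's solid angle
    dist2 = (geom.lon - lon_0) ** 2 + (geom.lat - lat_0) ** 2
    idx = np.unravel_index(np.argmin(dist2), dist2.shape)
    values = np.zeros_like(geom.lon)
    values[idx] = 1 / geom.solid_angle[idx]
    return values

lat, lon = np.mgrid[40:46, 0:6].astype(float)   # 1-deg pixels at mid latitude
solid_angle = np.cos(np.radians(lat))            # approx solid angle in deg2
geom = SimpleGeom(lon, lat, solid_angle)
values = point_source_on_geom(geom, 2.3, 42.6)
# the integral over the map, values * solid_angle, recovers unit flux
```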

@registerrier @adonath or anyone interested - thoughts?

@lmohrmann (Contributor) commented Sep 17, 2019

@cdeil
Yes, we used SkyPointSource in the validation paper (for the Crab and PKS 2155, obviously). The results are pretty consistent (in fact a bit higher compared to the other tools), so I guess you might be right and the problem was not present in v0.12.

@cdeil (Member Author) commented Sep 20, 2019

@adonath and I did some pair coding -- a regression test for the point source and a fix are now attached, using the evaluate_geom solution for spatial models discussed above.

We suggest merging this now as-is; then @adonath plans to continue improving the geom & coords caching introduced in #2377 - probably it needs to be added for geom.to_image.

@cdeil cdeil changed the title Improve SkyPointSource evaluation and tests Fix SkyPointSource evaluation Sep 20, 2019
@cdeil (Member Author) commented Sep 20, 2019

@adonath - I think https://nbviewer.jupyter.org/gist/cdeil/c4f978a86f53aef8c49b583d3e830540 is now giving correct results, but it's slower than a few days ago (started 20 min ago, still running). I was expecting it to be about the same. But I'm not sure how long it really ran previously; maybe it was very slow already.

The test notebook runtimes seem OK:

INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/spectrum_simulation.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 12.1 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/detect_ts.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 19.5 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/image_analysis.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 23.4 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/hess.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 28.8 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/fermi_lat.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 19.4 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/intro_maps.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 6.8 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/cta_sensitivity.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 5.2 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/cwt.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 17.0 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/image_fitting_with_sherpa.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 35.3 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/light_curve.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 26.6 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/sed_fitting_gammacat_fermi.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 6.3 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/spectrum_fitting_with_sherpa.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 5.8 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/analysis_3d_joint.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 63.5 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/cta_1dc_introduction.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 17.5 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/background_model.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 11.6 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/spectrum_analysis.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 15.2 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/mcmc_sampling.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 28.0 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/astro_dark_matter.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 9.9 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/source_population_model.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 6.1 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/hgps.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 9.4 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/first_steps.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 7.3 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/cta_data_analysis.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 31.6 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/simulate_3d.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 20.9 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/analysis_3d.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 107.8 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED
INFO:gammapy.scripts.jupyter:   ... EXECUTING temp/pulsar_analysis.ipynb
INFO:gammapy.scripts.jupyter:   ... Executing duration: 7.3 seconds
INFO:gammapy.scripts.jupyter:   ... PASSED

My suggestion would be to merge this soon and continue; if there are performance issues somewhere, we'll track them down in the next days.

@adonath (Member) commented Sep 21, 2019

Thanks @cdeil! I'll merge now and prepare a follow-up with some cleanup of the caching logic in the MapEvaluator.

@adonath adonath merged commit ae2c95a into gammapy:master Sep 21, 2019
8 of 9 checks passed
Codacy/PR Quality Review: Not up to standards. This pull request quality could be better.
Scrutinizer: Analysis: 6 updated code elements – Tests: passed
continuous-integration/travis-ci/pr: The Travis CI build passed
gammapy.gammapy: Build #20190920.8 succeeded
gammapy.gammapy (DevDocs): DevDocs succeeded
gammapy.gammapy (Lint): Lint succeeded
gammapy.gammapy (Test Python36): Test Python36 succeeded
gammapy.gammapy (Test Windows36): Test Windows36 succeeded
gammapy.gammapy (Test Windows37): Test Windows37 succeeded
Modeling automation moved this from To do to Done Sep 21, 2019
@adonath adonath referenced this pull request Sep 21, 2019
Projects: Modeling (Done)
3 participants