1.2i validation updates #259
Great stuff, Javier. Thanks! My best guess for the g-band astrometry errors is differential chromatic refraction (DCR). That's stronger in g-band than in redder bands. The direction of the effect is to push away from zenith, so it's not natively an RA effect. But it's possible that for the 100 visits here, the mean effect is in the RA direction primarily. Especially if the observations are near Dec = -30 and are not symmetrical between positive and negative HA. |
That data is surprisingly shallow. Is it taken under bad conditions? What reference catalogue are you using for astrometry? Is there an effect as a function of colour? |
It looks like there are just 2 visits in g-band and 1 visit in i-band being considered here. The numbers ~100 and ~300 must refer to sensor-visits. |
I'd still expect to go a good deal deeper than 20--22 per visit |
@RobertLuptonTheGood yes, I am surprised too. I checked the background levels and, for the i-band visit that I tested, the mean background is ~1650 while the fiducial sky level is 1150. Using equation 6 in Ivezic et al. 2008 this means a limiting magnitude ~0.2 brighter. The fiducial limiting magnitude according to this link is 23.9 (so in our case it should be ~23.7) and I think we are still far away from that. For the g-band visits, both have ~400 ADU for sky, which is around the fiducial value. The fiducial depth is 24.8 and I would say that this is not the case that we are seeing here... Any ideas @rmjarvis @cwwalter @jchiang87? |
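The arithmetic above can be checked with a minimal sketch, assuming background-dominated noise so that the noise scales as sqrt(B) and the 5-sigma depth shifts by -1.25 log10(B/B_fid) (following eq. 6 of Ivezic et al. 2008; the function name is illustrative):

```python
import math

def depth_shift(bkg, bkg_fid):
    """Shift in the 5-sigma limiting magnitude when the sky background
    rises from bkg_fid to bkg, assuming background-dominated noise:
    sigma scales as sqrt(B), so m5 changes by -1.25 * log10(B / B_fid)."""
    return -1.25 * math.log10(bkg / bkg_fid)

# i-band visit above: mean background ~1650 ADU vs fiducial 1150 ADU
dm_i = depth_shift(1650, 1150)   # ~ -0.20 mag, i.e. ~0.2 mag shallower
# g-band visits: ~400 ADU, around the fiducial value, so no shift expected
dm_g = depth_shift(400, 400)     # 0.0
```

This reproduces the ~0.2 mag figure quoted above (23.9 fiducial becoming ~23.7).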
I'm providing links to the DESCQA runs in these visits for the image 1D histograms and power spectra. Please, ignore the plot titles since I didn't update them: |
I'm quite sure I don't understand magnitudes or zero points or possibly the difference between completeness magnitudes and limiting magnitudes. But, that said, it looks like we are complete to 22 and probably limiting somewhere around 25 in g-band according to the "Detection efficiency for stars" plot. Is that inconsistent with the fiducial values? |
Good eye @rmjarvis. Yes, the g-band limiting magnitude is ~25 (consistent with the fiducial values) for one visit. This is the 1D histogram: And this is the HEALPix map: |
I can easily imagine 0.34 magnitudes shallow being due to less than optimal choices for various parameters in the object detection step. I'm more familiar with SExtractor's parameters (which I don't claim to really understand -- just that I have more familiarity with them), but I assume there are analogous parameters to its N sigma above the noise and how many contiguous detected pixels are required. With SExtractor at least, poor choices for these can lead to significantly fewer detections. So, do we know what values were used here? And whether these can be tweaked to maybe probe slightly fainter objects? |
If I read that right, we require at least 1 pixel at 5 sigma. This won't ever detect a 5-sigma object then, since they always (since the typical PSF spans several pixels) have signal over more than 1 pixel. I suspect the "fiducial limiting magnitude" is in terms of 5-sigma point sources, so we're probably not hitting that. In DES, we usually set the threshold to around 1.4 sigma and require something like 6 contiguous pixels. This ends up detecting quite a few spurious objects, which we throw out downstream. But better to "detect" some noise fluctuations and remove them later than not detect some real but faint objects. |
Interesting. In MUSYC, which had similar depth to LSST but somewhat worse seeing, we ran Source Extractor in a reasonably common mode where you first convolve the image with the PSF and then detect as objects any single pixel with a significance level of ~1.5, which corresponded to a 5 sigma single pixel pre-convolution. That's a good method for detecting point sources while reducing contamination from detector noise. If no such convolution is being performed, it makes much more sense to require several contiguous pixels at individually modest significance levels like @rmjarvis mentioned for DES. It wasn't clear to me from this discussion if we're doing the convolution or not, but if we are, the single pixel requirement should be much lower than 5 sigma in units of the post-convolution S/N. |
Actually, I forgot about the PSF convolution. We do that too in DES. I guess that means a 5-sigma point source would usually have a single pixel ending up at least near 5 sigma. (Maybe only at 5 sigma if the star was centered on the center of a pixel? Not sure.) Anyway, I guess the low-threshold, multiple contiguous pixels bit is probably more for detecting galaxies then. For stars, you could probably get away with only a little lower than 5 sigma in a single pixel. |
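The point about per-pixel thresholds versus matched-filter significance can be illustrated numerically. This is a toy setup (a Gaussian PSF on unit-variance pixel noise; the PSF size and grid are arbitrary choices, not anything from the actual pipeline): a source whose total matched-filter S/N is 5 peaks well below 5 sigma in any single raw pixel, but reaches 5 sigma at the center of the PSF-convolved (likelihood) image:

```python
import numpy as np

# Toy setup: Gaussian PSF, unit-variance white noise per pixel (illustrative only)
sigma_psf, n = 2.0, 33
y, x = np.mgrid[:n, :n] - n // 2
psf = np.exp(-(x**2 + y**2) / (2 * sigma_psf**2))
psf /= psf.sum()

# Choose the flux so the optimal (matched-filter) S/N is exactly 5
flux = 5.0 / np.sqrt((psf**2).sum())
img = flux * psf                      # noiseless source image

peak_raw = img.max()                  # peak-pixel significance in the raw image (~1.4 sigma)
# Matched-filter statistic at the true center; its noise rms is sqrt(sum(psf^2))
peak_filtered = (img * psf).sum() / np.sqrt((psf**2).sum())   # = 5 by construction
```

So a single-pixel 5-sigma threshold applied to the raw image misses this source entirely, while the same threshold applied after PSF convolution (marginally) detects it, consistent with the discussion above.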
@jchiang87 made a good point about this in the data access telecon this morning. I didn't check the airmass nor the seeing in those visits (and one of them has pretty high airmass) so, the ~0.4 limiting magnitude difference is possibly due to this. I'll make the calculation including all observing conditions to make sure that it makes sense. |
A single pixel over threshold in the likelihood image (i.e. the PSF-convolved image) is enough to detect point sources, hence the choice. I agree with you about there being a slight bias towards detecting things centred in a pixel. |
@rmjarvis the expected depth for the g-band visit above is ~24.51 (159494) so what we get (24.46) is not that far from this as you first said. So the question left is about the stellar density/completeness. |
Is 3 magnitudes between limiting magnitude and completeness magnitude reasonable? Sounds like a lot to me. 3 mag brighter than 5 sigma should be 80 sigma (16 x 5). I would have thought we would be pretty complete for point sources at significantly lower S/N than this. |
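The flux arithmetic behind the "80 sigma" figure above, as a quick check:

```python
# A 3 mag difference corresponds to a flux ratio of 10**(3/2.5) ~ 15.85,
# so an object 3 mag brighter than a 5-sigma detection sits at roughly
# 5 * 16 ~ 80 sigma.
flux_ratio = 10 ** (3 / 2.5)
snr = 5 * flux_ratio          # ~79
```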
I think there is indeed a disconnect here worth further discussion. I think @rmjarvis is referring to the plot early in this issue of "Detection efficiency for stars" which starts to roll down at mag |
Apologies to all. I discovered that I was testing the purity of the S/G classifier convolved with the detection efficiency instead of the pure detection efficiency since I was adding the requirement that |
Below, I am attaching a set of slides with plots similar to those above: In general, 1.2i looks good. For z and y bands, there are many exposures with the Moon above the horizon and high background levels. For those images, the objects seem to leak into the background (or the background is being over-subtracted), biasing the photometry a bit and lowering the depth with respect to what's expected. From the discussion in #desc-dm-dc2 it seems that this is not a showstopper and may be fine-tuned in later stages. I will compare these visits to their PhoSim counterparts to see if there's something useful that we can learn from it. I haven't checked galaxy shapes yet. |
Do you mean something beyond the degradation in the S/N ratio from the increased background level? |
@wmwv I don't really know. The problem that I am seeing is that the histogram with the magnitude difference between input (reference catalog) and output is not centered at zero for these images in the z and y bands, not even for the objects used for photometric calibration. This bias is small (<~ 20 mmags). What I think is going on is that the background is over-subtracted and the zeropoint is slightly brighter than it should be. However, I can be completely wrong since I am no expert on this. Maybe this is a different effect kicking in. Any insights are welcome. |
I made some plots using r-band visit 181900. Whisker plot (PhoSim): Same plot for imSim (1.2i): Comparison between e1 and e2 for matched objects (detected in both PhoSim and imSim) using HSM e1,e2 (regauss):
And comparing the distribution of the module of the measured ellipticity for all (matched) objects: I will repeat these plots for other bands. @rmjarvis is there anything else you'd like to see? @jmeyers314 do these whisker plots make sense? |
@fjaviersanchez It looks like there is an overall scale factor difference in the whisker plots that isn't in the histograms. The scale says that the arrow lengths are the same. Is that correct? |
@cwwalter - I believe the whisker plot is the PSF ellipticity, while the histograms are for galaxies, so they have rather different information in them. |
Great. These look good Javi. I think it would also be useful to make the same histogram plots for the stars as you did for the galaxies. I am a little surprised the PhoSim PSF whiskers are so large. The ImSim whiskers look more consistent with what I would expect from DES data, so I think our atmospheric model there is pretty reasonable. Although admittedly, the exposure times are very different, so my intuition might not be right here. (@jmeyers314 may have comments as well about this, since he's looked at more of these than I have, I suspect.) Another one that would be nice is a size/magnitude diagram. x-axis is magnitude, y-axis is T=Ixx+Iyy. Maybe color code the stars vs galaxies. We should see a nice flat locus for the stars that tips up at the bright end. This latter effect is first brighter-fatter, and then saturation. I think we should be able to see the B/F effect even on single exposures, but it will be subtle. |
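A minimal sketch of a size/magnitude diagram of the kind suggested above, using synthetic stand-in data (the numbers, locus shape, and filename here are illustrative assumptions, not pipeline outputs):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")          # headless backend
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)

# Synthetic stand-in catalog (illustrative only):
# stars: flat T locus with a slight tip-up at the bright end (mimicking
# brighter-fatter, then saturation); galaxies: scattered above the stellar locus
mag_star = rng.uniform(16.0, 24.0, 500)
T_star = 4.0 + rng.normal(0.0, 0.05, 500) + 0.3 * np.clip(18.0 - mag_star, 0.0, None)
mag_gal = rng.uniform(18.0, 25.0, 500)
T_gal = 4.0 + rng.lognormal(0.5, 0.6, 500)

fig, ax = plt.subplots()
ax.scatter(mag_star, T_star, s=3, c="tab:blue", label="stars")
ax.scatter(mag_gal, T_gal, s=3, c="tab:red", label="galaxies")
ax.set_xlabel("magnitude")
ax.set_ylabel("T = Ixx + Iyy  [pix^2]")
ax.legend()
fig.savefig("size_magnitude.png")
```

With real catalogs, mag and the second moments Ixx, Iyy would come from the src tables instead of the random draws above.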
As we are now referring people to this extraordinarily long thread for information about ImSim and PhoSim PSF ellipticities, and the thread has many twists and turns, it would be beneficial for the thread to have a summary. I’m providing one here, in two parts that will appear in separate comments. (I will also edit the very first comment in the thread to point to this summary, so people know there is one, and point to this summary from issue #267.) If anybody has a concern about or disagreement with this summary, please comment here; I will edit the summary to correct any mistakes pointed out by others, so that hopefully it can stand on its own. Summary of ImSim PSF ellipticity investigations: For the set of visits that were checked, the total PSF ellipticity in ImSim Run 1.2i was found to be relatively too round compared to that from PhoSim and compared to expectations from the LSST SRD. In the LSST SRD section 3.3.3.3, table 13 gives the following design specifications for r and i band: for seeing of 0.69 arcsec and a zenith distance of <10 degrees, the ellipticity (defined as (1-q^2)/(1+q^2)) should have a median no larger than |e|=0.04, with <=5% exceeding |e|=0.07. This is the total ellipticity including all contributions. The ImSim results were more like 100% with |e| < 0.07 and the median well below 0.04, so nominally satisfying the LSST SRD requirements, but with more margin than we’d expect in reality. By comparison with the literature and some internal tests, the consensus was that the ImSim atmospheric PSFs were roughly acceptable (e.g. the discussion around and following this comment) and the optical PSF contribution is more likely responsible for the discrepancy. We know the optical PSFs come from a very limited number of simulations and do not contain all the physics expected in reality (cf. comments from Bo, who ran the AOS simulations). 
Josh followed Bo’s recommendation to enhance the optical PSF aberrations until the PSFs in his small-scale testing setup had a more typical ellipticity distribution. He also identified a few bugs in some recently-implemented ImSim functionality, and fixed those bugs (cf. Josh's summary here). We should close the loop and check that this has worked as expected in Run 2.0i, by checking the observed PSF ellipticities. |
Summary of PhoSim PSF ellipticity investigations: For the set of visits that were checked, the total PSF ellipticity in PhoSim Run 1.2p was found to be relatively too elliptical compared to expectations from the LSST SRD. In the LSST SRD section 3.3.3.3, table 13 gives the following design specifications for r and i band: for seeing of 0.69 arcsec and a zenith distance of <10 degrees, the ellipticity (defined as (1-q^2)/(1+q^2)) should have a median no larger than |e|=0.04, with <=5% exceeding |e|=0.07. This is the total ellipticity including all contributions. Javi has accumulated statistics across 22 r-band visits in Run 1.2p with seeing between 0.6" and 0.8" and alt>80 deg, which should be expected to meet these requirements. However, the PSFs are roughly a factor of 2 too elliptical: median |e|~0.06 and 5% exceeding 0.14. (See plots in comments below this one.) John has reported that PhoSim PSFs were consistent with the statements about PSF ellipticity in the LSST SRD at some point (I am not sure which PhoSim version). Since this does not appear to be the case in Run 1.2p, further investigation is warranted. Possibly we should check a wider range of visits. |
I checked some of the available visits in 1.2p with the latest processing and the results are attached below. Please, feel free to request any changes or new plots. Thanks! |
Thanks very much for this, @fjaviersanchez ! I was wondering if you could clarify a few things about the plots:
|
@rmandelb thanks for your feedback, I am putting some of the answers to your questions below and will address your suggestions in the next iteration of the plots.
These are the ellipticity values for objects matched to stars (so not randomly located in the focal plane but almost). Each panel corresponds to a single visit as you mentioned.
I will add the seeing in the title and move the definition of |e| to the x-label. Thanks!
These are different visits than before (since before I just picked randomly without any altitude restrictions and the ones I got had lower altitude). I am limited by the availability of visits in the right range of zenith angle (I checked the instance catalogs and there are ~20 in r-band and another ~20 in i-band with alt>80 deg.). The selection of the visits is automated and I can accumulate all visits in the same histogram if that's more useful (I can also add 0.6" < seeing < 0.8" to the list of restrictions).
Sure! Thanks for the tip! |
I think that would be really helpful for use of this script in general (for any of the test runs so far). Thank you! |
@rmandelb @rmjarvis here I am putting all the r-band visits (so a total of 22 visits) with seeing between 0.6" and 0.8" and alt>80 deg for run 1.2p. The black line shows the median value of the distribution and the red line shows the 95th percentile: Please, let me know if you have any other suggestions for this plot. |
Thanks Javi. Update: Fixed two typos pointed out below by Rachel. |
Since I keep getting this backwards let me just ask: if you were to use the "other" definition of e which is a factor of two different, would that make Javier's measurement agree with the requirement or make it twice as bad? |
@cwwalter I think that with the definition for |e| that I was using before you would divide by two and that would make it roughly agree with the requirements in LSST's SRD. |
@cwwalter @fjaviersanchez - The LSST SRD uses the definition in the plots. If we want to use the other definition, then we'd have to translate the requirements to that definition, so the disagreement would be equally bad or good (both the |e| values and the requirements we compare against would change by a factor of 2). If you only change these plots and don't change the requirements, then you'd be comparing apples and oranges. @rmjarvis - I agree with your interpretation (though you have two typos: your 0.6 and 0.4 should be 0.06 and 0.04, which you might want to fix to avoid future confusion). I am also comfortable that we have an ensemble of visits that (a) meet the range of zenith angle and seeing for which the LSST SRD requirement is placed, and (b) is large enough that this is unlikely to be a fluke -- so I will update my summary comment to reflect this. @fjaviersanchez - thanks again for putting this script together. It'll be useful to run on Run 2.0i once there is a sufficient sample of processed visits in the relevant bands. Is this checked into this repo or DC2-analysis or somewhere else? It would be good to make sure we have this somewhere for posterity, since it's a useful diagnostic for any simulation run. |
Hi Rachel, Yes, I understand. The reason I asked is that I was wondering if a convention misunderstanding between the PhoSim team and the SRD authors could have led to thinking the requirement was being met in PhoSim before. Just gathering info/clues. |
Got it. Yes, if measurements are done with the other convention than the one that is written in the SRD, then the |e| distributions would appear to meet the requirements. |
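For reference, the factor of two between the two common |e| conventions follows directly from their definitions in terms of the axis ratio q = b/a (a generic sketch, not tied to any particular measurement code):

```python
def distortion(q):
    """|e| = (1 - q^2) / (1 + q^2), the convention used in the LSST SRD table
    discussed above (q = b/a is the axis ratio)."""
    return (1 - q**2) / (1 + q**2)

def shear(q):
    """|g| = (1 - q) / (1 + q), the 'other' convention."""
    return (1 - q) / (1 + q)

q = 0.93                        # a nearly round object
e, g = distortion(q), shear(q)  # e ~ 0.072, g ~ 0.036
ratio = e / g                   # ~2 in the nearly-round limit
```

The exact relation is e = 2g / (1 + g^2), so for small ellipticities the two differ by almost exactly a factor of 2, which is why measurements in one convention compared against requirements written in the other appear off by that factor.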
I have the code here and will add it to DC2-analysis. Thanks for the suggestion. |
Re the PSF ellipticity difference between ImSim and PhoSim: I think it's probably not so bad in fact to have a difference between the two simulations. There is quite a lot of confusion about how the SRD requirements were derived. In particular, they were apparently derived from PhoSim simulations of the expected LSST optical system and atmospheric parameters, including comparing to short-exposure Subaru data for the atmospheric part. Rachel says she and Chihway went back and looked into that thread again, and it's not clear that the current PhoSim results are actually too elliptical. There were similar confusions about the ellipticity definitions at the time (not unlike in this thread), and it is at least possible that PhoSim is doing the right thing and the SRD is artificially too round in its requirement. On the other side, we think that ImSim is probably doing a good job of matching the SRD requirements as currently written, and we even think from first principles that the atmospheric part is basically correct, and the optical part is plausibly close to correct, with a big caveat that we just arbitrarily amplified the range of structural deviations that will drive the optical aberrations. So I think a fair summary is that ImSim and PhoSim span the plausible range of PSF properties (ellipticity in particular) that we might expect from the real LSST camera. Hopefully it won't be much worse than PhoSim's level. And it seems unlikely it would be much better than ImSim's. Since reality will likely be somewhere in this range, having two simulations that bracket the plausible range seems like a good thing. Eventually, commissioning will tell us more about what to expect with respect to both components of the PSF, so for the next round of simulations, we can more accurately try to target the realized PSFs as seen on the actual LSST telescope and camera. Until then, I propose to just keep the ImSim and PhoSim PSF models as they currently are. |
I agree with Mike's proposal. To amplify a bit on his comment: the Subaru (optics + atmosphere in short exposures) vs. PhoSim (atmosphere only, optics turned off) comparison that Chihway did in collaboration with John and others back in 2011, using short-exposure Subaru data from James Jee, showed quite similar PSF ellipticity distributions for those two cases. So, if Subaru optics are essentially negligible in their contributions to the PSF ellipticities in the Subaru images, then this provides a validation of the atmospheric PSF ellipticities in PhoSim. If the Subaru optics contributions to the PSF ellipticity are not negligible, then this comparison suggests that the atmospheric PSF contributions in PhoSim are too elliptical. While Josh had thought there was evidence that (at least for HSC) the Subaru optics are a non-negligible contributor to the PSF ellipticity, we do not seem to have data of the right type to strongly distinguish between these possibilities and do a proper quantitative comparison that appropriately accounts for the Subaru optics. So that is why we are proposing that at the DC2 level, it may be appropriate to punt on this question of the appropriate level of PSF ellipticity in LSST images, relying on the simulations for now to potentially bracket reality and on future simulation campaigns to use an LSST data-driven approach to validation of the PSF ellipticity distributions. |
Rachel-
I largely agree with you and Mike. I don’t think you should read too much into the Subaru comparison, as it is a different physical system entirely, so you would expect a different fraction of ellipticity for different telescopes for a variety of reasons. There are also many aspects of the atmospheric conditions in the simulations (whether you subtract off the common-mode ellipticity, the exposure time, guiding on vs. off) that all affect the ellipticity, as noted in our previous publications.
Another topic: I also am not sure about what you mean about ImSim & PhoSim bracketing reality. ImSim uses PhoSim aberrations from AOS simulations (BTW, I’m not sure it is appropriate to simply stick this into another code), but ImSim should simply agree with PhoSim on the ellipticity distribution, so there must be some implementation mistakes.
Regards,
John
|
Thanks all - this is a useful discussion. Mistakes are one possibility, different choices in implementation are another - I'm not sure the best way for us to track them down, but the method will be the same in each case I think. Regarding how ImSim uses PhoSim, I would think that making good use of PhoSim's outputs was fair and appropriate use of the PhoSim code, provided correct attribution is given, of course. For example, in the comments above I see Josh explaining the origin of the optical PSF model in ImSim at #259 (comment) and then #259 (comment). We'll need to capture that piece of methodology (the interplay between Zemax, PhoSim and ImSim) with all relevant citations in the ImSim and DC2 survey papers. Do the relevant docstrings also include the appropriate links? |
Hi Phil and John and all. Yes, as Phil summarized above and pointed to comments by Josh, all this optical work (mostly done by @djperrefort) has a model which is independent of PhoSim. The atmospheric model is of course completely independent. For the optical aberrations, PhoSim info was used in the following way: Both the nominal optical state (from @aaronroodman) and the sensitivity matrix (the response to the optical system's degrees of freedom) were built with Zemax independently of PhoSim by @bxin. Those are the parts that are embedded in the algorithms which are in imSim. However, when Bo wanted to come up with a handful of representative optical states with a Python program, he ran the PhoSim-based AOS program, which had been validated with Zemax. That PhoSim program is an active-loop simulation, and so it decides how much to perturb the input degrees of freedom to get the system into alignment. We have (I think) 7 examples of this from running the program. So we used the distribution of those deviations that were applied during the PhoSim run to extract a sigma to use with our random number generator as the input to the algorithms in imSim. Then, after the ellipticity study, those random input deviations were artificially inflated in order to better match the LSST SRD requirements. So now repeating the comment from @jmeyers314 with that context: "In ImSim, we're using a very small set of simulations, done with phosim, of the active optics system being applied to a series of consecutive exposures. I.e., we simulate a telescope, including misalignments/deformations, see what donut images are produced in the simulation, run the correction for the active optics to remove misalignments and deformations, and then proceed to the next exposure. The standard deviations of the uncorrected misalignments/deformations are what we're using in ImSim. The problem is, there were only 7 exposures in the simulation above. 
So it's quite likely that we haven't captured the full range of residual misalignments that LSST will experience in practice. That's why I think it may be totally reasonable to increase the misalignment coefficients (but use the same Zemax sensitivity matrices)." Hopefully @djperrefort @bxin or @jmeyers314 will correct any mistakes I might have made above. |
Chris-
Please remove the PhoSim aberration results. It is not appropriate to copy years and years of work, and then shoehorn it into another code without diligent citation & study. Furthermore, some of what you say below doesn't even make sense: misalignments don't even cause PSF ellipticity.
Regards,
John
|
John: Regarding the optical aberrations, Chris is describing how the PhoSim outputs are used to calibrate ImSim. As I wrote above, that seems to me to be fair use of a public software package. I understand many similar tools in use in High Energy Physics take a similar approach (GEANT4, for example). I agree that when using the products of an external package, diligent citation is required - and I expect any papers written about ImSim or its outputs will include that, as well as the code documentation itself. I think the way ImSim should be viewed in this case is as building on your PhoSim work on optical aberrations, rather than copying it. |
No this is not appropriate. Please remove it as soon as possible. You have to actually "build upon it", and not just move it somewhere else and call it something else (i.e. copying without attribution). Furthermore, it is producing incorrect results, so you might want to at least think about that. |
Also saying "The atmospheric model is of course completely independent" and saying "independently of PhoSim" is the definition of non-attribution. I am disappointed in the ethics of this discussion. |
While I greatly value the importance of setting this right, and value even more the peaceful conclusion that all the concerned parties will no doubt reach through discussion/acknowledgements etc., I believe that this now needs to move out of this specific GitHub issue and into a more private setup. Please. |
and the code used to do this is not public. |
Hi John, Unfortunately I have too many deadlines due right now to discuss this in detail today, but I'm afraid there must be some misunderstanding. There isn't anything copied anywhere to remove. The atmospheric ray-tracing code we are using in GalSim was written by Josh based on his long work on atmospheric PSFs etc. that you have seen at places like collaboration meetings, and he used the FFT code there to compare and validate it and understand how well it was working. Our approach for the parametric optics response was to write an algorithm suggested by Aaron that uses the output of our nominal optics and the response of our system modeled in Zemax by Aaron and Bo. Once we had all the code to do this written and validated, we wanted to know how much, during an observing run, the actuators actually move each d.o.f. in the system under closed loop. This is an input to the program. To do this we used the output of the LSST systems engineering group's studies from Bo. The code that produced the result we used includes PhoSim. These estimates from SE of how much the AOS varies the actuators are the input to the program. Of course we don't want to hide this. We want to give credit to all involved and are documenting it. But, as Johann says, this is not really the place for this discussion, as this thread and associated issues are being actively used by the validation team. Sincerely, Chris |
atmosphere: No, it comes from Jernigan and was developed in 2012 in Peterson & Peng (PIN14), Kahn (PIN15), and published in Peterson et al. 2015. optics: This used proprietary content that was years of work, and is not part of the validated open-source licensed releases. So this should be removed, and it's not really appropriate to just copy this anyway. |
Thanks for all your inputs, everyone: we'll take Johann's advice, and move this discussion offline. Let's start new issue threads for new topics regarding data validation. |
In this issue I wanted to keep track of the status of the validation of 1.2i. (Note added by Rachel: a summary of PSF ellipticity investigations from this very long thread can be found here.)
Exposure checker: @kadrlica found some issues with the sky level gradients. More details at the #desc-dc2-eyeballs channel. Follow up needed for run 2.0.
Preliminary comparisons of calexps between imSim KNL and imSim Haswell show perfect agreement (only g-band available for now):
Here I am plotting the difference between the position of the detected objects in 100 imSim KNL visits (src catalogs) and imSim Haswell visits; the detected objects lie in the exact same positions. I am showing a cutout to demonstrate that the centroids lie in the same positions:
And here I am plotting the difference of Kron measured magnitude for the same catalogs. Again the agreement is perfect:
Here I compare the position of PhoSim detected objects with matched imSim detected objects:
And here I compare their Kron measured magnitudes:
Finally I show a cutout with imSim image and the measured centroids for PhoSim (+) and imSim (x):
4.1: i-band (using 100 sensor/visits)
Astrometry residuals for the calibration objects (are we still using some galaxies, or are these matching mistakes?)
Photometric residuals for calibration objects only
Photometric residuals for stars (extendedness==0 and matched to a star in the input catalog):
Astrometric residuals for stars (defined as above):
Detection efficiency of stars:
4.2 g-band (300 visits for now)
Photometric residuals for calibration objects only:
Astrometric residuals for calibration objects only:
Photometric residuals for stars (defined as above):
Astrometric residuals for stars:
Detection efficiency for stars:
Should we be worried about these astrometric residuals in g-band? @RobertLuptonTheGood @wmwv @jchiang87 @cwwalter
For g-band I am using visits 159494 and 183811.