Issue/60/simplest error model #66
Conversation
@rbiswas4 These are ready for a look. Thanks.
Looks good, Bryce! Did you decide to keep lightcurve_example.ipynb
as a simple intro demo? If you do want to keep it, you might just tweak the headings, layout etc. of this notebook so that it matches your other two notebooks. I guess the package README should show links to these three notebooks as well, to help newcomers see what the Monitor can do. The code looks to be working pretty well, except for some warnings which make the notebooks hard to read. The main problem I had was in simple_error_model.ipynb
, with the following message repeated dozens if not hundreds of times:
python/desc/monitor/monitor.py:448: UserWarning: Boolean Series key will be reindexed to match DataFrame index.
(self.flux_stats['bandpass'] == in_band))]
If you can fix the code so it does not throw these and other warnings, it'll make your analysis much easier to understand.
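For context, that pandas warning typically comes from chained boolean indexing: a mask built on the full DataFrame is applied to an already-filtered subset, so pandas has to reindex the boolean Series. A minimal sketch of the pattern and a warning-free rewrite (toy data, not the Monitor's actual tables):

```python
import pandas as pd

df = pd.DataFrame({
    'bandpass': ['u', 'g', 'u', 'r'],
    'flux': [1.0, 2.0, 3.0, 4.0],
})

# Warning-prone pattern: the inner boolean Series is built from the full
# DataFrame but applied to an already-filtered subset, so pandas reindexes
# it and emits the UserWarning.
# subset = df[df['flux'] > 1.0][df['bandpass'] == 'u']

# Warning-free: combine both conditions into a single boolean mask.
subset = df[(df['flux'] > 1.0) & (df['bandpass'] == 'u')]
print(subset['flux'].tolist())  # [3.0]
```

The same single-mask approach should apply to the `self.flux_stats['bandpass'] == in_band` selection in monitor.py.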
Last thing: you talk about comparisons between DM and OpSim values - but I guess it's really PhoSim+DM and OpSim that are being compared. Can you add some discussion of this point, please? We don't want the reader going away thinking that there are errors in the DM measurements when there are not... What do you think?
Sorry about the delay. This is very nice in general. I have some trivial changes to request. In terms of the error model notebook: I think we should split this into a notebook called demo_something
(comments in monitor point to this notebook) and an analysis one. Given other things I have been looking at, I am not as worried as before by the analysis discrepancy, but we should try out something else.
CreateTruthDB.py
- module name should start with lower case (PEP 8 is all lowercase, but I think LSST is camelCase)
- doc string should explain `True` for people who have not been working with this group. A reasonable confusion is `True` in the catsim catalog vs `True` in the phoSim simulation.
- https://github.com/LSSTDESC/Monitor/blob/issue/60/simplest_error_model/python/desc/monitor/CreateTruthDB.py : Attribute -> returns ?
- It might be useful to define `for_visits` as an int or sequence of ints. A useful trick in such cases is to wrap the argument in `ravel`: `for_visits = np.ravel(for_visits)`
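The `np.ravel` trick mentioned above normalizes a scalar or any sequence into a 1-D array, so the rest of the function only ever sees an iterable. A minimal sketch (the function name is illustrative, not from the Monitor code):

```python
import numpy as np

def load_visits(for_visits):
    # np.ravel accepts a scalar, list, or array and always returns a
    # 1-D ndarray, so callers may pass one visit ID or many.
    for_visits = np.ravel(for_visits)
    return [int(v) for v in for_visits]

print(load_visits(5))        # [5]
print(load_visits([1, 2]))   # [1, 2]
```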
- This is not really a Twinkles 1.1 problem, but should we worry that loading all of the truth database into a dataframe might overload the memory?
- https://github.com/LSSTDESC/Monitor/blob/issue/60/simplest_error_model/python/desc/monitor/CreateTruthDB.py : Consider moving the hardcoded 11 mag (correct for the cases we are studying!) to a parameter? This code will be used in the future, and we might potentially forget the hardcoding, leading to confusion.
- Why have this zero point : https://github.com/LSSTDESC/Monitor/blob/issue/60/simplest_error_model/python/desc/monitor/CreateTruthDB.py#L142 (I think DM prescribes maggies in the LSST bands as preferred units, do additional units help?)
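For reference on the preferred units mentioned above: a maggie is a linear flux unit tied to the AB system, defined so that a source of AB magnitude m has a flux of 10^(-0.4 m) maggies. A minimal sketch of the conversion (the function name is ours, not part of CreateTruthDB.py):

```python
def ab_mag_to_maggies(mag):
    # A maggie is the flux of a zero-magnitude AB source, so
    # m_AB = -2.5 * log10(flux_in_maggies).
    return 10 ** (-0.4 * mag)

print(ab_mag_to_maggies(0.0))   # 1.0
print(ab_mag_to_maggies(22.5))  # 1e-09, i.e. one "nanomaggie"
```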
- https://github.com/LSSTDESC/Monitor/blob/issue/60/simplest_error_model/python/desc/monitor/CreateTruthDB.py : What happens if the table exists? (fallback)
- I think it is important to specify in the docstring whether the ra, decs are in J2000 or transformed coordinates
dbConnection.py
- In contrast, class names should start with upper case letters. Both new classes are lower case.
- get rid of blank lines in doc strings within the parameters section
- same comment about adding a docstring on `True`, and similarly that `object` is from the DM stack (and not catalogs)
monitor.py
- `self.flux_stats['bandpass'] == in_band))]` seems to throw tons of warnings in the notebooks.
create_truth_db.ipynb
- I do not see the star_cache.db in the data folder
Notebooks: https://github.com/LSSTDESC/Monitor/blob/issue/60/simplest_error_model/examples/simple_error_model.ipynb
- There is a development (in terms of my understanding) that we should check. The opsim sky brightnesses in y are crappy: There are only two unique values of filtskybrightness in this band. The new error model for the sky is different particularly in the y band. Look at the output 15 of the comparison. Maybe we should try this on the opsim version of the values.
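A quick way to spot a degenerate column like that is to count the distinct sky-brightness values per band. A sketch with a toy DataFrame (the column names follow OpSim's `filter`/`filtSkyBrightness` convention; the values are made up):

```python
import pandas as pd

# Toy stand-in for an OpSim summary table; in the degenerate case the
# y-band rows carry only a couple of distinct sky-brightness values.
opsim = pd.DataFrame({
    'filter': ['y', 'y', 'y', 'r', 'r', 'r'],
    'filtSkyBrightness': [17.3, 17.3, 18.1, 21.2, 20.9, 21.5],
})

# Count distinct sky-brightness values per band; a suspiciously small
# count (e.g. 2 in y) suggests placeholder values rather than real skies.
unique_counts = opsim.groupby('filter')['filtSkyBrightness'].nunique()
print(unique_counts.to_dict())  # {'r': 3, 'y': 2}
```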
- The more trivial comment is that this notebook is both a demo and an analysis notebook. We should split this up and call them as such. The demo should be for people wanting to use Monitor, and we don't need to assail all such people with Twinkles analysis.
- (for future) I think many of these kinds of plots are things that we will do again and again. We should save up some of the good work here into a visualization package so that we can efficiently reuse this.
Ok. I think I've addressed most of the requests from the last update. The only thing is that instead of splitting up the error model notebook as you proposed, Rahul, I explicitly mention in the README that the depth_curve notebook is a good intro to the Monitor and that the simple_error_model is a sample of the analysis we are doing with it. If I really missed anything else though, let me know. Thanks.
Sounds good enough! Let us go forward.
Looks good, Bryce :-) However, I wrote previously:
I still think it's best to address this, even in one sentence, in the error model example notebook - and it certainly should be in the DESC Note on the Twinkles 1 error model. I think we want a caveat like the following in the conclusions: "There are two possible sources of discrepancy, either 1) the PhoSim realization of the OpSim observation, or 2) the DM processing of the PhoSim realization. If we simply want to predict DM measurements given OpSim inputs, the error model shown here (and its descendants) will enable that. However, when looking to develop an accurate inference scheme for real data, we will need to isolate step 2) by comparing with PhoSim 'inputs'. This kind of investigation will also be enabled by the Monitor functionality demonstrated here." What do you think?
Ah, ok. I will add that in. I was interpreting your original comment as more just making sure to clearly label what the results coming out of the database were. I understand now what you meant and will add that paragraph into the conclusions. I also think that definitely helps explain where we are currently at with the error model as shown in the notebook. Thanks.
I approve my own paragraph! :-) Thanks Bryce, and well done on a nice set of notebooks. As Rahul says: Onwards!
Here to follow along development of simple error model during DESC Hack Week.