get smarter about how we use the pixel masks #34
@andycasey and @davidwhogg - important: make sure you train your model using the apStar spectra for all tests on apStar spectra; don't use the model trained on the aspcapStar spectra - these are not equivalent.
More specifically, we can't treat all mask bits equally; which ones should we care about?
@mkness doing that now.
@davidwhogg: good. I use everything in the mask not set to 0, and set the error value to LARGE everywhere the mask is != 0. Other things may work, e.g. taking only a subset of the mask bits that one should care about, but I did not experiment much with different combinations here. I found that excluding everything not set to 0 gave smaller scatter than not using the mask at all, and it worked well.
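The approach described above can be sketched as follows (a minimal illustration, assuming numpy arrays; the function name `apply_pixel_mask` and the `large_error` value are hypothetical, not from the project's code):

```python
import numpy as np

def apply_pixel_mask(ivar, mask, large_error=1e10):
    """Down-weight every flagged pixel: wherever the pixel mask is
    nonzero, set the error to a very large value, which drives the
    inverse variance toward zero."""
    ivar = np.array(ivar, dtype=float, copy=True)
    ivar[mask != 0] = 1.0 / large_error**2
    return ivar

# Toy example: pixels 1 and 3 carry mask flags.
ivar = np.ones(5)
mask = np.array([0, 1, 0, 4096, 0])
ivar_masked = apply_pixel_mask(ivar, mask)
```

Here the flagged pixels end up with inverse variance ~1e-20, so they contribute essentially nothing to the fit, while unflagged pixels keep their original weight.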
@mkness we did this at first (before stacking the spectra ourselves), but we found that a lot of spectra had persistence across the entire blue chip and part of the green chip, so we were losing a lot of information that we could see was actually there. Did you experiment with setting a different error value for just the persistence-flagged pixels?
Just recording here that @mkness said perhaps to consider only mask bits 0, 12, 13. The current code version considers every mask bit except 9, 10, 11.
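Selecting a subset of mask bits, as suggested above, comes down to a bitwise AND against the bits of interest. A minimal sketch (the names `CARE_BITS` and `is_bad` are hypothetical, and which bits to keep is exactly what this issue is debating):

```python
import numpy as np

# Bits the comment above suggests caring about (vs. the current
# code, which uses every bit except 9, 10, 11).
CARE_BITS = (0, 12, 13)
CARE_VALUE = sum(1 << b for b in CARE_BITS)

def is_bad(mask, bits=CARE_VALUE):
    """True wherever any of the selected mask bits are set."""
    return (np.asarray(mask) & bits) != 0

mask = np.array([0, 1 << 12, 1 << 5, (1 << 13) | (1 << 2)])
bad = is_bad(mask)  # flags only pixels with bit 0, 12, or 13 set
```

A pixel flagged only by an ignored bit (here bit 5 or bit 2 alone) would pass through unmasked, which is the behavioral difference between the two schemes.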
What we have now seems to work, so I am closing this issue.
Right now we are not sure how to edit the inverse variance, given the pixel masks.