Fix count rate estimation error for particularly bad GTIs #798
Conversation
pep8speaks (bot): Hello @matteobachetti! Thanks for updating this PR. We checked the lines you've touched for PEP 8 issues, and found: there are currently no PEP 8 issues detected in this pull request. Cheers! 🍻 (Comment last updated at 2024-02-21 09:20:51 UTC)
Codecov Report: All modified and coverable lines are covered by tests ✅

@@            Coverage Diff             @@
##             main     #798      +/-   ##
==========================================
+ Coverage   96.27%   96.39%   +0.11%
==========================================
  Files          44       44
  Lines        8855     8866      +11
==========================================
+ Hits         8525     8546      +21
+ Misses        330      320      -10

View full report in Codecov by Sentry.
Force-pushed from 85ffe28 to df8c43f
…es that are used for estimation
Force-pushed from df8c43f to 205c415
On a philosophical level, I think I'd prefer an if-else logic where you first try to set either ctrate variable to something sensible, and only if that doesn't work set it to NaN, rather than setting it to NaN by default. This is mainly because it makes it easier to think through the logic ("this should be a value, and if it's not, we'll set it to NaN so things fail gracefully"). But that seems like a bit of an abstract point I'm not sure is worth arguing about. :)
Good point. Sometimes I tend to think about the logic that makes it easier to cover all options with tests, rather than making the code easier to read 😅. I changed the logic slightly, following your suggested approach.
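The pattern agreed on above can be sketched as follows. This is an illustrative example, not the actual Stingray code: the function name and arguments are hypothetical, but it shows the "try to compute a sensible value first, fall back to NaN with a warning" logic the reviewer suggested.

```python
import warnings

import numpy as np


def estimate_ctrate(counts, exposure):
    """Hypothetical sketch of the agreed logic: compute a count rate
    when the inputs allow it, and only otherwise fall back to NaN."""
    if exposure > 0 and counts.size > 0:
        # The normal path: a sensible value is computed first.
        ctrate = np.sum(counts) / exposure
    else:
        # Fallback: warn and degrade gracefully instead of raising.
        warnings.warn("No valid data to estimate the count rate; using NaN")
        ctrate = np.nan
    return ctrate
```

Structuring it this way makes the happy path explicit and confines the NaN fallback to a single, clearly flagged branch.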
In StingrayTimeseries.fill_bad_time_intervals, when the buffer size is small and count rates are low, the count rate estimation could fail with a numpy error. For example, the test test_no_counts_in_buffer I added failed. Here, I catch the condition and warn that this is happening. I also made the estimation more robust when only one of the GTIs on either side of the bad time interval has good data.
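The one-sided-GTI case described above can be sketched like this. The helper name and signature are hypothetical (the real logic lives inside fill_bad_time_intervals); the sketch only shows the idea of using whichever neighboring GTI buffers actually contain counts, and warning when neither does.

```python
import warnings

import numpy as np


def combine_side_rates(rate_before, rate_after):
    """Hypothetical helper: combine count-rate estimates from the GTIs
    on either side of a bad time interval. Either estimate may be NaN
    when that side's buffer contains no counts."""
    # Keep only the sides that produced a finite estimate.
    good = [r for r in (rate_before, rate_after) if np.isfinite(r)]
    if good:
        # One or both sides have data: average what is available.
        return float(np.mean(good))
    # Neither buffer has counts: warn and fall back to NaN.
    warnings.warn("No counts in the buffers around this bad time interval")
    return np.nan
```

With this shape, a single good side still yields a usable estimate, and the empty-buffer case degrades to NaN with a warning instead of a numpy error.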