reduce pvss computation via find_peaks #195
Conversation
@CrepeGoat @theflanman you may have a comment, no pressure though (hope all is well)!
Codecov Report

|          | development |   #195 |    +/- |
|----------|------------:|-------:|-------:|
| Coverage |      69.09% | 69.56% | +0.46% |
| Files    |          34 |     34 |        |
| Lines    |        2964 |   3036 |    +72 |
| Hits     |        2048 |   2112 |    +64 |
| Misses   |         916 |    924 |     +8 |
Next I will add a function to generate these four-coordinate plots.
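(For readers outside this thread: a four-coordinate, or "tripartite," plot overlays diagonal gridlines of constant pseudo-acceleration and constant displacement on a log-log pseudo-velocity spectrum. Below is a rough matplotlib sketch of that layout, not the function this comment refers to; `freqs` and `pvss` stand in for the frequency bins and pseudo-velocity values of a computed shock spectrum.)

```python
import numpy as np
import matplotlib.pyplot as plt

def tripartite_plot(freqs, pvss):
    """Plot a PVSS with four-coordinate (tripartite) gridlines."""
    fig, ax = plt.subplots()
    ax.loglog(freqs, pvss, label="PVSS")

    # Constant pseudo-acceleration lines (slope -1 in log-log): PV = A / (2*pi*f)
    for accel in 10.0 ** np.arange(-2, 5):
        ax.loglog(freqs, accel / (2 * np.pi * freqs), "k:", lw=0.5)

    # Constant displacement lines (slope +1 in log-log): PV = 2*pi*f * D
    for disp in 10.0 ** np.arange(-6, 1):
        ax.loglog(freqs, 2 * np.pi * freqs * disp, "k--", lw=0.5)

    ax.set_xlabel("Natural Frequency (Hz)")
    ax.set_ylabel("Pseudo Velocity")
    ax.set_ylim(pvss.min() / 2, pvss.max() * 2)
    return fig
```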
@StokesMIDE in looking back at older commits, the bulk of this PR was passing on my second commit: https://github.com/MideTechnology/endaq-python/pull/195/checks?sha=d1968621e33c02ebf18046e7d024bdb2d3dc0724. So I made a new branch off that commit and opened a PR on it (#197), but that one is now failing, even though no code has changed since it passed. In this updated PR, which is now passing, I just commented out the unit tests that were causing issues; they pass when I run them locally. I'm not sure how to proceed here. Do we merge this in and then create an issue to figure out later how to get these unrelated unit tests passing again?
Added simple retry to `test_get_doc()`
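(For reference, a generic version of that retry pattern might look like the sketch below; this is an illustration, not the exact change made to `test_get_doc()`, and the `attempts`/`delay` names are made up.)

```python
import time

def call_with_retries(func, attempts=3, delay=1.0):
    """Retry a flaky callable a few times before giving up."""
    for attempt in range(attempts):
        try:
            return func()
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let the original failure surface
            time.sleep(delay)
```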
The tripartite plot is out of my wheelhouse, so I can't critique too deeply, but the code looks sound.
I'll need to dig into the batch tests; I think it's because I added the resultant...
But basically, I'm using `find_peaks` to only process the shock response function on peak events. For the recording I was testing on, it takes over 2 minutes to calculate the spectrum on the whole range, but only 2 seconds using just the peaks. For shorter recordings with repeating events the savings aren't as drastic, but one recording went from 7 seconds to 2 seconds. The question is whether, by doing this, we may be slightly missing some lower-frequency content.

Here's one axis comparing the full vs. the peak calculation; they match exactly (there are two lines drawn over each other):

![image](https://user-images.githubusercontent.com/35080650/169156776-1bdcada7-8d7b-44f6-b6f1-dbd2eb5f8f2c.png)

But for the less prominent axes we can see this slight deviation:

![image](https://user-images.githubusercontent.com/35080650/169156972-03b1bfb9-e80f-4c72-b055-1673e7506f8b.png)

I'm not sure if this is a big enough deal that we should allow the user to effectively turn off this performance feature by setting `max_time` to a very large number.
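(Roughly, the peak-windowing idea is something like the sketch below — not the exact code in this PR. It assumes `scipy.signal.find_peaks` and the `endaq.calc.shock.shock_spectrum` API; the `window` size and the 10%-of-maximum height threshold are illustrative choices, not this PR's actual parameters.)

```python
import numpy as np
import pandas as pd
from scipy.signal import find_peaks
import endaq

def pvss_from_peaks(accel: pd.DataFrame, freqs, damp=0.05, window=1000):
    """Compute a PVSS from short windows around peak events only."""
    # Use the per-sample maximum across axes as the event magnitude
    mag = accel.abs().max(axis=1).to_numpy()

    # Find well-separated peaks above 10% of the global maximum
    peaks, _ = find_peaks(mag, height=0.1 * mag.max(), distance=window)

    # Compute the spectrum on each short window and envelope (max) the
    # results, rather than filtering the full recording at every frequency
    result = None
    for p in peaks:
        segment = accel.iloc[max(p - window, 0):p + window]
        pvss = endaq.calc.shock.shock_spectrum(
            segment, freqs=freqs, damp=damp, mode="pvss")
        result = pvss if result is None else np.maximum(result, pvss)
    return result
```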