ENH: Bessel filters with different normalizations, high order #5279
Conversation
Test failures: might want to import xrange from scipy._lib.six for python 2/3 compat.
Ok I don't understand the remaining errors.
Are those floats not exactly 1? (They are on my machine.) And then is it not possible in NumPy 1.6.2 to evaluate a polynomial made of long ints?
@endolith for me
I've tested this a bit and the speed penalty is about 5x-30x for reasonably sized filters, which, given that it takes on the order of 50 us - 10 ms to construct one with this PR, is not an issue imho. You could have kept the hardcoded numbers and added the new method only for orders for which no hardcoded numbers are available. But this works and is ultimately clearer. So keep as is, I'd say.
x += alpha / (1 + alpha * beta)
assert all(np.isfinite(x)), 'Root-finding calculation failed'
This will be stripped from the code when run with python -O, so please don't use plain assert. I suggest using
if not all(np.isfinite(x)):
    raise RuntimeError('Root-finding calculation failed')
I suspect that a full implementation of Bini's algorithm may be of wider interest, not sure where though. @charris maybe for
@rgommers I'd certainly be open to an algorithm that could improve on the companion matrix approach, which I suspect is not at its best for the polynomial case. If nothing else, having an implementation would make it available for others to borrow.
Well, the coefficients should be exactly 1 for a passthrough, and that can be exactly represented in floating point, so is it failing because it's producing a number very close to 1, or because it just doesn't consider them equal?
Well, assert is for catching programming errors, and asserts are stripped out for speed, right? I'm not sure which tests should use assert and which shouldn't.
Yeah, I would work on a full implementation, but that's more work and tests, and I wanted to get this working alone first. Also there was a Fortran implementation that I couldn't figure out how to get running. Might be better to f2py that and then modify this to use it? Bini's Fortran 77 implementation is here: http://www.netlib.org/numeralgo/na10 and a Fortran 90 translation, called pzeros.f90, is here: http://jblevins.org/mirror/amiller/#pzeros
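For context, the update step quoted from the PR (`x += alpha / (1 + alpha * beta)`) is the Aberth-Ehrlich simultaneous iteration that Bini's algorithm refines. Below is a minimal, self-contained sketch of that iteration — not the PR's `_aberth`; the function name, starting guesses, and sign conventions here are my own:

```python
import numpy as np

def aberth(coeffs, tol=1e-13, maxiter=200):
    """Find all roots of a polynomial (coefficients highest-degree first)
    with the Aberth-Ehrlich simultaneous iteration.  Illustrative sketch
    only, not scipy's implementation."""
    coeffs = np.asarray(coeffs, dtype=complex)
    n = len(coeffs) - 1
    p = np.polynomial.Polynomial(coeffs[::-1])   # Polynomial wants low-to-high
    dp = p.deriv()
    # Slightly perturbed points on a circle as starting guesses,
    # to break any symmetry in the root configuration
    x = np.exp(2j * np.pi * np.arange(n) / n + 0.3j)
    for _ in range(maxiter):
        alpha = p(x) / dp(x)   # Newton correction for each estimate
        # Repulsion term: each estimate is pushed away from all the others
        beta = np.array([np.sum(1.0 / (x[i] - np.delete(x, i)))
                         for i in range(n)])
        delta = alpha / (1 - alpha * beta)
        x -= delta
        if np.all(np.abs(delta) < tol):
            break
    return x

# Roots of (s - 1)(s - 2)(s - 3) = s^3 - 6 s^2 + 11 s - 6
roots = aberth([1, -6, 11, -6])
print(np.sort(roots.real))
```

The repulsion term is what lets all roots be found simultaneously without deflating the polynomial between roots.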
The hardcoded numbers are for the
Yeah, no worries. It's good to start with a private implementation first and later move it to a public place. f2py should be the way to go, but the F77 source looks like it needs some cleanup. We can't use F90 code. |
The rule is simple in Scipy land: never any plain assert
This happens in the magnitude normalization stage, which should really be broken up into analog SOS anyway (but that hasn't been implemented yet):
Converting the polynomial to float doesn't lose accuracy in my tests, but converting the numerator to float loses some accuracy. Before:
After:
But that way it works in numpy 1.6.2 and still passes the tests.
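The accuracy loss from converting the numerator to float is plausible given how fast reverse Bessel polynomial coefficients grow: Python's arbitrary-precision ints hold them exactly, but float64 only represents integers exactly up to 2**53. A small illustration (the helper name is made up; the formula is the standard closed form for the coefficients):

```python
from math import factorial

def reverse_bessel_coeff(n, k):
    # a_k = (2n-k)! / (2**(n-k) * k! * (n-k)!) -- exact, since Python
    # ints are arbitrary precision (standard reverse Bessel closed form)
    return factorial(2 * n - k) // (2 ** (n - k) * factorial(k) * factorial(n - k))

c = reverse_bessel_coeff(20, 0)  # constant term; equals the double factorial 39!!
print(c > 2 ** 53)               # True: beyond the exactly-representable float range
print(float(c) == c)             # False: c is odd, so the conversion must round
```

Already at order 20 the constant term needs more than 53 bits, so any float conversion of the coefficients rounds them.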
@endolith is this ready now that it passes the tests, or is there more to do?
The
Indeed, at the end. |
I'll leave the
This is too long for an example, right?
It makes the 3 graphs on https://gist.github.com/endolith/3f74c4ec9ea623812cca |
That'll make 6 figures, not 3. The figures are quite nice. It's a bit long, but I think it is very helpful for users. I would suggest to keep 1 figure, because the ones for normalized phase/mag/delay are very similar. So if you choose one of those three and keep the code within the for-loop as one example, that's probably a good balance.
So then I need to add a
Or, instead, create three "types" of bessel? So
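For reference, the interface this thread converged on is a norm keyword on scipy.signal.bessel (shipped in SciPy 0.18+) rather than three separate filter types. A quick sketch of how the three normalizations differ at the cutoff, assuming that released API:

```python
import numpy as np
from scipy import signal

# Same 4th-order analog lowpass under each normalization (norm keyword
# per the SciPy 0.18+ API).  'mag' pins the -3 dB point exactly at Wn;
# 'phase' and 'delay' land near, but not exactly at, -3 dB there.
for norm in ('phase', 'delay', 'mag'):
    b, a = signal.bessel(4, 100, analog=True, norm=norm)
    w, h = signal.freqs(b, a, worN=[100.0])
    print(norm, 20 * np.log10(abs(h[0])))  # gain in dB at w = Wn
```

A single keyword keeps the three variants sharing one code path instead of duplicating the prototype logic per "type".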
@@            master    #5279    diff @@
========================================
  Files          234      234
  Stmts        43096    43129     +33
  Branches      8154     8145      -9
  Methods          0        0
========================================
+ Hit          33410    33439     +29
- Partial       2605     2607      +2
- Missed        7081     7083      +2
Instead of a list of hardcoded pole locations, the Bessel filter prototype is now calculated numerically using root-finding methods, and offers 3 different normalizations found in different sources.
change array_equal to allclose and convert asserts to regular exceptions
Numpy 1.6.2 failure: _norm_factor tested to be the same accuracy whether npp_polyval evaluates longs or floats up to order 148.
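The array_equal-to-allclose change in that commit is the usual fix for bitwise float comparison; the distinction in a couple of lines:

```python
import numpy as np

a = np.array([1.0, 1.0])
b = np.array([1.0, 1.0 + 1e-15])    # e.g. 1.0 reconstructed through arithmetic
print(np.array_equal(a, b))  # False: demands bit-for-bit equality
print(np.allclose(a, b))     # True: within default relative/absolute tolerances
```

This matches the earlier question in the thread: a coefficient that should be "exactly 1" but is produced by root-finding will typically be 1 only to within rounding error, which array_equal rejects.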
instead of "Nth order"
Ok I added the
Thanks @endolith
it currently says
It would be good to have the
oh, ok. the sos parameter just has text in the notes. should I change that to .. versionadded:: 0.16.0 in the parameters section too? |
oh I see,
and then I just found http://dx.doi.org/10.1109/TCT.1965.1082473 which might be a simpler/faster method for
For #3763 (comment)
Instead of a list of hardcoded pole locations, the Bessel filter prototype is now calculated numerically, using root-finding methods, and provides the possibility of 3 different normalizations, which are found in different sources. Now it's possible to generate (completely impractical) 500th-order Bessel filters in <1 second.
_bessel_poly and _aberth could be expanded and used for other purposes, but I left them minimal and private for now. I think the process for Legendre filters is similar?

The -3 dB normalization just evaluates the polynomial directly, and seems inaccurate at high orders. Probably breaking it up into 2nd-order polynomial sections would improve it? zpk2sos(analog=True) would do this in the future?

Should probably add a norm parameter to the bessel() function?
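As a sanity check on the numerical approach described in this PR: the delay-normalized prototype poles are the roots of the reverse Bessel polynomial, so applying even a generic companion-matrix root finder to the closed-form coefficients should reproduce what besselap computes. A sketch assuming SciPy 0.18+ for the norm keyword (the helper name here is made up, not scipy's _bessel_poly):

```python
import numpy as np
from math import factorial
from scipy import signal

def reverse_bessel_poly(n):
    # Degree-n reverse Bessel polynomial coefficients, highest power first
    # (standard closed form; illustrative helper, not scipy's internal one)
    return np.array([factorial(2 * n - k) / (2 ** (n - k) * factorial(k) * factorial(n - k))
                     for k in range(n, -1, -1)])

n = 5
poles = np.roots(reverse_bessel_poly(n))        # companion-matrix root finding
_, p_ref, _ = signal.besselap(n, norm='delay')  # delay-normalized prototype
print(np.allclose(np.sort_complex(poles), np.sort_complex(p_ref)))  # True
```

For modest orders the two agree to near machine precision; the point of the dedicated root finder in this PR is that it stays accurate at the very high orders where the companion-matrix approach degrades.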