
ENH: Bessel filters with different normalizations, high order #5279

Merged: 7 commits merged into scipy:master on Mar 25, 2016

Conversation

endolith
Member

For #3763 (comment)

Instead of a list of hardcoded pole locations, the Bessel filter prototype is now calculated numerically, using root-finding methods, and provides the possibility of 3 different normalizations, which are found in different sources. Now it's possible to generate (completely impractical) 500th-order Bessel filters in <1 second.

[figure: pole locations of a 500th-order Bessel filter]

_bessel_poly and _aberth could be expanded and used for other purposes, but I left them minimal and private for now. I think the process for Legendre filters is similar?

The -3 dB normalization just evaluates the polynomial directly, and seems inaccurate at high orders. Probably breaking it up into 2nd-order polynomial sections would improve it? zpk2sos(analog=True) would do this in the future?

Should probably add a norm parameter to the bessel() function?
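For context, the reverse Bessel polynomial that defines the prototype has a closed-form integer coefficient formula, so the generator can be sketched along these lines (the helper name below is my own, not necessarily how `_bessel_poly` is written):

```python
from math import factorial

def reverse_bessel_poly(n):
    """Coefficients of the reverse Bessel polynomial theta_n(s),
    highest power first, as exact integers."""
    # a_k = (2n - k)! / (2**(n - k) * k! * (n - k)!)
    return [factorial(2*n - k) // (2**(n - k) * factorial(k) * factorial(n - k))
            for k in range(n, -1, -1)]

print(reverse_bessel_poly(3))  # [1, 6, 15, 15], i.e. s**3 + 6*s**2 + 15*s + 15
```

Because the coefficients are exact integers, the polynomial can be built at arbitrary order without rounding; the roots of this polynomial are then the (delay-normalized) filter poles.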

@ev-br
Member

ev-br commented Sep 23, 2015

Test failures: might want to import xrange from scipy._lib.six for python 2/3 compat.

@endolith
Member Author

Ok I don't understand the remaining errors.

AssertionError:
Arrays are not equal
(mismatch 50.0%)
x: array([ 1., 1.])
y: array([1, 1])

Are those floats not exactly 1? (They are on my machine)

and then in NumPy 1.6.2 it isn't possible to evaluate a polynomial made of long ints?

TypeError: unsupported operand type(s) for +: 'long' and 'numpy.complex128'

@endolith
Member Author

I tried to use Anaconda to set up a Python 2.6 environment, but the tests all passed in that:
python 2.6.9
numpy 1.9.2
nose 1.3.7

@rgommers
Member

@endolith for me assert_equal passes as well, but for floating point numbers it always depends on the system config (OS/compilers/etc). I think if you just change the comparison to np.testing.assert_allclose(..., rtol=1e-15) it'll work and be fine. As a general rule, never compare floating point numbers for exact equality.
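A minimal illustration of the suggestion (this says nothing about the failing test itself, it just shows why exact float comparison is fragile):

```python
import numpy as np
from numpy.testing import assert_allclose

# 0.1 + 0.2 is not exactly 0.3 in binary floating point, so exact
# comparison fails even though the values agree to within ~1 ulp.
a = 0.1 + 0.2
b = 0.3
print(a == b)                      # False
assert_allclose(a, b, rtol=1e-15)  # passes: relative error is tiny
print("allclose ok")
```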

@rgommers
Member

I've tested this a bit and the speed penalty is about 5x-30x for reasonable filter sizes. Given that it takes on the order of 50 us to 10 ms to construct one with this PR, that's not an issue imho. You could have kept the hardcoded numbers and added the new method only for orders for which no hardcoded numbers are available, but this works and is ultimately clearer, so keep it as is I'd say.

@rgommers rgommers added enhancement A new feature or improvement scipy.signal labels Oct 10, 2015

x += alpha / (1 + alpha * beta)

assert all(np.isfinite(x)), 'Root-finding calculation failed'
Member

This will be stripped from the code when run with python -O; please don't use plain assert. I suggest using

if not all(np.isfinite(x)):
    raise RuntimeError('Root-finding calculation failed')

@rgommers
Member

I suspect that a full implementation of Bini's algorithm may be of wider interest, not sure where though. @charris maybe for numpy.polynomial (see https://en.wikipedia.org/wiki/Aberth_method)?
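For readers who haven't seen it, the Aberth (Ehrlich-Aberth) method refines all root estimates simultaneously, with a mutual "repulsion" term that keeps estimates from converging to the same root. A minimal sketch (illustrative only — not the PR's `_aberth`, and the circular initialization is a simple assumption):

```python
import numpy as np

def aberth_roots(coeffs, tol=1e-14, maxiter=100):
    """Find all roots of a polynomial (coefficients highest power first)."""
    coeffs = np.asarray(coeffs, dtype=complex)
    deriv = np.polyder(coeffs)
    n = len(coeffs) - 1
    # Start from points on a circle, rotated slightly to break symmetry.
    angles = 2 * np.pi * np.arange(n) / n + 0.4
    z = np.exp(1j * angles)
    for _ in range(maxiter):
        ratio = np.polyval(coeffs, z) / np.polyval(deriv, z)  # Newton step
        # Repulsion term: sum over j != k of 1 / (z_k - z_j)
        diff = z[:, None] - z[None, :]
        np.fill_diagonal(diff, np.inf)
        repulsion = np.sum(1.0 / diff, axis=1)
        step = ratio / (1.0 - ratio * repulsion)
        z -= step
        if np.all(np.abs(step) < tol):
            break
    return z

# Roots of x**3 - 6x**2 + 11x - 6 = (x-1)(x-2)(x-3)
roots = np.sort(aberth_roots([1, -6, 11, -6]).real)
print(np.round(roots, 10))
```

The update `step = ratio / (1 - ratio * repulsion)` is the same shape as the `x += alpha / (1 + alpha * beta)` line in the diff above, modulo sign conventions.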

@charris
Member

charris commented Oct 10, 2015

@rgommers I'd certainly be open to an algorithm that could improve on the companion matrix approach, which I suspect is not at its best for the polynomial case. If nothing else, having an implementation would make it available for others to borrow.

@endolith
Member Author

I think if you just change the comparison to np.testing.assert_allclose(..., rtol=1e-15) it'll work and be fine. As a general rule, never compare floating point numbers for exact equality.

Well, the coefficients should be exactly 1 for a passthrough, and that can be exactly represented in floating point, so is it failing because it's producing a number very close to 1, or because it just doesn't consider them equal?

@endolith
Member Author

This will be stripped from the code when run with python -O, please don't use plain assert

Well, assert is for catching programming errors, and those checks are stripped out for speed, right? I'm not sure which tests should use assert and which shouldn't.

@endolith
Member Author

I suspect that a full implementation of Bini's algorithm may be of wider interest

Yeah, I would work on a full implementation, but that's more work and tests, and I wanted to get this working alone first. Also there was a Fortran implementation that I couldn't figure out how to get running. Might it be better to f2py that and then modify this to use it?

Bini's Fortran 77 implementation is here: http://www.netlib.org/numeralgo/na10

and a Fortran 90 translation, called pzeros.f90, is here: http://jblevins.org/mirror/amiller/#pzeros

@endolith
Member Author

You could have kept the hardcoded numbers and added the new method only for orders for which no hardcoded numbers are available

The hardcoded numbers are for the phase normalization, though, and the iterative approach produces the delay normalization, then converts that to the other 2 types if necessary.

@rgommers
Member

Yeah I would work on full implementation but that's more work and tests and I wanted to get this working alone first. Also there was a Fortran implementation that I couldn't figure out how to get running. Might be better to F2py that and then modify this to use it?

Yeah, no worries. It's good to start with a private implementation first and later move it to a public place.

f2py should be the way to go, but the F77 source looks like it needs some cleanup. We can't use F90 code.

@rgommers
Member

This will be stripped from the code when run with python -O, please don't use plain assert

Well assert is for catching programming errors which are stripped out for speed, right? I'm not sure which tests should use assert and which shouldn't.

The rule is simple in Scipy land: never any plain asserts anywhere.
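To see why: under python -O, `__debug__` is False and `assert` statements are compiled out entirely. A small self-contained demonstration:

```python
import subprocess
import sys

code = "assert False, 'bug!'; print('survived')"

# Normal run: the assert fires and the process exits with an error.
normal = subprocess.run([sys.executable, "-c", code],
                        capture_output=True, text=True)
# Optimized run: -O strips the assert, so execution reaches the print.
optimized = subprocess.run([sys.executable, "-O", "-c", code],
                           capture_output=True, text=True)

print(normal.returncode != 0)    # True
print(optimized.stdout.strip())  # survived
```

So any sanity check that must hold in production code needs an explicit `raise`, as suggested in the review comment above.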

@pv pv added the needs-work Items that are pending response from the author label Nov 15, 2015
@endolith
Member Author

In [8]: numpy.__version__
Out[8]: '1.10.1'

In [9]: l = 12345678901234567890

In [10]: n = numpy.complex128(1+1j)

In [11]: l+n
Out[11]: (1.2345678901234569e+19+1j)
In [4]: numpy.__version__
Out[4]: '1.6.2'

In [5]: l = 12345678901234567890

In [6]: n = numpy.complex128(1+1j)

In [7]: l+n
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-7-44a1ceccc27e> in <module>()
----> 1 l+n

TypeError: unsupported operand type(s) for +: 'long' and 'numpy.complex128'

This happens in the magnitude normalization stage, which should really be broken up into analog SOS anyway (but that hasn't been implemented yet):

def G(w):
    """
    Gain of filter
    """
    return abs(a[-1]/npp_polyval(1j*w, a[::-1]))
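As an aside, the normalization factor that a gain function like this feeds into can be found with a scalar root-finder. A self-contained sketch (the coefficients and bracketing interval below are illustrative, not the PR's internals):

```python
import numpy as np
from scipy.optimize import brentq

# 3rd-order reverse Bessel polynomial, highest power first (delay-normalized).
a = np.array([1.0, 6.0, 15.0, 15.0])

def gain(w):
    """Gain of the all-pole filter H(s) = a[-1] / poly(s) evaluated at s = jw."""
    return abs(a[-1] / np.polyval(a, 1j * w))

# The -3.01 dB point is where the gain falls to 1/sqrt(2).
w_3db = brentq(lambda w: gain(w) - 1 / np.sqrt(2), 0.1, 10)
print(round(w_3db, 4))  # roughly 1.7557 for this 3rd-order filter
```

Evaluating the full polynomial this way is exactly where high-order accuracy suffers, which is the motivation for breaking it into second-order sections.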

@endolith
Member Author

Converting the polynomial to float doesn't lose accuracy in my tests, but converting the numerator to float loses some accuracy. Before:

_norm_factor(_bessel_poly(148, True))
Out[75]: 14.291128556394179

After:

_norm_factor(_bessel_poly(148, True))
Out[79]: 14.291128556270062

But that way it works in numpy 1.6.2 and still passes the tests.
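The accuracy loss has a simple explanation: float64 carries 53 bits of mantissa, so the exact integer coefficients are rounded once they exceed 2**53. A quick check, using the long integer from the earlier traceback:

```python
big = 12345678901234567890  # the long integer from the traceback above

# Python compares int and float exactly, so this detects rounding directly.
print(float(big) == big)              # False: rounded on conversion to float64
print(float(2**53) == 2**53)          # True: still exactly representable
print(float(2**53 + 1) == 2**53 + 1)  # False: first integer float64 cannot hold
```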

@rgommers
Member

@endolith is this ready now that it passes the tests, or is there more to do?

@endolith
Member Author

The besselap function is ready, but I was going to add a norm parameter to bessel also. At the end for backwards compatibility?

@rgommers
Member

The besselap function is ready, but I was going to add a norm parameter to bessel also. At the end for backwards compatibility?

Indeed, at the end.

@rgommers
Member

I'll leave the needs-work label then - please ping when it's ready for review.

@endolith
Member Author

endolith commented Jan 1, 2016

This is too long for an example, right?

import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import besselap, freqs, zpk2tf

for norm in ('delay', 'phase', 'mag'):

    plt.figure(norm)
    plt.suptitle(norm + ' normalized')

    for N in (1, 2, 3, 4):
        z, p, k = besselap(N, norm)
        b, a = zpk2tf(z, p, k)
        w, h = freqs(b, a, np.logspace(-3, 3, 1000))
        plt.subplot(3, 1, 1)
        plt.semilogx(w, 20*np.log10(np.abs(h)))

        plt.subplot(3, 1, 2)
        plt.semilogx(w[1:], -np.diff(np.unwrap(np.angle(h)))/np.diff(w))

        plt.subplot(3, 1, 3)
        plt.semilogx(w, np.unwrap(np.angle(h)))

    plt.subplot(3, 1, 1)
    plt.title('Magnitude')
    plt.ylabel('dB')
    plt.axvline(1, color='red')
    plt.ylim(-40, 3)
    plt.grid(True, color='0.7', linestyle='-', which='major')
    plt.grid(True, color='0.9', linestyle='-', which='minor')

    plt.subplot(3, 1, 2)
    plt.title('Group delay')
    plt.ylabel('seconds')
    plt.axvline(1, color='red')
    plt.ylim(0, 3.5)
    plt.grid(True, color='0.7', linestyle='-', which='major')
    plt.grid(True, color='0.9', linestyle='-', which='minor')

    plt.subplot(3, 1, 3)
    plt.title('Phase')
    plt.ylabel('radians')
    plt.axvline(1, color='red')
    plt.ylim(-2*np.pi, 0)
    plt.grid(True, color='0.7', linestyle='-', which='major')
    plt.grid(True, color='0.9', linestyle='-', which='minor')

plt.figure('mag')
plt.subplot(3, 1, 1)
plt.axhline(-3.01, color='red', alpha=0.5)

plt.figure('delay')
plt.subplot(3, 1, 2)
plt.axhline(1, color='red', alpha=0.5)

plt.figure('phase')
plt.subplot(3, 1, 3)
plt.axhline(-np.pi, color='red', alpha=0.5)
plt.axhline(-3*np.pi/4, color='red', alpha=0.5)
plt.axhline(-np.pi/2, color='red', alpha=0.5)

It makes the 3 graphs on https://gist.github.com/endolith/3f74c4ec9ea623812cca

@rgommers
Member

rgommers commented Jan 1, 2016

That'll make 6 figures, not 3. The figures are quite nice. It's a bit long, but I think it is very helpful for users. I would suggest keeping 1 figure, because the ones for normalized phase/mag/delay are very similar. So if you choose one of those three and keep the code within the for-loop as one example, that's probably a good balance.

@endolith
Member Author

endolith commented Jan 6, 2016

So then I need to add a norm parameter to iirfilter, too, which is only used for Bessel filters?

iirfilter(N, Wn, rp=None, rs=None, btype='band', analog=False, ftype='butter', output='ba', norm='phase'):

Or, instead, create three "types" of bessel?

So iirfilter(ftype='bessel') is equivalent to iirfilter(ftype='bessel_phase'), and then there are also iirfilter(ftype='bessel_delay') and iirfilter(ftype='bessel_mag')? Or, since the ftype values are strings, it could be something like iirfilter(ftype='bessel (delay)').

@codecov-io

@@            master   #5279   diff @@
======================================
  Files          234     234       
  Stmts        43096   43129    +33
  Branches      8154    8145     -9
  Methods          0       0       
======================================
+ Hit          33410   33439    +29
- Partial       2605    2607     +2
- Missed        7081    7083     +2

Review entire Coverage Diff as of 56fac6c


Instead of a list of hardcoded pole locations, the Bessel filter
prototype is now calculated numerically, using root-finding methods, and
provides the possibility of 3 different normalizations, which are found
in different sources.
change array_equal to allclose and convert asserts to regular exceptions

Numpy 1.6.2 failure: _norm_factor tested to be the same accuracy whether
npp_polyval evaluates longs or floats up to order 148.
@endolith
Member Author

endolith commented Jan 8, 2016

Ok I added the norm= parameter, please review it.

@pv pv removed the needs-work Items that are pending response from the author label Feb 17, 2016
@larsoner
Member

Changes LGTM. @rgommers feel free to look if you have time, otherwise I'll merge in a couple of days. @endolith if I somehow forget, please remind me.

@larsoner larsoner merged commit 705b709 into scipy:master Mar 25, 2016
@larsoner
Member

Thanks @endolith

@endolith
Member Author

It currently says "The 'sos' output parameter was added in 0.16.0." Should I add "The 'norm' parameter was added in 0.18.0"?

@larsoner
Member

It would be good to have the .. versionadded:: directive in the docstring under that parameter, yeah.
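For reference, under the parameter in numpydoc style it might look something like this (the exact wording is a guess, not the merged docstring):

```rst
norm : {'phase', 'delay', 'mag'}, optional
    Critical frequency normalization.

    .. versionadded:: 0.18.0
```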

@endolith
Member Author

oh, ok. the sos parameter just has text in the notes. should I change that to .. versionadded:: 0.16.0 in the parameters section too?

@endolith
Member Author

oh I see, sos is an option, not an entire parameter

@endolith
Member Author

and then I just found http://dx.doi.org/10.1109/TCT.1965.1082473 which might be a simpler/faster method for _bessel_zeros() :)

@ev-br ev-br added this to the 0.18.0 milestone Mar 26, 2016