IIR filter design should not transform to tf representation internally #2443
Looks to me like most functions in filter_design take only
I mean that only accepting tf input is probably a bad design choice, since tf introduces numerical errors, so these should be using something more stable like zpk or ss internally. These are for Matlab, but I think the issues are the same here.
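As a quick illustration of the tf round-off problem, here's a sketch comparing the two representations of the same high-order design. Note this assumes a modern SciPy where `output='zpk'` genuinely stays in zpk form throughout; at the time this issue was filed, it was still routed through tf internally:

```python
import numpy as np
from scipy import signal

# Design the same high-order digital Butterworth filter in both forms.
z, p, k = signal.butter(20, 0.1, output='zpk')
b, a = signal.butter(20, 0.1, output='ba')

# Poles straight from the zpk design are all inside the unit circle.
print(np.max(np.abs(p)))
# Poles recovered from the tf polynomial drift because of round-off,
# and for sharp high-order filters they can end up outside it.
print(np.max(np.abs(np.roots(a))))
```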
So there are two things here:
Added a defect and enhancement label.
Hi guys - I found this thread after discussing a similar issue in #2980. There is a function in scipy.signal, cont2discrete, which looks like it could be used instead of writing a new version of bilinear.

As far as the cutoff transformation functions go, I need to check into this more, but I'm not sure if they need to be rewritten. In the specific example of #2980, based on my investigation of it, the filter only becomes unstable after the bilinear transform. The analog 'ba' coefficients going into and out of lp2lp are stable. The digital 'ba' coeffs are unstable, after the transformation.

@endolith, do you know if analog transfer function coefficients are subject to the same stability issues as discrete filter coefficients? My intuition says they may not be, because the stable region of the s plane is the entire left half plane, rather than just the inside of the unit circle on the z plane, but I could be totally off base there. If we don't run into the same issues, we could just transform the analog 'ba' coeffs after cutoff transformation to state space or zpk and then use cont2discrete for the final transformation to discrete. Otherwise there would be more work to do and some decisions to make, as you mentioned.
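For reference, here's roughly what that would look like: discretize analog 'ba' coefficients with `cont2discrete` instead of a hand-rolled bilinear step. The sample rate and filter here are placeholders of my choosing:

```python
import numpy as np
from scipy import signal

# Analog Butterworth lowpass, cutoff 1 rad/s, in 'ba' form.
b, a = signal.butter(4, 1.0, analog=True)

fs = 10.0  # placeholder sample rate
# Discretize via the bilinear (Tustin) method; dt is the sample period.
bz, az, dt = signal.cont2discrete((b, a), 1.0 / fs, method='bilinear')

# The resulting digital filter should have all poles inside the unit circle.
print(np.all(np.abs(np.roots(az)) < 1))
```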
This is about numerical stability, meaning that a calculation which should ideally produce the same output as its input introduces round-off error instead. "The left half of the S plane" is dealing with filter stability (I think BIBO stability is the right link), which is an independent issue, though numerical instability can corrupt stable filters into unstable filters.
Yeah, I worded that poorly. Basically I was using filter stability as a proxy for acceptable numerical stability. Let me rephrase what I was trying to say. Does numerical instability affect the performance of a filter in the s-domain as much as it does in the z-domain? That is, for the same absolute error in coefficient values, will the effect on filter performance be different for an analog vs. a digital filter? My thought was that it might be different, since the s-plane and z-plane are slightly different in the way they represent complex numbers: in the z domain our coeffs multiply e^st, while in the s plane they multiply s directly. And my further thought off that was: if we can accept more numerical error in the s domain, the analog prototypes in iirdesign might be OK, and only the transformation to a discrete filter would need to be changed. I still might not be right about that, but hopefully that conveys my idea a little better.
The filter doesn't become unstable in the sense that it's no longer BIBO stable, but rather the numerical representation of the filter becomes wrong as a result of accumulated numerical errors during transformation. I don't believe you can use BIBO stability of the output of the bilinear transform as a proxy for the numerical accuracy of the transform.

BIBO stability just tells us that the filter won't blow up over time: http://en.wikipedia.org/wiki/BIBO_stability In order to be BIBO stable, the filter needs to have the ROC include the unit circle, and thus all the poles need to be inside the unit circle: http://en.wikipedia.org/wiki/BIBO_stability#Discrete-time_signals

The numerical accuracy of the transform has to do with how precisely the discrete-time outputs represent the continuous-time inputs. So long as the numerical problems don't cause the poles to migrate outside of the unit circle, things are fine. From what I can gather, the problem isn't that the poles are moving outward but rather that they're moving inward, or perhaps from inside to further inside, due to numerical problems.

Matlab has a way to visualize pole/zero plots: http://www.mathworks.com/help/signal/ref/zplane.html Here's someone who has created a way to look at the zplane plot of filters: http://scipy-central.org/item/52/0/zplane-function

I am going to do some digging into the filters I was playing around with in #2980 and plot them to see if that makes it easier to see what's going on. My guess is that there will be substantial differences between the pole-zero plots of the filters which seemed well-behaved and those that seemed broken. I'll post some pictures once I have them.
I have a Z plane plotter here too: https://gist.github.com/endolith/4625838 The numerical problems lead to the zeros being "exploded" into a circle around -1 rather than being exactly on -1.
@MikeSandford, I'm responding to your other comment in #2980 here as well, to keep this in one conversation. I agree that the example works if you move the cutoff frequency. However, the original filter is definitely BIBO unstable, and this is caused by numerical instability in the calculation of the coefficients. Check the poles of the filter, i.e. the roots of the polynomial defined by the a coefficients: several of them have a magnitude greater than 1. You can also see the instability in the time domain output, which is blowing up.

In the case of increased cutoff frequency, the numerical instability isn't enough to cause the filter to become unstable, or to degrade its performance that much. The narrower the bandwidth of the filter, the more sensitive it is to coefficient values. I think the problem is that the roots are moving outside the unit circle due to numerical instability.

You can visualize the poles of your filter with plot(real(roots(a)), imag(roots(a))), and abs(roots(a)) will tell you if any are outside the unit circle. Here are my computed roots of the original filter in #2980, and for cutoff frequency = 0.4. Do you get the same result?
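In NumPy terms, that pole check is just a couple of lines. The filter used here is an arbitrary well-behaved example of mine, purely to show the mechanics:

```python
import numpy as np
from scipy import signal

# A digital filter is BIBO-stable iff every pole -- every root of the
# 'a' polynomial -- lies strictly inside the unit circle.
b, a = signal.butter(4, 0.2)   # arbitrary well-behaved example filter
poles = np.roots(a)

print(np.abs(poles))              # pole magnitudes
print(np.any(np.abs(poles) > 1))  # True would mean BIBO-unstable
```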
It's not the domain, it's the filter representation. zpk works better than tf, whether analog or digital. When you convert zpk or ss into tf representation, it requires some of the coefficients to become very small and lose precision. The way to fix this is to not convert to tf representation unless the user specifically asks for it. |
Ok, point taken. It's definitely better to never go to TF form if not asked to.
Okay, so I've run some examples here similar to those in #2980.

First, the original (or close to it) lowpass filter: sample rate 20, passband to 0.1, stopband at 0.12. It's real bad. Next, stopband at 0.15: a bit better. It's still attenuating heavily, but it is producing some kind of output that has the right shape. Then, moving the stopband out to 0.25, it's pretty well behaved.

Now let's move the passband up to 0.30 and the stopband to 0.45, and it's only OK; still has attenuation issues. But give it some more room by moving the stopband to 0.58, and suddenly things are cool.

Okay, so now let's get a lot closer to Nyquist, as these are all pretty extreme examples of lowpass filters. Here we've set the passband up to 1.5 and the stopband to 1.8. With a sample rate of 20, Nyquist is at 10, and we're filtering out the highest frequency quite effectively. Now to break it again: move the stopband from 1.8 down to 1.65 and thar she blows.

I think the reason I'm posting all these examples is that many of these filters aren't all that crazy. Yeah, some of them are a bit sharp, but there are plenty of situations where that kind of thing is needed. From what I can tell, the problems seem to be related to how fast you need to move from passband to stopband AND how close your transition region is to the Nyquist rate. Moving the transition zone closer to the Nyquist rate seems to allow for a steeper transition without breaking things.
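For anyone wanting to reproduce the first ("real bad") case, a sketch with `iirdesign`. The 1 dB / 40 dB ripple and attenuation figures are my guesses; the original posts didn't state them:

```python
import numpy as np
from scipy import signal

fs = 20.0
nyq = fs / 2.0
# Passband edge 0.1 Hz, stopband edge 0.12 Hz, normalized to Nyquist.
# gpass/gstop are assumed values, not taken from the original posts.
b, a = signal.iirdesign(0.1 / nyq, 0.12 / nyq, gpass=1, gstop=40)

# Check whether the computed coefficients describe an unstable filter.
print(np.any(np.abs(np.roots(a)) > 1))
```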
@PMcPherson I think you're right about the original example. The plots make it abundantly clear on that point. The numerical instability of something along the way causes the BIBO instability of the filter. I think some of the examples that I've posted show that as well. I would tend to disagree with your statement about narrower filter bandwidths causing the instability though. I made a filter that had relatively wide bandwidth (compared to the others) but gave it a small transition region and it still had problems. I think that when you have a really steep roll-off relative to your distance from the Nyquist frequency is what pushes things from works OK to definitely broken. |
@MikeSandford, you're right, it's not only filter bandwidth. Higher filter order makes the filter more sensitive to numerical error too. Narrow filter bandwidth (for a low pass) means you are far from the Nyquist frequency, and a small transition area corresponds to a high-order filter. A filter performing poorly due to numerical error will get better as either of those is relaxed. I think we are starting to say the same thing.
In practice, you can usually get around these problems by implementing your filter as multiple second order filters in series with each other, instead of one single stage high order filter. However, in scipy you can't, because it internally converts to single stage transfer function form. So even if you request zpk as the output form, in the hope of computing a second order sections form, the damage has already been done. That's what this issue wants to resolve. |
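This cascade-of-biquads approach is exactly what later SOS support in SciPy addressed; with SciPy ≥ 0.16 the second-order-sections form can be kept end to end, so the hard order-20 design from earlier stays usable:

```python
import numpy as np
from scipy import signal

# Second-order sections sidestep the tf round-off problem: the
# high-order filter is factored into cascaded biquads.
sos = signal.butter(20, 0.1, output='sos')   # needs SciPy >= 0.16
x = np.random.randn(1000)
y = signal.sosfilt(sos, x)

print(np.all(np.isfinite(y)))   # stable: the output stays bounded
```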
@PMcPherson For my purposes switching from an IIR filter to an FIR filter solved the problem and I don't mind the time delay. Having stable phase delay is nice too. But there were a few days of utter frustration because I couldn't figure out why the IIR filters weren't behaving like they were supposed to. I nearly lost my mind (only stretching a bit). I'm chiming in here mostly to make sure that people know it's actually a real problem. I'm not sure I can contribute much to the solution but I think I'm able to help point out specifically some of the symptoms. |
So I'm thinking
Internally it would convert to whichever is stablest, do the transformation, and then convert back to the same format for output? Then you could work with the filter coefficients as a single (list of coefficients) object:
These are public functions, though, so it would have to break backwards compatibility to change in this way? ... Actually, it looks like Also things like
Oh, you're absolutely right about this. |
Here's an idea to preserve backwards compatibility of the transform functions. They could be rewritten as
Actually cont2discrete won't work either, since it uses ss2zpk, and ss2zpk converts to tf representation internally :) so I made a zpkbilinear function instead |
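For the record, a zpk-domain bilinear transform is only a few lines. Here's a sketch of the idea; the function name and comments are mine, and modern SciPy ships an equivalent as `signal.bilinear_zpk`:

```python
import numpy as np

def zpk_bilinear(z, p, k, fs):
    """Bilinear transform applied directly to zeros, poles, and gain,
    never forming polynomial coefficients (a sketch of the approach,
    not the exact scipy implementation)."""
    z = np.atleast_1d(z)
    p = np.atleast_1d(p)
    fs2 = 2.0 * fs
    # Map each analog root s to a digital root via z = (fs2 + s)/(fs2 - s).
    zd = (fs2 + z) / (fs2 - z)
    pd = (fs2 + p) / (fs2 - p)
    # Zeros at analog infinity map to the Nyquist point z = -1.
    zd = np.append(zd, -np.ones(len(p) - len(z)))
    # Compensate the gain for the change of variables.
    kd = k * np.real(np.prod(fs2 - z) / np.prod(fs2 - p))
    return zd, pd, kd
```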
Awesome! Nice work. |
Given that the current and previous releases of

I haven't thought too hard about how difficult it will be to make a generalizable stability check. If adding such a check is a no-go, it might be worth putting a warning into the docstring that users should verify their filter's numerical stability, even when using something like
Re: stability warning. The previous versions of Scipy seem to raise |
Actually, let's close this issue as a defect and open a separate one for accepting other input types in all
@endolith @Eric89GXL maybe you want to open yet another issue for the SOS enhancement? |
You mean something like #2444? :) |
No, it would be about the function parameters, like how lp2lp takes only b and a as arguments, but should accept zpk format somehow. I made the replacements private functions because I didn't know how to change the interface. For example, signal.step takes a system as the first argument instead of having separate b and a arguments, so
Oh wait he already made it in #3259 |
@Eric89GXL indeed, I forgot that that issue for SOS already existed. With the new one I opened for the |
The `iirfilter()` function internally generates filter prototypes in zpk format, transforms them to tf format, and then back to zpk format. But conversion to tf format introduces numerical errors, causing higher-order filters to fail, even though the higher-order prototypes work fine. It should use zpk format (or state-space format?) throughout, and these functions should be changed to accept zpk format or be replaced by ones that do.

I started to implement this, but then realized it involves lots of changes to lots of functions, some of which I'm not sure the best way to do, so I'm registering this issue in case others want to work on it, too.

For instance, Matlab's bilinear function accepts tf, zpk, and ss formats, by varying the number of input parameters, which I don't think is "Pythonic" (though `lti()` does it). SciPy's bilinear function accepts `b` and `a` as separate parameters, and then `fs`, so it can't be modified to use a variable number of input parameters anyway without breaking compatibility. A new function could be created that takes zpk, or the existing function could take a list as its first parameter instead, which could be zpk, tf, or ss, or something else.

For comparison, Octave uses sftrans instead of lp2lp, and only accepts zpk format.
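A minimal demonstration of the round-trip damage described above (the order-20 prototype is an arbitrary choice of mine):

```python
import numpy as np
from scipy import signal

# Round-trip an order-20 analog Butterworth prototype through tf form.
z, p, k = signal.buttap(20)           # prototype in zpk form
b, a = signal.zpk2tf(z, p, k)         # zpk -> polynomial coefficients
z2, p2, k2 = signal.tf2zpk(b, a)      # polynomial coefficients -> zpk

# The recovered poles no longer match the originals exactly.
err = np.max(np.abs(np.sort_complex(p) - np.sort_complex(p2)))
print(err)
```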