scipy.signal.fftconvolve is not commutative #9941
Is there a good argument for why it should behave in any of these particular ways?
Mathematically, convolution is a commutative operation, so it would make sense for `fftconvolve` to be commutative as well. Anecdotally, I first noticed this while working on an SDR project. I know my use case was relatively simple, but I don't see how a warning could hurt, and in my case it would have helped.
It is commutative, though:
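For instance, in the default `'full'` mode the order of the arguments doesn't matter (a minimal sketch; the array values here are arbitrary):

```python
import numpy as np
from scipy.signal import fftconvolve

# Arbitrary example inputs of different lengths
u = np.array([-1.0, 2, 3, -2, 0, 1, 2, 1])
v = np.array([2.0, 4, -1, 1])

# In 'full' mode (the default), swapping the inputs gives the same result
print(np.allclose(fftconvolve(u, v), fftconvolve(v, u)))  # True
```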
But
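In `'same'` mode the output takes the shape of the first input, so swapping differently sized inputs changes the result (a minimal sketch with arbitrary arrays):

```python
import numpy as np
from scipy.signal import fftconvolve

u = np.array([-1.0, 2, 3, -2, 0, 1, 2, 1])
v = np.array([2.0, 4, -1, 1])

a = fftconvolve(u, v, mode='same')  # shape (8,) -- matches u
b = fftconvolve(v, u, mode='same')  # shape (4,) -- matches v
print(a.shape, b.shape)
```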
This is typically used for things like image filtering, where you want the output to be the same dimensions as the input, and unshifted (if the kernel is symmetrical). So if the kernel is identity-like, the output should be unchanged/unshifted:
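A sketch of that with an identity-like (delta) kernel on an arbitrary "image":

```python
import numpy as np
from scipy.signal import fftconvolve

img = np.arange(25.0).reshape(5, 5)  # arbitrary example "image"
delta = np.zeros((3, 3))
delta[1, 1] = 1.0                    # identity-like kernel

out = fftconvolve(img, delta, mode='same')
print(np.allclose(out, img))  # True: same shape as the input, unshifted
```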
It is still commutative if the inputs are both the same shape:
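For example (sketch; both inputs are arbitrary arrays of the same length):

```python
import numpy as np
from scipy.signal import fftconvolve

x = np.array([1.0, -2, 0, 3, 1, -1, 2, 0])
y = np.array([0.5, 1, -1, 2, 0, 1, -2, 1])

# With equal-length inputs, 'same' mode no longer depends on the order
a = fftconvolve(x, y, mode='same')
b = fftconvolve(y, x, mode='same')
print(np.allclose(a, b))  # True
```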
But when the inputs are not the same shape, how would you change this so that the truncated output is commutative but also matches the shape of the first input? That doesn't seem possible to me.
I agree that one could not make it match the first input and have it be commutative. While I agree that the current definition is convenient for some filtering applications, I am more accustomed to "same" indicating an output size of `max(M, N)`, as NumPy does. Moreover, I think there is still a good argument for maintaining a more uniform interface between `numpy.convolve` and the SciPy convolution routines.
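NumPy's `max(M, N)` convention has the side effect of keeping `'same'` mode commutative even for unequal lengths; a sketch (array values are arbitrary):

```python
import numpy as np

u = np.array([-1, 2, 3, -2, 0, 1, 2, 1])
v = np.array([2, 4, -1, 1])

# numpy.convolve's 'same' output has length max(len(u), len(v)) == 8
# regardless of argument order
a = np.convolve(u, v, mode='same')
b = np.convolve(v, u, mode='same')
print(np.array_equal(a, b))  # True
```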
Just because you're used to it from some other software? Or is there some actual benefit to that method, over just matching the shape of the first input in a predictable way? I don't understand why it would be desirable to change behavior based on the input shape. Also, which dimension should we be comparing when the input is N-dimensional?
But not always, right?
Wouldn't it make more sense to post this on numpy, since their behavior is more likely to be unexpected? Scipy's behavior is consistent with Matlab's
I wouldn't be opposed to adding another mode for compatibility purposes, but I'm not sure how the ND case would be handled.
Don't those already behave consistently?
Actually, I was wrong: MATLAB does always copy the shape of the first input, but the center of the output can be offset by one relative to SciPy's.
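A sketch of the offset, reusing the arrays that appear later in this thread (the MATLAB result quoted in the comment below is shown for comparison):

```python
import numpy as np
from scipy.signal import convolve

u = np.array([-1, 2, 3, -2, 0, 1, 2, 1])
v = np.array([2, 4, -1, 1])  # even length, so the center is ambiguous

print(convolve(u, v, mode='same'))
# SciPy keeps the slice [0, 15, 5, -9, 7, 6, 9, 3] of the full output,
# while MATLAB's conv(u, v, 'same') gives [15, 5, -9, 7, 6, 9, 3, 1]
```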
Might be worthwhile to have a separate compatibility mode for that.
I wouldn't by any means call the NumPy method unpredictable, but I do see how it deviates from MATLAB, and I know many people still use MATLAB for prototyping. However, I see now that I was mistaken above.

If there is interest, I could put together a PR for a "same_numpy" mode, but it seems like that may clutter up the interface without a significant benefit. We might also consider adding a note to the SciPy documentation explicitly calling out that "same"-mode NumPy and SciPy convolutions aren't always equivalent. I would argue that since SciPy is built on top of NumPy, it makes more sense to have such a note in SciPy than in NumPy.

I think you have found an interesting bug in MATLAB/SciPy "same" compatibility that likely should be resolved (the inclusion of a new mode seems reasonable to me).
Documentation is definitely a good idea. I'd probably be supportive of other compatibility modes, too. It's always nice to be able to just use something and trust that someone else already figured out the corner cases and it will just work like it's supposed to. Not sure how to name them, though.
The difference only appears when the size of the second argument is even; otherwise, MATLAB and SciPy give the same result. In the even case the center is ambiguous, but it's not in the odd case. I think this should definitely be in the documentation, as this is the convolution routine most people porting from MATLAB use, since it is the only one that offers the 'same' mode.

When I read the above comment, I was a little scared because I had ported a lot of code from MATLAB a long time ago, but fortunately all of my kernels are odd sized. Assuming the difference is not considered a bug, I don't think a compatibility mode is necessary. If you need the same output, MATLAB compatibility can easily be achieved by post-padding your even filter dimensions with zeros, like so:

```python
import numpy as np
from scipy.signal import convolve as conv

def pad_to_odd(kernel):
    # Append one zero to every even-length dimension
    return np.pad(kernel,
                  [(0, 1) if dim % 2 == 0 else (0, 0) for dim in kernel.shape],
                  mode='constant')

u = np.array([-1, 2, 3, -2, 0, 1, 2, 1])
v = np.array([2, 4, -1, 1])
w = conv(u, pad_to_odd(v), mode='same')
```

```
>>> w
array([15,  5, -9,  7,  6,  9,  3,  1])
```

which is what MATLAB gives:

```matlab
u = [-1 2 3 -2 0 1 2 1];
v = [2 4 -1 1];
w = conv2(u, v, 'same')
```

```
w =
    15     5    -9     7     6     9     3     1
```
But it's even easier to just use a compatibility mode that handles it for you; then you don't have to worry about making mistakes in your own implementation.
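No such mode exists in SciPy today, but a user-side wrapper along the lines of the zero-padding trick above could look like this (a hypothetical helper, not part of SciPy; it assumes the discrepancy only arises for even-length kernel dimensions, as noted earlier in the thread):

```python
import numpy as np
from scipy.signal import convolve

def conv_same_matlab(in1, in2):
    """Hypothetical helper (not a SciPy API): MATLAB-style 'same' convolution.

    Post-pads every even-length dimension of in2 with one zero so the
    center of the kernel is unambiguous; SciPy then matches MATLAB.
    """
    in2 = np.asarray(in2)
    pad = [(0, 1) if n % 2 == 0 else (0, 0) for n in in2.shape]
    return convolve(np.asarray(in1), np.pad(in2, pad, mode='constant'),
                    mode='same')

u = [-1, 2, 3, -2, 0, 1, 2, 1]
v = [2, 4, -1, 1]
print(conv_same_matlab(u, v))  # [15  5 -9  7  6  9  3  1], as MATLAB gives
```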
My vote would be just to document the difference rather than adding a new mode.
+1. There is already a lot of confusion about this even among MATLAB's own functions.
scipy.signal.fftconvolve is not commutative in 'same' mode with input arrays of different lengths. It mentions this in the documentation (see below), but it seems like swapping the input arrays, warning the user, or raising an error would also be appropriate.
scipy.signal.fftconvolve documentation for `mode='same'`: "The output is the same size as in1, centered with respect to the 'full' output."
Reproducing code example:
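A minimal example of the asymmetry (the input sizes here are arbitrary):

```python
import numpy as np
from scipy.signal import fftconvolve

sig = np.ones(10)     # arbitrary "signal"
win = np.ones(4) / 4  # arbitrary shorter "window"

a = fftconvolve(sig, win, mode='same')  # shape (10,) -- matches sig
b = fftconvolve(win, sig, mode='same')  # shape (4,)  -- matches win
print(a.shape, b.shape)  # the two argument orders disagree
```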
Error message:
Scipy/Numpy/Python version information: