signal.correlate2d apparently uses a brute-force, element-by-element direct convolution, whose computational cost scales with the square of the number of image pixels. I found that it does not complete even after many days when applied to a 4096x4096 image to obtain the 2D autocorrelation:
r = signal.correlate2d(image, image, mode='full', boundary='wrap')
But the autocorrelation can be obtained in less than a minute using np.fft.fft2 and np.fft.ifft2. I suggest that (1) the doc page for correlate2d() describe the method currently used and point out its unsuitability for large images, and (2) if possible in the future, either add an option to use an FFT approach for larger images or point the user to other scipy routines that can handle the task more efficiently.
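The FFT route alluded to here is the Wiener-Khinchin relation: for periodic ("wrap") boundaries, the circular autocorrelation is the inverse FFT of the power spectrum. A minimal sketch of that approach (the 64x64 test size is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))

# Wiener-Khinchin: circular autocorrelation = ifft2 of the power spectrum.
# Zero lag lands at index [0, 0]; use np.fft.fftshift to centre it if desired.
acf = np.fft.ifft2(np.abs(np.fft.fft2(image)) ** 2).real
```

This costs O(N^2 log N) for an NxN image instead of O(N^4), which is why the 4096x4096 case finishes in seconds rather than days.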
can be obtained in less than a minute using np.fft.fft2 and np.fft.ifft2
Or using signal.correlate with method='fft'.
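For the default zero-padded ('fill') boundary, signal.correlate with method='fft' already produces the same result as correlate2d, just much faster. A small sketch (array size chosen for illustration):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
image = rng.random((32, 32))

# Direct 2-D correlation (default boundary='fill', i.e. zero padding)
slow = signal.correlate2d(image, image, mode='full')

# N-D correlate with the FFT method: same zero-padded result, far cheaper
fast = signal.correlate(image, image, mode='full', method='fft')
```

Note that signal.correlate has no boundary parameter, which is exactly why the wrap case still needs correlate2d (or manual padding) today.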
correlate2d and convolve2d should be sped up to use the FFT method, too, with a method = {direct, fft, auto} parameter.
I thought there was an issue for this already, but I guess this is it. (Issue #13857 can be for adding boundary to convolve/correlate and this issue can be for adding method to correlate2d/convolve2d)
Basically it would need to pad the input arrays with one cycle of whatever the boundary condition is. So:
boundary='wrap': no padding needed, just a circular FFT convolution
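That equivalence can be sketched directly: zero-pad the kernel to the image shape, multiply spectra, and roll the result so the kernel is centred, matching convolve2d's mode='same' output. The roll offsets below are my own derivation for this alignment, not an existing scipy API:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.random((8, 8))
k = rng.random((3, 3))

# Direct wrapped convolution (what convolve2d does today, element by element)
ref = signal.convolve2d(x, k, mode='same', boundary='wrap')

# Circular FFT convolution: pad the kernel with zeros to the image shape
z = np.fft.ifft2(np.fft.fft2(x) * np.fft.fft2(k, s=x.shape)).real

# Roll so the kernel origin sits at its centre, matching mode='same'
shift = (-((k.shape[0] - 1) // 2), -((k.shape[1] - 1) // 2))
out = np.roll(z, shift, axis=(0, 1))
```

The other boundary modes ('fill', 'symm') would need the one-cycle padding step first, then the same circular FFT convolution on the padded array, followed by cropping.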