
Update edges.py #5552

Open · themnvrao76 wants to merge 1 commit into main
Conversation

@themnvrao76 commented Aug 31, 2021

This fixes #5491. The issue concerns the Laplacian operator for edge detection: the existing function only generated a 3x3 kernel. This PR adds support for larger kernel sizes, so the kernel can now grow up to 31x31 (odd sizes only).

Description

Checklist

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.

This is issue scikit-image#5491: the function only generated a 3x3 kernel, so this commit adds support for larger odd kernel sizes, up to 31x31, for the Laplacian edge-detection operator.
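For context, the public API touched here is skimage.filters.laplace, which already accepts a ksize argument that currently has no effect (see the linked issue). A minimal usage sketch of the behavior this PR aims for, assuming the change lands roughly as described:

import numpy as np
from skimage import filters

image = np.random.rand(64, 64)

# Today ksize is accepted but ignored (issue #5491); with this PR an odd
# ksize between 3 and 31 would select a correspondingly larger Laplacian kernel.
edges_3 = filters.laplace(image, ksize=3)
edges_5 = filters.laplace(image, ksize=5)  # intended to differ once this PR lands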
@pep8speaks

Hello @themnvrao76! Thanks for opening this PR. We checked the lines you've touched for PEP 8 issues, and found:

Line 17:42: E231 missing whitespace after ','
Line 688:17: E225 missing whitespace around operator
Line 690:17: E228 missing whitespace around modulo operator
Line 690:19: E225 missing whitespace around operator
Line 692:17: E225 missing whitespace around operator
Line 693:18: E225 missing whitespace around operator
Line 693:21: E231 missing whitespace after ','
Line 693:24: E231 missing whitespace after ','
Line 696:44: E226 missing whitespace around arithmetic operator
Line 696:47: E231 missing whitespace after ','
Line 696:50: E231 missing whitespace after ','
Line 696:53: E231 missing whitespace after ','
Line 698:44: W291 trailing whitespace
Line 700:44: E231 missing whitespace after ','
Line 704:16: E226 missing whitespace around arithmetic operator
Line 704:30: E226 missing whitespace around arithmetic operator
Line 705:1: W293 blank line contains whitespace
Line 707:6: E225 missing whitespace around operator
Line 707:46: E251 unexpected spaces around keyword / parameter equals
Line 707:48: E251 unexpected spaces around keyword / parameter equals
Line 708:6: E225 missing whitespace around operator
Line 708:18: E231 missing whitespace after ','
Line 708:20: E231 missing whitespace after ','
Line 709:1: W293 blank line contains whitespace

Comment on lines +703 to +704
a = np.zeros((ksize, ksize))
a[int(ksize/2), int(ksize/2)] = 1

This code is 2D only. For nD support it would need to be

a = np.zeros((ksize,) * image.ndim)
a[(ksize // 2,) * image.ndim] = 1

However, I don't think you need the variables a or x at all.

I think you can just apply generic_laplace directly to image. This will be more efficient than forming an n-dimensional kernel.
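For illustration, a minimal sketch of that suggestion, assuming a 1D second-derivative stencil applied per axis via scipy.ndimage.generic_laplace (the helper name d2 and the 3-point weights are placeholders, not the PR's final code):

import numpy as np
from scipy import ndimage as ndi

def d2(input, axis, output, mode, cval, weights):
    # Apply the 1D second-derivative stencil along a single axis;
    # generic_laplace sums the result over all axes of `input`.
    return ndi.correlate1d(input, weights, axis=axis, output=output,
                           mode=mode, cval=cval)

image = np.random.rand(64, 64)
weights = [1, -2, 1]  # 3-point second-derivative stencil (placeholder)
laplacian = ndi.generic_laplace(image, d2, extra_arguments=(weights,))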

@grlee77 (Contributor) commented Aug 31, 2021

Using generic_laplace for ksize=5 would basically be doing the following in 2D:

output = ndi.convolve1d(image, deriv_kernel, axis=0) + ndi.convolve1d(image, deriv_kernel, axis=1)

which should be more efficient than explicitly forming the equivalent 2D kernel and calling convolve once with that.

Note that it is also different from a chained 1D convolution like:

output = ndi.convolve1d(ndi.convolve1d(image, deriv_kernel, axis=0), deriv_kernel, axis=1)
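A quick self-contained check of that distinction (the deriv_kernel values are just an illustrative 3-point stencil, not the PR's kernels):

import numpy as np
from scipy import ndimage as ndi

image = np.random.rand(32, 32)
deriv_kernel = [1, -2, 1]  # illustrative 1D second-derivative stencil

# Sum of per-axis 1D convolutions (what a Laplacian does).
summed = (ndi.convolve1d(image, deriv_kernel, axis=0)
          + ndi.convolve1d(image, deriv_kernel, axis=1))

# Chained 1D convolutions (a separable 2D filter, i.e. a different operator).
chained = ndi.convolve1d(ndi.convolve1d(image, deriv_kernel, axis=0),
                         deriv_kernel, axis=1)

print(np.allclose(summed, chained))  # False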

Comment on lines +707 to +708
x=[generic_laplace(a, d2, extra_arguments = (k_size(ksize),))]
x=np.repeat(x,3,0)

What was the purpose of this repeat by 3 along the first axis? Why do we want a kernel of size (3, ksize, ksize)?

@grlee77 (Contributor) commented Aug 31, 2021

This is a good start, but will need a bit of revision before it is ready to merge. I think what OpenCV's Laplacian does is substantially different than this and we need to decide the approach we want to use.

Thanks for taking this on @themnvrao76!

@grlee77 (Contributor) commented Aug 31, 2021

Actually, on closer inspection of OpenCV's cv2.Laplacian function, the kernels look like this:

import cv2
import numpy as np

for ksize in [1, 3, 5, 7, 9]:
    if ksize == 1:
        # Special case in OpenCV (ksize=1 gives a different 3x3 kernel)
        a = np.zeros((3, 3), dtype=np.uint8)
        a[1, 1] = 1
    else:
        a = np.zeros((ksize, ksize), dtype=np.uint8)
        a[ksize // 2, ksize // 2] = 1
    kernel = cv2.Laplacian(a, cv2.CV_16S, ksize=ksize, borderType=cv2.BORDER_CONSTANT)
    print(f"kernel (ksize={ksize})=\n{kernel}\n")

gives

kernel (ksize=1)=
[[ 0  1  0]
 [ 1 -4  1]
 [ 0  1  0]]

kernel (ksize=3)=
[[ 2  0  2]
 [ 0 -8  0]
 [ 2  0  2]]

kernel (ksize=5)=
[[  2   4   4   4   2]
 [  4   0  -8   0   4]
 [  4  -8 -24  -8   4]
 [  4   0  -8   0   4]
 [  2   4   4   4   2]]

kernel (ksize=7)=
[[   2    8   14   16   14    8    2]
 [   8   24   24   16   24   24    8]
 [  14   24  -30  -80  -30   24   14]
 [  16   16  -80 -160  -80   16   16]
 [  14   24  -30  -80  -30   24   14]
 [   8   24   24   16   24   24    8]
 [   2    8   14   16   14    8    2]]

For all cases except the special case of ksize=1, this turns out to be equivalent to:

1.) getting separable 0th and 2nd order difference kernels for a given ksize

ksize = 5
deriv0, deriv2 = cv2.getDerivKernels(0, 2, ksize)
deriv0 = np.squeeze(deriv0)
deriv2 = np.squeeze(deriv2)
print(f"0th derivative, ksize={ksize} : {deriv0}")
print(f"2nd derivative, ksize={ksize} : {deriv2}")

which prints:

0th derivative, ksize=5 : [1. 4. 6. 4. 1.]
2nd derivative, ksize=5 : [ 1.  0. -2.  0.  1.]

and then
2.) applying these as follows:

from scipy import ndimage as ndi  # in addition to the imports above

a = np.zeros((ksize, ksize))
a[ksize // 2, ksize // 2] = 1
# smooth along axis 0, second derivative along axis 1
out = ndi.convolve1d(ndi.convolve1d(a, deriv0, axis=0), deriv2, axis=1)
# smooth along axis 1, second derivative along axis 0
out += ndi.convolve1d(ndi.convolve1d(a, deriv2, axis=0), deriv0, axis=1)

Here out will be the same as the ksize=5 kernel generated by calling cv2.Laplacian directly.

The difference between cv2.Laplacian and scipy.ndimage.generic_laplace is that the SciPy function does not do any smoothing along the orthogonal axis when applying the second derivatives. We need to decide which behavior we want to replicate before we can proceed with this PR.
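To make that distinction concrete, a small sketch (reusing the ksize=5 kernels printed above; variable names are illustrative) comparing the smoothed OpenCV-style construction with the unsmoothed ndimage-style one:

import numpy as np
from scipy import ndimage as ndi

ksize = 5
deriv0 = np.array([1., 4., 6., 4., 1.])   # 0th-order (smoothing) kernel from above
deriv2 = np.array([1., 0., -2., 0., 1.])  # 2nd-derivative kernel from above

a = np.zeros((ksize, ksize))
a[ksize // 2, ksize // 2] = 1

# OpenCV-style: 2nd derivative along one axis, smoothing along the other.
opencv_style = (ndi.convolve1d(ndi.convolve1d(a, deriv0, axis=0), deriv2, axis=1)
                + ndi.convolve1d(ndi.convolve1d(a, deriv2, axis=0), deriv0, axis=1))

# ndimage-style (generic_laplace): per-axis 2nd derivatives, no smoothing.
ndimage_style = (ndi.convolve1d(a, deriv2, axis=0)
                 + ndi.convolve1d(a, deriv2, axis=1))

print(opencv_style)
print(ndimage_style)

Only the opencv_style result matches the ksize=5 kernel shown earlier.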

param=[1,-2,1]
return param
else:
return np.convolve(k_size(ksize-2),[1,-2,1])

Repeated convolution with [-1, 2, -1] was a mistake in my initial description of the issue. Each time we repeat the convolution we are taking another two derivatives! What we want is a ksize-point (central) finite difference approximation to the 2nd derivative.

There are some specific cases (and a general algorithm) listed here:
https://en.wikipedia.org/wiki/Finite_difference_coefficient#Central_finite_difference
but OpenCV seems to be using a different approximation. We don't necessarily need to match OpenCV, though.
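For reference, a minimal sketch of the general algorithm from that Wikipedia page (my own illustration, not code from the PR): solve a small Taylor-expansion system to get the central finite-difference coefficients of the 2nd derivative for an odd ksize.

import math
import numpy as np

def central_diff_coeffs(ksize, deriv=2):
    """Central finite-difference stencil of the `deriv`-th derivative on `ksize` points."""
    offsets = np.arange(ksize) - ksize // 2
    # Taylor matrix: A[i, j] = offsets[j]**i / i!, so A @ c gives the
    # derivative orders that the stencil c picks up.
    A = np.array([offsets**i / math.factorial(i) for i in range(ksize)], dtype=float)
    b = np.zeros(ksize)
    b[deriv] = 1.0  # keep only the requested derivative order
    return np.linalg.solve(A, b)

print(central_diff_coeffs(3))  # [ 1. -2.  1.]
print(central_diff_coeffs(5))  # approx. [-1/12, 4/3, -5/2, 4/3, -1/12]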

@themnvrao76 (Author)

Okay, thanks. I will do more research and change it.

@rfezzani added the 🩹 type: Bug fix (fixes unexpected or incorrect behavior) label on Oct 22, 2021
Labels
🩹 type: Bug fix (fixes unexpected or incorrect behavior)

Successfully merging this pull request may close these issues:
ksize has no effect in skimage.filters.laplace

4 participants