
3D soft_skel gives discontinuous output and different results on pytorch vs tensorflow #7

Closed
kretes opened this issue Jul 19, 2021 · 5 comments

kretes commented Jul 19, 2021

First of all - thanks for the great paper about clDice, it's a really interesting approach.

I wanted to test the idea on a 3D dataset.

I have a synthetic 3D shape on which I just run soft_skeletonize, expecting the result to remain a single connected component. Unfortunately, it doesn't. See the following summary showing the input image on the left, and two iterations of soft_skel, for both tensorflow and pytorch.

[image: input shape on the left, two soft_skel iterations for tensorflow and pytorch]

I've created a reproducible Colab notebook for this case: https://gist.github.com/kretes/84f6025e7e1ded19591a54b62abcc539
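
The connectedness check I rely on is straightforward with scipy (a minimal sketch; count_components is just an illustrative helper, not part of the repo):

    import numpy as np
    from scipy import ndimage

    def count_components(binary_volume):
        # Label with full 26-connectivity in 3D.
        structure = np.ones((3, 3, 3), dtype=int)
        _, num = ndimage.label(binary_volume, structure=structure)
        return num

    # The input is a single connected component; the skeleton should be too.
    # assert count_components(skeleton > 0.5) == 1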

jocpae (Owner) commented Jul 20, 2021

Dear kretes,
thanks for raising this issue. Please note that our soft-skeletonization is only ever an approximation, which we implemented for continuous data so that it is differentiable; please see our paper for more details. Moreover, you can choose different kernels, as we did between the pytorch and Keras implementations. Along these lines we can identify the problem in your particular case: the kernel and the object of interest are at the same resolution and perfectly orthogonal. This can be overcome by using a smaller kernel. TRADEOFF: you would need to run more iterations. In your case a fitting kernel would be the following:

    import torch
    import torch.nn.functional as F

    # Separable erosion: 2-wide min-pool along each axis, then voxel-wise minimum.
    def soft_erode(img):  # img: (N, C, D, H, W)
        p1 = -F.max_pool3d(-img, (2, 1, 1), (1, 1, 1), (1, 0, 0))[:, :, 1:, :, :]
        p2 = -F.max_pool3d(-img, (1, 2, 1), (1, 1, 1), (0, 1, 0))[:, :, :, 1:, :]
        p3 = -F.max_pool3d(-img, (1, 1, 2), (1, 1, 1), (0, 0, 1))[:, :, :, :, 1:]
        return torch.min(torch.min(p1, p2), p3)

This will lead to the following result:
[image: result with the smaller kernel]
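
For context, soft_erode plugs into the iterative skeletonization loop; roughly, following the pytorch implementation in this repository (soft_dilate assumed to be the matching max-pool):

    def soft_open(img):
        # Morphological opening: erosion followed by dilation.
        return soft_dilate(soft_erode(img))

    def soft_skel(img, iter_):
        img1 = soft_open(img)
        skel = F.relu(img - img1)  # residue of the first opening
        for _ in range(iter_):
            img = soft_erode(img)
            img1 = soft_open(img)
            delta = F.relu(img - img1)
            # accumulate new residue only where the skeleton is not yet set
            skel = skel + F.relu(delta - skel * delta)
        return skel

With the smaller 2-wide kernel each erosion removes less material, so iter_ has to be increased accordingly.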

@kretes
Copy link
Author

kretes commented Jul 21, 2021

Hello @jocpae, thanks for the quick response!

Indeed, if I apply your change to the pytorch version and also change soft_dilate to a (2, 2, 2) kernel with:

    return F.max_pool3d(img, (2, 2, 2), (1, 1, 1), (1, 1, 1))[:, :, 1:, 1:, 1:]

I get different results and no longer see a discontinuity. However, the results seem fine after one iteration but not after more (in the pytorch version they go back to the original value, whereas I would expect more iterations to stabilize at the minimal skeleton, i.e. it should never extend):
[image: results over multiple iterations, tensorflow vs pytorch]

Do you know why that is? Have you made some other changes? Looking at your images, I would say the upper row is not the expected behaviour, as it removes a large part, while the bottom row seems fine.
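
To make that expectation concrete, here is the kind of check I would expect to pass (a sketch only; soft_skel as above, img a binary torch volume):

    # The skeleton should stabilize with more iterations,
    # not grow back toward the full input volume.
    s_a = soft_skel(img, 10)
    s_b = soft_skel(img, 20)
    print(torch.abs(s_b - s_a).max())  # should approach 0 once converged
    print(s_b.sum() / img.sum())       # skeleton fraction; should stay well below 1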

I've updated the gist used here: https://gist.github.com/kretes/550ac2b58260504fb1b586fe5bab7634

@jocpae
Copy link
Owner

jocpae commented Jul 28, 2021

Please see the comment in the gist; I hope this solves the issue :)

jocpae closed this as completed Aug 2, 2021

kretes (Author) commented Sep 9, 2021

@jocpae I've updated the gist and added my response at https://gist.github.com/kretes/550ac2b58260504fb1b586fe5bab7634#gistcomment-3867141 - could you take a look?

yuxinma0908 commented
The problem here was that soft_skel does not preserve the connectedness of this synthetic shape: the result has 2 connected components, with a small gap at the "joint".

[image: skeletons over iterations, showing the gap]

If we plot the intermediate steps of the algorithm, we can see that min_pool often creates a small gap in this situation. It seems to be a very common problem.

[image: intermediate steps of the skeletonization]
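
One way to pinpoint the iteration where the gap first opens (a sketch; soft_erode/soft_open as above, and a connected-component count such as the scipy helper sketched earlier):

    # Track connectivity of the running skeleton iteration by iteration.
    skel = F.relu(img - soft_open(img))
    for i in range(10):
        img = soft_erode(img)
        delta = F.relu(img - soft_open(img))
        skel = skel + F.relu(delta - skel * delta)
        binary = (skel > 0.5).squeeze().cpu().numpy()
        print(i, count_components(binary))  # component count per iteration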

@jocpae Do you think this affects clDice at all? Could you provide some insights on whether this problem should be fixed / how it could be fixed?
