
Clarification of the input data X and Y for Centreline prediction #7

Open
Leandroscholz opened this issue Jan 18, 2020 · 0 comments
Leandroscholz commented Jan 18, 2020

Hi @giesekow, congratulations on the work! DeepVesselNet appears to be a great tool and I am very excited to test it. I am just starting to study Deep Learning tools, and I would like to ask for some clarification about the inputs of DVN so I can use it for centreline prediction.

I understand that X holds the raw images and Y the annotated images (ground truth), but it is not clear what the dimensions represent. In your example.py:

X.shape
(10, 1, 64, 64)
Y.shape
(10, 2, 64, 64)
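To make my question concrete, here is a minimal numpy sketch of what I assume these shapes mean: 10 samples, 1 input channel, 64x64 pixels for X, and a per-pixel one-hot encoding over 2 classes (background vs. vessel) for Y. This interpretation is my assumption, not something confirmed by the repo:

```python
import numpy as np

# 10 samples, 1 input channel, 64x64 pixels (matching example.py).
X = np.random.rand(10, 1, 64, 64).astype(np.float32)

# My assumption: Y's second dimension of 2 is a one-hot encoding
# over two classes (channel 0 = background, channel 1 = vessel).
labels = np.random.randint(0, 2, size=(10, 64, 64))
Y = np.stack([labels == 0, labels == 1], axis=1).astype(np.float32)

assert X.shape == (10, 1, 64, 64)
assert Y.shape == (10, 2, 64, 64)
# If the one-hot reading is right, the class channels sum to 1 per pixel.
assert np.all(Y.sum(axis=1) == 1.0)
```

Please correct me if the second dimension of Y means something else.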

If I wanted to train the network with 3D images (say n=100 images) of 124x124x124 px and Y centreline annotations (so, I assume, X.shape of (100, 1, 124, 124, 124)), what steps do I need to take? Your arXiv paper says the DVN has to be trained on the binary segmentation mask first, because the centreline network uses the probabilistic segmentation masks, but it is not clear what to do after training the DVN-FCN for vessel segmentation.
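To check my understanding of the two-stage pipeline described in the paper, here is a sketch with tiny dummy volumes (2 samples of 16^3 standing in for my real 100 volumes of 124x124x124). Everything here is my hypothesis about the data flow, not code from the repo:

```python
import numpy as np

# Raw 3D input volumes: (samples, channels, depth, height, width).
X = np.random.rand(2, 1, 16, 16, 16).astype(np.float32)

# Stage 1: the segmentation DVN-FCN maps X to probabilistic vessel
# masks of the same spatial size (faked here with random values).
seg_probs = np.random.rand(2, 1, 16, 16, 16).astype(np.float32)

# Stage 2 (my assumption): centreline prediction takes seg_probs,
# not the raw X, as input, and is trained against one-hot
# centreline annotations Y.
cl = np.random.randint(0, 2, size=(2, 16, 16, 16))
Y = np.stack([cl == 0, cl == 1], axis=1).astype(np.float32)

assert seg_probs.shape == X.shape
assert Y.shape == (2, 2, 16, 16, 16)
```

Is this the intended hand-off between the two training stages, or does the centreline network take both X and the segmentation mask?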

Also, do you happen to have a snapshot of the trained model for DVN-FCN with the Synthetic Dataset provided?

Cheers,
Leandro S.
