Strange dice coefficient on volume-54.nii #5

Open

mitiandi opened this issue Oct 29, 2018 · 23 comments
@mitiandi

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). When I finished training the network and tested it on the test set (26 cases), I obtained a dice per case of 0.932, which is lower than your result (0.957).
Most importantly, I found that the dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii and compared it to its ground truth, and found a dislocation of about 10 slices between them. For example, the segmentation result started at the 62nd slice, while the ground truth started at the 52nd slice.
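For reference, this is roughly how a per-case Dice score can be computed from a predicted mask and its ground truth. It is only a minimal sketch using nibabel and numpy with placeholder file paths, not the evaluation code from val.py.

    # Minimal per-case Dice sketch; file paths are placeholders, not the repo's layout.
    import nibabel as nib
    import numpy as np

    def dice_per_case(pred_path, gt_path, label=1):
        """Dice = 2 * |P ∩ G| / (|P| + |G|) for one volume and one label."""
        pred = nib.load(pred_path).get_fdata() == label
        gt = nib.load(gt_path).get_fdata() == label
        denom = pred.sum() + gt.sum()
        return 2.0 * np.logical_and(pred, gt).sum() / denom if denom > 0 else 1.0

    # A prediction shifted by ~10 slices loses most of its overlap with the ground
    # truth, which is why a Dice as low as 0.18 usually points to a slice offset
    # rather than a genuinely bad segmentation.
    # print(dice_per_case('pred/segmentation-54.nii', 'gt/segmentation-54.nii'))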

@mitiandi
Author

And there is another thing that confused me: 'epoch=3000' is used to train the network, but I found that the network tends to converge quite early (maybe before epoch 1000).

@ahmadmubashir

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

@ahmadmubashir

And which file of the code are you using for testing?

@mitiandi
Author

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches are produced by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping only the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the Z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's output is used directly as the input of the network.
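For anyone following along, the pre-processing step could look roughly like the sketch below. This only illustrates the description above (downsampling 512×512 to 256×256 in the xy plane and expanding the liver range by 20 slices along Z); it is not the actual code of 'data_prepare/get_random_data.py', and the file names and the 0.5 zoom factor are assumptions.

    # Illustrative pre-processing sketch, not the repo's exact script.
    import numpy as np
    import scipy.ndimage as ndimage
    import SimpleITK as sitk

    expand_slice = 20  # extra slices kept on each side of the liver along Z

    ct = sitk.GetArrayFromImage(sitk.ReadImage('volume-54.nii'))          # (z, 512, 512)
    seg = sitk.GetArrayFromImage(sitk.ReadImage('segmentation-54.nii'))   # (z, 512, 512)

    # Downsample both image and label in the xy plane: 512x512 -> 256x256.
    ct = ndimage.zoom(ct, (1, 0.5, 0.5), order=3)
    seg = ndimage.zoom(seg, (1, 0.5, 0.5), order=0)

    # Keep only the slices that contain liver, expanded by 20 slices on each side.
    has_liver = np.any(seg > 0, axis=(1, 2))
    start, end = np.where(has_liver)[0][[0, -1]]
    start = max(0, start - expand_slice)
    end = min(seg.shape[0] - 1, end + expand_slice)
    ct, seg = ct[start:end + 1], seg[start:end + 1]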

@mitiandi
Author

And which file of the code are you using for testing?

val.py

@ahmadmubashir

ahmadmubashir commented Oct 30, 2018

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches are produced by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping only the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the Z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's output is used directly as the input of the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', did not give me exactly a 256×256×48 3D patch; it is used in train_ds.py as 'from dataset.dataset_random import train_ds'.
Does this make the 256×256×48 3D patches automatically, or do we have to make these samples manually?
Another issue I found: the size of a volume after 'data_prepare/get_random_data.py' is 256×256×n, but the size of its ground truth is 512×512×n. Why?
Please help me. Thanks.

@mitiandi
Author

Please tell me: are you giving the whole volume for training, or splitting it into 3D patches? Please confirm. Thanks.

As the author did, I used 256×256×48 3D patches as the input to the network. The patches are produced by 'data_prepare/get_random_data.py' and 'dataset/data_random.py'. The former pre-processes the training data, including downsampling in the xy plane and keeping only the slices that contain liver (expanded by 20 slices in both the positive and negative directions along the Z axis), while the latter randomly extracts 48 continuous slices from the former's output. The latter's output is used directly as the input of the network.

I did the above steps. The first script gave me a 256×256×n image, but it did not give me 256×256×48 3D patches. Should I create them manually? I understand the output of 'data_prepare/get_random_data.py', but the second script, 'dataset/data_random.py', did not give me exactly a 256×256×48 3D patch; it is used in train_ds.py as 'from dataset.dataset_random import train_ds'.
Does this make the 256×256×48 3D patches automatically, or do we have to make these samples manually? Thanks. Please help me.
The former is correct. The 3D patches are not saved to disk; the data is automatically organized and loaded in the form of 256×256×48 patches. The implementation is as follows, in 'dataset/data_random.py':


    # Randomly select `size` (48) consecutive slices along the z axis
    start_slice = random.randint(0, ct_array.shape[0] - size)
    end_slice = start_slice + size - 1

    ct_array = ct_array[start_slice:end_slice + 1, :, :]
    seg_array = seg_array[start_slice:end_slice + 1, :, :]
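In other words, the 48-slice crop happens on the fly inside the Dataset class, so nothing extra is written to disk. Below is a rough sketch of how such a Dataset could be organized; it is only an illustration of the mechanism, not the actual 'dataset/data_random.py', and the directory layout and the 'volume'/'segmentation' naming are assumptions.

    # Illustrative PyTorch Dataset wrapping the random 48-slice crop shown above.
    import os
    import random

    import SimpleITK as sitk
    import torch
    from torch.utils.data import Dataset

    class RandomSliceDataset(Dataset):
        def __init__(self, ct_dir, seg_dir, size=48):
            self.ct_dir, self.seg_dir = ct_dir, seg_dir
            self.ct_list = sorted(os.listdir(ct_dir))
            self.size = size  # number of consecutive slices per sample

        def __len__(self):
            return len(self.ct_list)

        def __getitem__(self, index):
            ct_name = self.ct_list[index]
            ct_array = sitk.GetArrayFromImage(
                sitk.ReadImage(os.path.join(self.ct_dir, ct_name)))
            seg_array = sitk.GetArrayFromImage(
                sitk.ReadImage(os.path.join(self.seg_dir,
                                            ct_name.replace('volume', 'segmentation'))))

            # Randomly pick `size` consecutive slices, exactly as in the snippet above
            # (the pre-processing guarantees at least `size` slices per volume).
            start = random.randint(0, ct_array.shape[0] - self.size)
            ct_array = ct_array[start:start + self.size]
            seg_array = seg_array[start:start + self.size]

            ct_tensor = torch.from_numpy(ct_array.astype('float32')).unsqueeze(0)  # (1, 48, 256, 256)
            seg_tensor = torch.from_numpy(seg_array.astype('int64'))               # (48, 256, 256)
            return ct_tensor, seg_tensor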

@zz10001

zz10001 commented Jul 24, 2019

Hi @mitiandi, @ahmadmubashir,
Does this project start by using data_prepare/get_random_data.py to pre-process the training data and dataset/data_random.py to extract the 48 continuous slices, after which I can run python train_ds.py directly to start training? Looking forward to your reply!
Best,
Ming

@mitiandi
Author

mitiandi commented Jul 24, 2019 via email

@zz10001

zz10001 commented Jul 24, 2019

To tell the truth, I have forgotten the details because a long time has passed since then. But it seems right. What you need to do is just change the data paths to yours. Good luck~

Thanks for your kind help, may you have a good day!

@Oct6ber

Oct6ber commented Aug 1, 2020

I randomly split the training dataset (131 cases) into two non-overlapping subsets: a training set (105 cases) and a test set (26 cases). When I finished training the network and tested it on the test set (26 cases), I obtained a dice per case of 0.932, which is lower than your result (0.957).
Most importantly, I found that the dice coefficient on volume-54.nii is very poor (0.18). I then visualized the segmentation result of volume-54.nii and compared it to its ground truth, and found a dislocation of about 10 slices between them. For example, the segmentation result started at the 62nd slice, while the ground truth started at the 52nd slice.

Hi, have you found the reason, or how did you solve it?

@zz10001

zz10001 commented Aug 1, 2020

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

@Oct6ber

Oct6ber commented Aug 1, 2020

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Thank you very much

@Oct6ber

Oct6ber commented Aug 2, 2020

Hi, have you found the reason, or how did you solve it?

You can solve it by #6 (comment).

Hi, I used this method, but the dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

@zz10001

zz10001 commented Aug 2, 2020

Hi, I used this method, but the dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, like this:
(screenshot of the train/val split)

@Oct6ber

Oct6ber commented Aug 2, 2020

Hi, I used this method, but the dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, like this:
(screenshot of the train/val split)

Thank you very much

@lcl180

lcl180 commented May 11, 2021

The input of DialResUnet is 512×512, but the output is 1024×1024. Shouldn't the input and output be the same size? Do you know why this is?

@lcl180

lcl180 commented May 12, 2021

What is the procedure for running the code of this project? Can you share it? Thank you.

@life-8079

Hi, I used this method, but the dice coefficient on volume-43 is still 0.67. I want to know if this is a normal value.

Sorry, I haven't run into this problem. I just use volumes 101-130 for validation and 1-100 for training, like this: (screenshot of the train/val split)

(screenshot of the DICE/Jaccard results)
Hi, why do my results look like this? The DICE and Jaccard values seem wrong.

@zz10001

zz10001 commented Mar 24, 2022

Hi, why do my results look like this? The DICE and Jaccard values seem wrong.

Have you visualized the prediction in ITK-SNAP or another viewer? Maybe you should look at the prediction first.
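If you want a quick check in Python instead of ITK-SNAP, something like the sketch below displays a middle slice of the prediction next to the ground truth; it is only a rough example with placeholder file paths, assuming nibabel and matplotlib are available.

    # Quick visual check of a predicted mask against the ground truth; paths are placeholders.
    import matplotlib.pyplot as plt
    import nibabel as nib

    pred = nib.load('pred/segmentation-43.nii').get_fdata()
    gt = nib.load('gt/segmentation-43.nii').get_fdata()

    z = pred.shape[2] // 2  # middle slice along the last axis
    fig, axes = plt.subplots(1, 2, figsize=(8, 4))
    axes[0].imshow(pred[:, :, z], cmap='gray')
    axes[0].set_title('prediction')
    axes[1].imshow(gt[:, :, z], cmap='gray')
    axes[1].set_title('ground truth')
    plt.show()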

@life-8079

Hi, why do my results look like this? The DICE and Jaccard values seem wrong.

Have you visualized the prediction in ITK-SNAP or another viewer? Maybe you should look at the prediction first.
(screenshot of the predicted mask)
Hi, the result looks like this: the image's background is 1 and the liver is 0. Could you help me?

@zz10001

zz10001 commented Mar 25, 2022

Hi, the result looks like this: the image's background is 1 and the liver is 0. Could you help me?

It seems the colors for the liver and the background have simply been exchanged; you just need to negate the .nii you predicted.
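Assuming the prediction is a binary liver mask (just 0 and 1), inverting it is a one-liner; the sketch below uses nibabel and placeholder file names, and it would need adjusting if tumor labels are also present.

    # Swap 0/1 labels in a binary NIfTI prediction; file names are placeholders.
    import nibabel as nib
    import numpy as np

    img = nib.load('pred/segmentation-43.nii')
    inverted = (1 - (img.get_fdata() > 0)).astype(np.uint8)  # background <-> liver
    nib.save(nib.Nifti1Image(inverted, img.affine), 'pred/segmentation-43-fixed.nii')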

@zhouyizhuo

Hi @zz10001, @life-8079,
I'm wondering why para.size is 48?
(screenshot of the parameter definition)
After I changed para.size to 32 and para.slice_thickness from 1 to 4, I found that kiunet_org no longer works.
I'd appreciate it if you could give some help!
(screenshot of the error)
