Regarding preprocessing (for step1) #3

Closed
cjayanth95 opened this issue Jan 12, 2017 · 18 comments

@cjayanth95

Hi,
I am having trouble understanding the pre-processing step. I have DICOM images of the liver of size 512x512. How do I proceed further? Do I have to resize them to 388x388 and add symmetric padding of size 92?

Any help would be appreciated.

Thanks.

@mohamed-ezz
Collaborator

mohamed-ezz commented Jan 12, 2017 via email

@cjayanth95
Author

cjayanth95 commented Jan 12, 2017

[attached image: preprocessed CT slice]

This is the image I get after HU windowing, histogram equalisation, resizing and padding.
Here is the MATLAB code I used:

img = dicomread('image');                        % pixel values range from -1024 to 1023
img(img >= 400 | img <= -100) = -1024;           % HU windowing (rescale slope = 1, rescale intercept = 0)
img = histeq(img);                               % histogram equalisation
colorimage = ind2rgb(img, gray(2048));
J = imresize(colorimage, [388 388]);             % resize
paddedimage = padarray(J, [92 92], 'symmetric'); % symmetric padding
paddedimage = uint16(paddedimage);

Am I missing something?

@mohamed-ezz
Collaborator

Please check the code in the notebook at: https://github.com/IBBM/Cascaded-FCN/blob/master/notebooks/cascaded_unet_inference.ipynb

You can find the preprocessing function named "step1_preprocess_img_slice".
We perform clipping, not windowing, and no histogram equalization.
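For reference, here is a minimal Python sketch of that clipping-style preprocessing (the function name, the -100..400 HU range, and the 388/92 sizes are taken from this thread and are illustrative only; the authoritative code is step1_preprocess_img_slice in the notebook):

import numpy as np
import scipy.ndimage

def preprocess_slice(slice_hu, low=-100.0, high=400.0, out_size=388, pad=92):
    # Clip HU values to [low, high] instead of windowing them out; no histogram equalization.
    img = np.clip(slice_hu.astype(np.float64), low, high)
    # Scale to [0, 1].
    img = (img - low) / (high - low)
    # Resize to out_size x out_size (spline order 1, i.e. roughly bilinear).
    img = scipy.ndimage.zoom(img, out_size / float(img.shape[0]), order=1)
    # Mirror-pad by `pad` pixels on each side: 388 + 2*92 = 572.
    return np.pad(img, pad, mode='symmetric')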

@cjayanth95
Author

Please look at the prediction at the end. We are getting a blank image.

https://github.com/cjayanth95/liverseg/blob/master/custom_outs.ipynb

@mohamed-ezz
Collaborator

Did you successfully run our notebook with outputs that make sense?
I would do this first to make sure the setup is correct.

@cjayanth95
Author

Actually, this is the output we got when we used the data from http://www.ircad.fr/softwares/3Dircadb/3Dircadb1/. We tried different patients' data but our predictions were way off from the ground truth. We are getting blobs in place of the liver. These blobs change when we adjust the prediction probability threshold, but we are unable to predict the whole liver correctly. Would you be able to tell where we might be going wrong?

@cjayanth95
Author

We used BVLC Caffe rather than the Caffe branch you mentioned (https://github.com/mohamed-ezz/caffe/tree/jonlong). Will that make any difference?

@mohamed-ezz
Collaborator

mohamed-ezz commented Jan 13, 2017

I'd recommend you just run the notebook "as is" and install the mentioned dependencies exactly. Otherwise, many things can go wrong and it's difficult to help you.

There is, of course, a reason to use the mentioned branch (https://github.com/mohamed-ezz/caffe/tree/jonlong). The latest Caffe version has a different Crop layer that does not do center crops by default, so if you want to use it you have to specify an offset for each crop layer so that the operation becomes a center crop.

@cjayanth95
Author

Thanks for the prompt replies. We were able to reproduce the results after specifying the offset for each crop layer.
Here is the notebook for the same.
https://github.com/cjayanth95/liverseg/blob/master/custom_outs_updated.ipynb

@cjayanth95
Author

Hi,
Are step1 and step2 supposed to have the same prototxt files?
Also, is pred2 the prediction for liver lesions? If so, can you take a look at pred2 in our output at the end of step2? Is the output right?
https://github.com/cjayanth95/liverseg/blob/master/for_me_legion.ipynb

Thanks.

@mohamed-ezz
Collaborator

mohamed-ezz commented Jan 17, 2017 via email

@keesh0

keesh0 commented Jun 19, 2019

Could you please provide an example of the following fix:

We were able to reproduce the results after specifying the offset for each crop layer.

@keesh0

keesh0 commented Jun 22, 2019

Could you please provide an example of the following fix:

We were able to reproduce the results after specifying the offset for each crop layer.

Here are my updated crop layers (step 1 model), which appeared to work under Caffe 1.0.0 (AWS Python 3 configured):

layer {
  name: "crop_d3c-d3cc"
  type: "Crop"
  bottom: "d3c"   # current blob size (1, 512, 64, 64)
  bottom: "u3a"   # desired blob size (1, 512, 56, 56)
  top: "d3cc"
  crop_param {
    axis: 2
    offset: 4
    offset: 4
  }
}
layer {
  name: "crop_d2c-d2cc"
  type: "Crop"
  bottom: "d2c"   # current blob size (1, 256, 136, 136)
  bottom: "u2a"   # desired blob size (1, 256, 104, 104)
  top: "d2cc"
  crop_param {
    axis: 2
    offset: 16
    offset: 16
  }
}
layer {
  name: "crop_d1c-d1cc"
  type: "Crop"
  bottom: "d1c"   # current blob size (1, 128, 280, 280)
  bottom: "u1a"   # desired blob size (1, 128, 200, 200)
  top: "d1cc"
  crop_param {
    axis: 2
    offset: 40
    offset: 40
  }
}
layer {
  name: "crop_d0c-d0cc"
  type: "Crop"
  bottom: "d0c"   # current blob size (1, 64, 568, 568)
  bottom: "u0a"   # desired blob size (1, 64, 392, 392)
  top: "d0cc"
  crop_param {
    axis: 2
    offset: 88
    offset: 88
  }
}

offset = (current blob size - desired blob size) / 2
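As a quick sanity check, a small illustrative Python snippet reproduces the offsets above from the blob sizes listed in the crop layers:

sizes = {"crop_d3c-d3cc": (64, 56), "crop_d2c-d2cc": (136, 104),
         "crop_d1c-d1cc": (280, 200), "crop_d0c-d0cc": (568, 392)}
for name, (current, desired) in sizes.items():
    # offset = (current blob size - desired blob size) / 2, per spatial axis
    print(name, (current - desired) // 2)
# prints 4, 16, 40 and 88 respectively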

I am down to just the following warnings, which I hope are related to training and not inference:
I0610 23:47:14.142359 4565 net.cpp:744] Ignoring source layer bn_d0b (batch normalization?)
I0610 23:47:14.169219 4565 net.cpp:744] Ignoring source layer loss (loss for training?)

@manjunathrv1985

Hello, could I get MATLAB code for applying HU windowing and histogram equalisation to a CT image?
I am working on liver segmentation and disease identification in CT images using MATLAB and am finding it very difficult to get code or a lead on this. If you have any information related to this, please share it with manju.aps@gmail.com.

@keesh0

keesh0 commented May 11, 2021

You must follow the preprocessing exactly as the model was trained with.
I got very good results, even for different CT "phase" images.
See https://github.com/keesh0/cfcn_test_inference/blob/master/python/test_cascaded_unet_inference.py.
Feel free to adapt it to your own data.

@manjunathrv1985

manjunathrv1985 commented May 13, 2021 via email

@keesh0

keesh0 commented May 13, 2021

Sorry, you would need to translate from Python to MATLAB line by line.

@manjunathrv1985

manjunathrv1985 commented May 15, 2021 via email
