Image extraction, segmentation, resampling, 2D, MRI cardiac images #360
Yes, see also #361
Yes, although values other than 1 are also allowed (you'd then need to specify that value in the `label` setting).
Your third dimension is allowed to have size 1; it just has to be present. All features can be extracted from 2D slices. However, take care when applying Wavelet or LoG filters, as these assume true 3D input.
It's not a strange question. Quite the opposite: it's a very important, but difficult, question. I don't have a direct answer, but you could try a large segmentation (containing all areas) or an average over multiple sub-regions.
Thank you very much again @JoostJM! I'll keep the mask origin/direction/size issue in mind. As for question 4, I will speak with my supervisor (the doctor in charge of the research project) on Monday and we'll try to figure out what makes more sense from a clinical point of view. I was thinking that by averaging all the sub-regions we could lose the gray level dependencies, and it would also be difficult because, from what I understood, the sub-regions often have different dimensions and shapes. Anyway, I'll keep you updated! Thanks again for your support!
Hi @JoostJM! I have stored, in the same directory (All_Data_nrrd), two files which are: brain_image.nrrd and brain_label.nrrd. Both of them have size (512,512,1) and label is of course the mask. I checked the dimension through this. Then on PyCharm I just wrote the following code trying to emulate the "brain1" example:
and when I run it I get the following warning: Testcase "brain" not recognized! What am I doing wrong? Thank you very much!
@tommydino93, that function finds the pyradiomics test cases. Try
@JoostJM I managed to run everything with the brain1 example, but what I need now for MY brain image and for my future cardiac images is to load images (and masks) from one of my own directories, not from the pyradiomics test cases. Which function could I use to do that? I thought that
@tommydino93, the easiest way to use your own data is like this:
This sets the last two variables to the path of your data, which is similar to what |
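Concretely, that approach might look like the following sketch (the directory and file names are assumptions based on the files mentioned earlier in this thread):

```python
import os

# Hypothetical paths -- adjust to your own directory layout
dataDir = 'All_Data_nrrd'
imageName = os.path.join(dataDir, 'brain_image.nrrd')
maskName = os.path.join(dataDir, 'brain_label.nrrd')

# These two variables play the same role as the paths returned by the
# test-case helper; pass them on to the feature extraction call instead.
print(imageName, maskName)
```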
Ok, thank you very much! And what about the settings (e.g. binWidth, interpolator, resampledPixelSpacing, voxelArrayShift)? Is there a rule of thumb for choosing them? P.S.: would it be a problem if I commented some of the code lines in a fork? I am trying to figure it out on my own just by following the code (for instance in pyradiomics/radiomics/firstorder), but coming from Matlab (they spoiled me too much, I admit) I am having trouble following even the individual inputs and outputs of every function (e.g. input datatype, output datatype, parameter datatypes).
@tommydino93, for that, check out the example settings in the repository. Besides that the only advice I can give you is to just see what works.
Sure, no problem.
Hi @JoostJM! I managed to run this code for my brain image, following the brain1 example:
Everything works fine, meaning that the program extracts all numerical first order features. What I wanted to ask you is:
This will store it as "image.nrrd" and "mask.nrrd" in your current working directory.
Hi again @JoostJM!
Thanks a lot in advance for your great work!
Spacing in mm
interpolator specifies the method used to calculate new pixel values (i.e. the order of the interpolation function)
padDistance is the number of voxels to pad in the new image space (i.e. if you resample to (3, 3, 3) and pad 5, the size of the cropped region increases by 2 * 5 * 3 = 30 mm in each direction, compared to the bounding box size). As mentioned above, this is necessary for some filters (i.e.
That depends on how you do your extraction and what your dataset looks like. 2 main rules:
Additionally, resampling can also be used to focus on coarser structures (when you use large resampled spacings), which of course describes a different texture than features focusing on fine structures.
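As a sketch, the settings discussed above could be collected in a dictionary like this (the values are illustrative examples, not recommendations):

```python
# Illustrative extraction settings (values are examples, not advice)
settings = {
    'binWidth': 25,                      # gray-level discretization width
    'interpolator': 'sitkBSpline',       # resampling interpolation method
    'resampledPixelSpacing': [3, 3, 3],  # target spacing in mm
    'padDistance': 5,                    # padding in voxels (new image space)
}

# Padding grows the cropped region on both sides, in resampled space:
# 2 sides * 5 voxels * 3 mm/voxel = 30 mm per dimension, as noted above.
pad_mm = 2 * settings['padDistance'] * settings['resampledPixelSpacing'][0]
print(pad_mm)  # 30
```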
Yes, they are 0 because your ROI is flat. Look at the documentation of these features. Lambda means eigenvalue here, i.e. the major axis is 4 * sqrt(largest eigenvalue). If I'm correct, these axis features are the lengths of the axes of the enclosing ellipsoid of the ROI.
Ok thanks a lot!!
How do I know if they are varying? I mean, which is the easiest way to see ImagePositionPatient, ImageOrientationPatient, PixelSpacing, etc.?
Run a simple extraction on your data (e.g. with just firstorder features). PyRadiomics includes the original spacing in the output by default.
Yeah, that's what I thought...but I'm not getting them. My code is:
and as output I get:
What am I missing?
@tommydino93 You are missing that data because you are using the feature classes directly, which is not advised. Use the featureextractor module instead.
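For reference, a minimal parameter file for the featureextractor might look like this (a sketch modeled on the example settings shipped in the repository; the values are illustrative):

```yaml
# Illustrative PyRadiomics parameter file
setting:
  binWidth: 25
  interpolator: 'sitkBSpline'
  resampledPixelSpacing: [1, 1, 1]  # mm; remove this line to disable resampling
imageType:
  Original: {}
featureClass:
  firstorder: []  # empty list enables all first-order features
```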
@JoostJM Alright! I've done that, thanks. It appears that my pixel spacing is now (1.0, 1.0, 1.0), which is weird because in my original .dcm image it was different. I figured that maybe something went wrong during the conversion from DICOM to NRRD. I followed this guide for the conversion. Here are my two images opened in ImageJ: as you can see, the .nrrd image (on the right) has lost the pixel spacing (highlighted) and is now in microns instead of mm. I also unticked the "Compress" option while doing the conversion, but the problem persists. Is there an alternative way to convert while preserving dimensions/pixel spacing? Thanks a lot again!
You can try this; it's a script/package I built for conversion in Python. It will scan the folder and create NRRD (or NIfTI) files. If there is something wrong with the image, it will give you a warning about what is happening. A spacing of (1, 1, 1) is usually a sign that something went wrong during conversion.
Alternatively, you can also try to see if Slicer found some issue by checking |
Thanks again @JoostJM! I am having problems with PyCharm; maybe you can help me. I think it's a configuration problem between PyCharm and GitHub. I typed this simple code:
and I get this error:
@tommydino93, You'll have to import the module before you can use it:
That being said, Nrrdify also has a command-line interface. To use it, the easiest way is to install it.
This will print the help message detailing the usage (with optional configuration). Once it is installed, it should be usable throughout your computer. E.g.
This will convert all dicoms in the folder above into nrrd and store them in |
Wow, thanks a lot! Could it just be a problem with how ImageJ opens it?
@tommydino93, could be. Can you share an anonymised version of your DICOMs and NRRD?
@JoostJM, I managed to anonymize only the DICOM image (but not the .nrrd) with DicomCleaner. I hope it is sufficient for you to try it out :) Here is the MEGA link to the image: Thanks a lot again!
Sorry, that link needs a key! This one already has the key: https://mega.nz/#!gPhHRayB!JTen715bznaNqNxIzNKez_Wutb0M5RIZv1vS0_tg9BQ
As far as I can see, it appears to be caused by ITK, as ITK-SNAP also shows the incorrect spacing of (1, 1, 1). When I looked at the tags, spacing was defined as
As to why ImageJ gives you 512 microns, I do not know. Maybe a bug of some kind? The NRRD image says the spacing is (1, 1, 1) and does not encode the field-of-view size (as it can easily be obtained by multiplying the spacing by the matrix size). One potential explanation is that ImageJ just does not know how to cope with the fact that there is no unit encoded in NRRD and it sort of assumes it's microns (in which case 1 x 512 is indeed 512 microns).
Here are also some pointers to other tools you could try. @fedorov is currently also busy with the DICOM to NIfTI/NRRD conversion problem; this is from a project he started at the NAMIC project week in Boston last January:
- dcm2niix
- dicom2nifti
- dcmstack
- vtk-dicom/dicomtools/dicomtonifti
- FreeSurfer/mri_convert
- Plastimatch/convert
- mriconvert and mcverter
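The field-of-view reconstruction mentioned above is just spacing times matrix size:

```python
# With a (wrongly written) spacing of 1 and a 512 x 512 matrix, a viewer
# that assumes the unitless spacing is in microns would report a field of
# view of 1 * 512 = 512 microns -- matching what ImageJ shows here.
spacing = 1.0      # value stored in the NRRD header
matrix_size = 512  # in-plane matrix size
fov = spacing * matrix_size
print(fov)  # 512.0
```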
Hi @JoostJM and @fedorov! Anyway, I will try the "dicom2nifti" way, hoping to solve the issue. I'll keep you updated! Thanks again!
@fedorov , I have been trying this simple code:
but no images appear in my output directory "Outut_Images_Nifti". Could this be because all my DICOM images are uncompressed? @JoostJM Could this also be the problem for the other issues I had? I mean, do I need to export the images as COMPRESSED rather than UNCOMPRESSED? Thanks a lot in advance, guys!
@tommydino93, as far as I know, compressed vs. uncompressed should not matter; if anything, uncompressed should be easier. I also used this program: for the example you sent, it is able to correctly read the DICOM, and you can re-save it as NIfTI (which is also accepted by PyRadiomics).
@JoostJM, @fedorov! Thanks a lot for recommending Mango. Actually, I have great news: while doing some trials with Mango and ImageJ, I discovered that ImageJ itself can convert DICOM to NRRD with the pixel spacing preserved!! We can simply do: File > Save As > Nrrd. And this works both for the image and for the mask. I don't know why I didn't think of it before. Anyway, before converting all images, I tried to load one image and one mask by simply typing:
and then I tried to run the shape feature program, but it throws this error:
From what I understand, it's a spacing problem, which leads us back to my initial question: how could I set the resampling across all my images, since they have different spacing? E.g. should I choose the average pixel spacing among all images? Or maybe the smallest one, and apply it to all other images? Thanks a lot again, I really appreciate your help!
255 is accepted, but you'll need to specify it in the `label` setting.
Ok, thanks a lot, I'll keep that in mind! Image: https://mega.nz/#!oLQVFYZZ!PNs-pOWlnXWM9Dau5kwrwYWnsdys4QZjka8uZxAh7ek In the ImageJ menu: Thanks in advance!
You don't need to write any code to use
Most of the tools that @JoostJM referenced above in #360 (comment) provide a command-line interface. Using a GUI-based interface works when you need to convert just a few datasets, but it does not scale. If I were you, I would try several of the command-line tools.
Thanks a lot @fedorov! You're right, I should definitely become more agile with the command-line interface; it's way faster. Anyhow, in the end I managed to convert the images from DICOM to NRRD with ImageJ, but, as I was explaining to JoostJM in my previous comment, I now have a different problem with ITK, which is unable to read my NRRD images, probably because of the space direction. Any suggestions?
Can you open the NRRD file in a text editor and copy-paste the header here?
Yes, sure. Here it is:
I am not an NRRD expert, but it looks to me like an invalid header. Indeed, it looks inconsistent to have 3 components for the direction vectors, but only 2 vectors and only 2 dimensions in the image. Is your dataset volumetric, or is it a single slice? If it is supposed to be volumetric, I would really recommend you go back and try other converters. It may well be that ImageJ's NRRD writing has issues. You could also ask ImageJ support. If you want to just try something, you can maybe manually modify this line
to this:
Maybe it will fix the ITK issue ... |
The dataset is not volumetric. We just selected a single slice for each patient (the slice where the inflamed region was most evident), so what should I expect for the third dimension in the header?
You can also try this header, but it's just a guess at a hack...
@tommydino93, @fedorov, I agree that the header looks weird. Additionally, changing the directions to 2-tuples will mean that PyRadiomics does not accept it (it only accepts volumetric datasets, even if they contain just 1 slice). I agree with @fedorov that it is best to try some different tools, but if you want to manually edit the header, do so according to @fedorov's last post.
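For reference, a well-formed header for a volumetric, single-slice NRRD (keeping 3 dimensions with a third size of 1, as discussed above) would look roughly like this; the concrete type, spacing, and origin values are illustrative:

```
NRRD0004
type: short
dimension: 3
space: left-posterior-superior
sizes: 512 512 1
space directions: (1,0,0) (0,1,0) (0,0,1)
kinds: domain domain domain
endian: little
encoding: raw
space origin: (0,0,0)
```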
@tommydino93 if you go back to try other converters, you might also consider converting the 3D dataset. You do not need to have just one image slice if you want to extract features from just that slice. You can define your label in one slice but keep the image volumetric, so that if/when you decide to look at the analysis beyond a single slice, you do not need to go back and deal with the data conversion again.
Thank you both @JoostJM @fedorov !
I just wanted to understand what the various
Thanks in advance!
Hi @JoostJM, Furthermore, I was also trying to extract features from the LoG-filtered image through the feature extractor. I tried to use this code from one of the examples:
but I get this error:
Shouldn't the function take 2 input parameters? Why is it only expecting 1 positional argument? Thanks a lot in advance!
Hi @JoostJM, it's me again. Is there a way to check whether the image and mask are in the same slice? If it's possible, how can I check it? Thanks a lot!
This is done automatically by PyRadiomics. If you want to know more, check out the checkMask() function.
The difference here is how the bounding box is defined; I use 2 separate functions for that. Internally, the bounding box is handled as a tuple specifying the lower and upper bounds. However, the bounding box returned as part of the provenance info has the lower bounds as its first 3 elements; the last 3 elements there specify the size of the bounding box. E.g. for the first dimension, x, the lower bound is 213 (first element in both), the upper bound is 278 (2nd element in the internal bounding box), and the size is 278 - 213 + 1 = 66 (4th element in the provenance-returned bounding box).
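The relation between the two conventions can be sketched with the x-dimension numbers from the comment above:

```python
# Internal convention: inclusive (lower, upper) bounds per dimension
lower, upper = 213, 278

# Provenance convention: lower bound plus size (bounds are inclusive,
# hence the + 1)
size = upper - lower + 1
print(lower, size)  # 213 66
```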
I understand, thank you @JoostJM, I am very grateful!
Hi everyone! With the help of @fedorov and @JoostJM, part of the resampling problem was solved in this 3D Slicer forum question. Going back to PyRadiomics now: let's say I want a
My question is:
I have answered my own question. Looking at helloResampling.py, it is explained that
Hi @JoostJM! "If you extract features in 3D you need to ensure that either the voxels are isotropic, or you take the different distances-to-neighbor into account." My questions are:
Thanks a lot again for your support :)
@tommydino93 No problem!
@JoostJM Thanks a lot for the immediate answer and detailed explanation!! So, suppose I wanted to go for option:
should I just set the three values of
Also, a completely off-topic question: Thanks again :)
@tommydino93 Yes, that enables resampling to isotropic voxels. On what size to use: I usually advise a compromise between deleting information in-plane and 'creating' it out-of-plane. E.g. in the case of x,y = 1 mm, z = 5 mm, I usually go for 2 or 3 mm isotropic. Always keep in mind that larger voxels are not necessarily bad, as small voxels are more sensitive to noise. On the other hand, if your lesions are very small, large voxels may result in ROIs with only a few voxels, which is also unstable. As to your off-topic question: no gold standard. There is not even a standard (yet) on which to apply and how. Generally some are usually applied, and most good radiomics packages implement at least some of them.
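In settings terms, the isotropic compromise described above can be sketched like this (2 mm is the example value from the comment, not a universal recommendation):

```python
# Resample anisotropic 1 x 1 x 5 mm voxels to 2 mm isotropic
settings = {'resampledPixelSpacing': [2, 2, 2]}  # mm

# Isotropic means all three spacing values are equal
spacing = settings['resampledPixelSpacing']
is_isotropic = len(set(spacing)) == 1
print(is_isotropic)  # True
```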
Thank you very much again! I will open a new quick issue since it's a completely different topic. |
The easiest way to do it in Python is to load both image and mask in SimpleITK, then set the image as the reference volume for the mask, then save the mask as NIfTI. One question though: is your image 2D or 3D? I believe TIFF will only save in 2D, so you'd need to specify which slice it is.
Hi everyone!
Since I am about to begin a radiomic study ex novo, I just wanted to ask some questions so as to avoid annoying future problems. We will extract images and masks of interest next week from cardiac MRI with the intent of performing a future binary classification.
Should I necessarily extract images in .nrrd format, or is the .dcm (DICOM) format fine and easy to convert to .nrrd?
Should the mask simply be an image with ones in the ROI and zeros outside of it?
Since we will probably work with single slices (we will select the most significant 2D slice for each patient), how should I set my third dimension, since pyradiomics wants volumes as input? Moreover, which features will I be able to extract? (Meaning: are there some features that are only applicable in 3D?)
This is a strange question: one of the two ground truths of the final binary classification is myocarditis, and this has the problem that it often doesn't appear as a single concentrated region in the image. In fact, since it is an inflammatory phenomenon, it often appears as sparse sub-regions in the image. How should we deal with masks in this case? Should we just select the biggest region of inflammation? Should we somehow average the inflamed regions?
Thank you very much in advance,
Tommaso