
ImageOrientationPatient is not read for multiframe input #372

Closed
fedorov opened this issue Feb 5, 2020 · 25 comments

@fedorov

fedorov commented Feb 5, 2020

In the attached DICOM object, which is a legacy converted enhanced CT instance, ImageOrientationPatient is present in the SharedFunctionalGroupsSequence:

[screenshot: ImageOrientationPatient (0020,0037) shown within the SharedFunctionalGroupsSequence]

However, the latest release of dcm2niix does not seem to recognize it:

$ dcm2niix legacy_ct.dcm
Chris Rorden's dcm2niiX version v1.0.20190902  (JP2:OpenJPEG) (JP-LS:CharLS) Clang8.1.0 (64-bit MacOS)
Found 1 DICOM file(s)
Unable to determine spatial orientation: 0020,0037 missing (Type 1 attribute: not a valid DICOM) Series 2
Unable to determine spatial orientation: 0020,0037 missing (Type 1 attribute: not a valid DICOM) Series 2
Unable to determine spatial orientation: 0020,0037 missing (Type 1 attribute: not a valid DICOM) Series 2
Warning: Unable to determine slice direction: please check whether slices are flipped
Warning: Bogus spatial matrix (perhaps non-spatial image): inspect spatial orientation
Convert 1 DICOM as output/output_ABD_PEL_C_ORAL_IV_19970310113343_2b (512x512x95x1)
Conversion required 0.405778 seconds (0.372133 for core code).

Is this a bug?

A related question. Is dcm2niix expected to sort individual frames within a multiframe image, the same way it is sorting individual slices for non-multiframe series? It does not sort frames for the attached sample, but I do not know if this is because it cannot find ImageOrientationPatient, or it does not have that functionality at all.

cc: @afshinmessiah @hackermd

@neurolabusc
Collaborator

Do you have a copy of the same image before it got touched by GEIIS? I suspect one key issue is that GEIIS would take DICOM-compliant images as input and insert a thumbnail into a SQ using public tags in a way that was not DICOM compliant (e.g. the transfer syntax). dcm2niix has several different kludges to handle this, but due to various bugs over the development of GEIIS, and limited sampling, these images are really hard to handle. Many images created with GEIIS are not formally DICOM compliant, and even those that my software handles will fail with other tools. You really want to get images untouched by this tool. I know GE now has more modern tools that do support the DICOM format. If you have an old GEIIS system, I would insist GE honor their DICOM conformance statement.

I also note these images are missing the type 1 tag 0020,9157, which is used to determine slice order. My sense is that this reflects a limitation in these images. If you can get a copy of the images as acquired you should be OK.

@fedorov
Author

fedorov commented Feb 5, 2020

Do you have a copy of the same image before it got touched by GEIIS?

@neurolabusc I don't know what you mean by "before it got touched by GEIIS", please forgive my ignorance. Here is the original dataset, before conversion to legacy enhanced multiframe (the CT series linked is from the TCIA TCGA-OV collection; please see all the acknowledgments and usage policy here: https://wiki.cancerimagingarchive.net/display/Public/TCGA-OV).

I have no trouble converting the original series, and I can definitely say that it did NOT "get touched by GEIIS" before being converted into a legacy enhanced multiframe CT instance - it was only touched by the multiframe converter. Can you give a bit more information about what aspect of the dataset is problematic for dcm2niix? There should be no new private attributes inserted into the MF object, because it is constructed purely from the content of the original series.

I also note these images are missing the type 1 tag 0020,9157, which is used to determine slice order.

It might well be. I will check and we may need to fix this. But I am not sure you can/should rely on that tag to establish slice order for the purposes of volume reconstruction and slice ordering in the output NIfTI.

Can you confirm, is dcm2niix going to sort individual frames or not? Will it always rely on DimensionIndexValues to establish geometric order? My goal is not to convert the original series, but the converted MF object.

@neurolabusc
Collaborator

The classic DICOMs look fine. The enhanced DICOM is missing 0020,9157. This is a limitation of the enhanced DICOM image and not of dcm2niix. I do not know what tool was used to convert these; you might try converting the images before they were touched by the GEIIS PACS. Many tools (including dcm2niix) have kludges to handle borked GEIIS data, and it is possible that the converter tool suffers from unintended consequences relating to those kludges.

dcm2niix is validated using enhanced datasets from Philips, Siemens and pixelmed. In theory, one could use slice position to sort enhanced data. However, this would only work for 3D data. For example, Philips 4D MRI data are saved in a non-sequential order (potentially reflecting the order in which images come from a parallel reconstructor). dcm2niix assumes that enhanced DICOM data is valid DICOM data. Over the years, I have added many kludges to handle common errors in interpreting the DICOM standard. However, these kludges reduce the maintainability of the code and can have unintended consequences. Furthermore, such non-compliant images may cause issues with future software and are therefore not of archival quality. Feel free to submit a pull request to handle these images, but I would personally suggest that working with the developers of the converter to produce DICOM-compliant images is the best course of action. The tag 0020,9157 is required for enhanced images, so the sample images are technically not valid DICOM.

@fedorov
Author

fedorov commented Feb 5, 2020

@neurolabusc thank you for the response, I appreciate your attention.

you might try converting the images before they were touched by the GEIIS PACS

This is not an option. I don't have anything beyond the TCIA hosted datasets, one example of which I shared.

In theory, one could use slice position to sort enhanced data. However, this would only work for 3D data.

Yes, I completely agree. But the same argument applies to non-MF series, which can contain 4+ dimensions, and those are sorted by dcm2niix. I think it would be consistent for the converter to apply the same sorting behavior to (standard-compliant, of course) MF data as to non-MF data.

The tag 0020,9157 is required for enhanced images, therefore the sample images are technically not valid DICOM.

Yes, I understand, and I will work with the developers of the converter to fix that problem. What I am saying is that fixing it is not necessarily going to resolve the ordering issue. I don't think the purpose of that tag is necessarily to communicate the geometric ordering needed for volume reconstruction, but rather to convey some ordering based on the preference of the series creator (e.g., to support the order of presentation of the individual frames, or per-frame access to pixel data in the order needed by a consumer).

I will follow up on this once I have the dataset that is valid.

@neurolabusc
Collaborator

Tag 0020,9157 is type 1 for enhanced DICOM. Whatever the intention of the tag, in practice the Philips usage for fMRI and DWI requires using it for ordering. As the first major vendor to support enhanced DICOM, Philips has become the de facto standard. Feel free to submit a pull request to extend dcm2niix to sort enhanced data when this tag is missing. This tag did not exist in classic images, so it was not used for sorting classic data.
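
For reference, here is a minimal pydicom sketch of ordering frames by DimensionIndexValues (0020,9157) when it is present in every per-frame item. This is illustrative only (not the dcm2niix implementation), and the file name is hypothetical:

```python
# Order frames of an enhanced multi-frame object by DimensionIndexValues,
# assuming the tag is present in every PerFrameFunctionalGroupsSequence item.
import pydicom

ds = pydicom.dcmread("enhanced_mf.dcm")  # hypothetical file name

def dimension_index(frame_item):
    # DimensionIndexValues lives in the Frame Content Sequence of each frame.
    return tuple(frame_item.FrameContentSequence[0].DimensionIndexValues)

frames = list(enumerate(ds.PerFrameFunctionalGroupsSequence))
# Frame indices sorted lexicographically by their dimension index vectors.
order = [i for i, item in sorted(frames, key=lambda f: dimension_index(f[1]))]
print(order)
```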

@fedorov
Author

fedorov commented Feb 5, 2020

Tag 0020,9157 is type 1 for enhanced DICOM [...] Feel free to submit a pull request to extend dcm2niix to sort enhanced data when this tag is missing.

To be clear, I am not at all questioning the fact that this tag is required, and I am not suggesting that dcm2niix should handle datasets that do not have it. I apologize for not expressing myself clearly in this regard earlier.

@neurolabusc
Collaborator

Understood. To be honest, it is possible that dcm2niix already does this if provided with valid enhanced DICOMs. Regardless, I am going to close this issue. Once you have a valid dataset, feel free to open a new issue that can be labelled as a feature request. At the moment, I have a lot on my plate supporting the major vendors, so a pull request is likely to be dealt with much more quickly than a feature request that requires me to invest a lot of time. I am happy to help, but I have a lot of teaching, research and service duties, so you need to be realistic regarding expectations.

@dclunie

dclunie commented Feb 6, 2020

Chris, I don't think you should close this issue so hurriedly.

I suggest that your assumption that you can depend on the presence of Dimension Index Values (0020,9157) in Enhanced MF objects, be they "true" or "legacy converted", is perhaps misplaced.

In particular, for the "legacy converted" multi-frame objects, the dimensions are OPTIONAL.

The enhanced family of objects may indeed contain a description of "dimensions" specified a priori by the creator, and we (DICOM WG 16) intended them to be used for such things as describing 3D and 4D volumes, where one can clearly specify dimensions of space and/or time or some other such thing (e.g., diffusion B value).

However, initially (Sup 49) the Multi-frame Dimension Module (where the targets of the Dimension Index Values (0020,9157) are described) allowed the Dimension Index Sequence to be empty (it was Type 2), and the Dimension Index Values could be omitted if no dimensions were defined.

This was later changed to make the whole construct mandatory (more often, anyway) for the "true" multi-frame objects, but that still does not mean that any dimensions that are specified will be of space and time, or meaningful to you or any other recipient.

The bottom line is that if dimension information is present, and if you can confirm that it is meaningful for the receiving application (or, in the case of dcm2niix, that it can be matched to an appropriate encoding of dimensions in the output file), then you can use it. Note that the dimensions of space may or may not point to Image Position (Patient); other alternatives like Stack ID + In-stack Position may be used, etc. Likewise, time may not be in Frame Acquisition DateTime, but in a cardiac-cycle-relative time or similar.

But if they are absent, or point to things that you do not want or do not understand (e.g., private data elements, as they may), then you should fall back to doing exactly the same as you would do for the old single-slice-per-file DICOM images: rely on the Image Position (Patient) and Image Orientation (Patient) values, +/- whatever other dimensions you can extract from timing attributes like Acquisition Date and Time or whatever.
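
To make that check concrete, here is a hedged pydicom sketch of inspecting what each entry in the Dimension Index Sequence actually points to before trusting DimensionIndexValues for geometric ordering (the file name is hypothetical):

```python
# Inspect the Dimension Index Sequence (0020,9222): what does each dimension index?
import pydicom
from pydicom.datadict import keyword_for_tag

ds = pydicom.dcmread("enhanced.dcm")  # hypothetical file name

for item in ds.get("DimensionIndexSequence", []):
    pointer = item.DimensionIndexPointer        # tag of the indexed attribute
    group = item.get("FunctionalGroupPointer")  # functional group containing it
    print(keyword_for_tag(pointer) or pointer,
          "in", keyword_for_tag(group) if group else "(top-level data set)")
# e.g. "InStackPositionNumber in FrameContentSequence", or a private tag --
# in which case fall back to Image Position (Patient) / Image Orientation (Patient).
```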

The only difference when converting multi-frame images, whether they be "true" or "legacy converted", is that these will be nested within either the Shared or Per-Frame Functional Group Sequences, as you know.

In short, since you undoubtedly already have a mechanism for extracting Image Position (Patient) and Image Orientation (Patient), your tool would be considerably more useful if you applied it to multi-frame enhanced images, true or legacy, when the pattern of dimensions doesn't match one of the patterns that you recognize (or is absent).

Just because you happen to have encountered some images that happen to have been encoded one way by one vendor does not make their way a "de facto" standard, or mean that anyone else implementing the standard will follow the pattern that vendor happens to have used.

This is not to say that Andrey's legacy converted MF image sample is a good one; it isn't, and is totally horrid in many respects (both because of errors in the single frames that are propagated, as well as multi-frame-specific errors), but the lack of dimensions should not make it unusable, given the presence of Image Position (Patient) and Image Orientation (Patient) in the right places.

Just for fun I converted the same source images using one of my tools, did not create any dimensions (though did produce a stack), and produced an object that dciodvfy is happy with (apart from a weird laterality related issue with the anatomy being ovary). See:

http://www.dropbox.com/s/7v2htzcy5gfqq39/legacy_ct_pixelmed_20200205.dcm.bz2

What does your tool do with this one, which is a "valid dataset" per your request?

Hopefully it copes with it, and not just because the frames happen to be in a reasonable order.

If you can deal with the object I supplied, then you should also be able to deal with the one Andrey supplied (despite its other, irrelevant, problems), which is why I suggest you reopen this issue rather than starting a new one and losing the discussion and the context.

David Clunie (dclunie@dclunie.com)

@fedorov
Author

fedorov commented Feb 6, 2020

What does your tool do with this one, which is a "valid dataset" per your request?

@dclunie thanks for the comment. I tested your sample, and confirm dcm2niix works fine (no errors on the console, and the output NIfTI lines up with the volume loaded from the non-MF series).

It seems that the frames in the MF object you produced are sorted by ImagePositionPatient - @dclunie can you disable sorting of the frames in your converter, so we can test that aspect of dcm2niix?

This is not to say that Andrey's legacy converted MF image sample is a good one; it isn't, and is totally horrid in many respects (both because of errors in the single frames that are propagated, as well as multi-frame-specific errors)

Yes, that is expected. The tool used to generate that sample is not finished/validated.

@neurolabusc
Collaborator

I have added preliminary support for these images. Since the DICOM standard does not require the instance number to be informative or even unique, I use the slice location (the slice direction is the cross product of the two ImageOrientationPatient (0020,0037) vectors; the slice position is the dot product of the slice direction and ImagePositionPatient (0020,0032)). This approach will not scale to 4D data or to scout/localizer scans where 0020,0037 differs across slices. I suggest we revisit this as @fedorov's tool is finished/validated. None of the popular open-source DICOM visualization tools I tested handled these images appropriately, so I would strongly suggest using the finished/validated converter to enhance those tools before distributing the converter. I feel enhanced DICOM was a missed opportunity to re-factor the standard in the same way that projects like the OpenGL Core specification provided clear guidelines for a high-performance API while preserving backward compatibility. The better compatibility of my latest commit incurs what I think is an unavoidable performance penalty for conversion of enhanced DICOM from the major vendors. This will give big data users further incentive to divest.
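
For clarity, the geometry described above amounts to the following minimal numpy sketch of the idea (not the actual dcm2niix C implementation); the example values are made up:

```python
# Slice normal = cross product of the two ImageOrientationPatient (0020,0037)
# direction cosines; each frame's scalar position = dot(normal, ImagePositionPatient).
import numpy as np

def slice_positions(iop, ipps):
    """iop: the 6 values of 0020,0037; ipps: list of 3-value 0020,0032 entries."""
    row, col = np.array(iop[:3], float), np.array(iop[3:], float)
    normal = np.cross(row, col)
    return [float(np.dot(normal, np.array(ipp, float))) for ipp in ipps]

# Example: axial orientation, frames stored out of order.
iop = [1, 0, 0, 0, 1, 0]
ipps = [[0, 0, 10.0], [0, 0, 0.0], [0, 0, 5.0]]
pos = slice_positions(iop, ipps)
order = sorted(range(len(ipps)), key=pos.__getitem__)
print(order)  # [1, 2, 0]
```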

@fedorov
Author

fedorov commented Feb 7, 2020

Thank you @neurolabusc. The tool used for conversion is not my tool; I am just helping with evaluating and improving it. Yes, we should definitely revisit this topic as that tool becomes more robust. Hopefully we will be able to share more details about it very soon.

@neurolabusc
Collaborator

Great. I hope this new enhanced DICOM standard will have the same impact as the classic DICOM standard. The conversion tool could play a tremendous role in converting classic DICOM to the modern format. I do think it is important to put the time into getting it right. I suggest it is worthwhile to organize the frames when creating the enhanced images: this would be very useful in encouraging other tools to adopt this modern format (and keeping sequential images sequential will improve image loading performance). This talk is compelling, with enhanced DICOM being the 15th competing standard. At the moment, these enhanced DICOMs are incompatible with many open source tools. If we can reduce the complexity for supporting this, we can avoid frustrating developers and users. I think many users expect that any DICOM viewer will handle any DICOM image, and this will not be the case during the transition to enhanced DICOM. Carefully validated and tested conversion tools can play an important role in making this transition as brief as possible. I commend the important work of your colleagues and am happy to help. Is this planned to be an open source tool?

@dclunie

dclunie commented Feb 7, 2020

@fedorov, here is the same image with the frames encoded in a random order, as requested, which should test the recipient's ability to sort (it has no Dimensions, but it does have Stack information):

https://www.dropbox.com/s/roc9hm4cgs9rpde/legacy_ct_pixelmed_randomframeorder_20200207.dcm.bz2

and this one also has frames in a random order but no Dimension or Stack information encoded:

https://www.dropbox.com/s/xc71a2p04ef6oh3/legacy_ct_pixelmed_randomframeorderandnostack_20200207.dcm.bz2

@drmclem

drmclem commented Feb 7, 2020

Hi - just a personal opinion, this one - the problem we have is that research applications haven't embraced the fact that DICOM is actually best represented as a database (referenced in Dave Clunie's talk that you linked). So rather than interact with the database through the defined network communication tools, the approach has been to dump the database to disk and then try to interact with it - which is always going to be clumsy. I don't think I'm overstating it to say that DICOM files were never meant to be used in earnest. But writing a DICOM node is not easy, and so we have the 15 competing formats, all of which throw information away to do their job.

@neurolabusc
Collaborator

@dclunie these sample datasets are really useful. Would you be able to provide enhanced conversions of these CT images? They would be a good reference dataset to validate handling of gantry tilt and variable distances between slices. Also, I assume you are happy to have the resulting files permanently shared to help others (e.g. sometimes people share temporary files on Dropbox, whereas I would like to have some permanent validation repositories).

@neurolabusc
Collaborator

@drmclem I absolutely agree. One reason we have different formats is that they fill different niches. It is also important to recognize that formats like DICOM have changed to handle modern computers and datasets. Therefore, while some contemporary tools may resist recent changes to the formats, the changes were intelligently designed to alleviate problems.

@dclunie

dclunie commented Feb 7, 2020

@neurolabusc, I have done a conversion of the gantry tilt and variable distance image sets as requested (these have the frames encoded in a sensible order rather than a random one); see:

http://www.dropbox.com/s/assd4wvtyk8kez1/dcm_qa_ct_convertedToLegacyEnhMFDICOMWithPixelMed_20200207.tar.bz2

I did modify my usual code to split one pair of series based on different ConvolutionKernel values, which were otherwise being merged together (resulting in two traversals of the same spatial stack in one instance).

The thinner posterior fossa slices end up in a separate multi-frame instance, since currently my tool partitions on different spacing.
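
As an aside, here is a small sketch of the kind of uniform-spacing test that such partitioning implies, assuming the per-frame positions have already been projected onto the slice normal (the tolerance is arbitrary and purely illustrative):

```python
# Decide whether projected slice positions are uniformly spaced -- the kind of
# test a converter might use to partition frames with mixed slice spacing.
import numpy as np

def is_uniformly_spaced(positions, tol=0.01):
    gaps = np.diff(np.sort(np.asarray(positions, dtype=float)))
    return bool(len(gaps) == 0 or np.all(np.abs(gaps - gaps[0]) <= tol))

print(is_uniformly_spaced([0.0, 5.0, 10.0, 15.0]))       # True
print(is_uniformly_spaced([0.0, 2.5, 5.0, 10.0, 15.0]))  # False: mixed spacing
```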

I didn't go to the effort of comparing these with your reference output files.

@fedorov, see this build with an updated com.pixelmed.dicom.MultiFrameImageFactory, if you need to run it yourself:

http://www.dclunie.com/pixelmed/software/20200207_experimental/index.html

@neurolabusc
Collaborator

@dclunie, thanks!

@neurolabusc
Collaborator

@fedorov, @drmclem and @dclunie, can any of you suggest an open source viewer that handles enhanced DICOM robustly? Seeing how others have solved this would be a great help, in particular if I can peek at their code.

@dclunie

dclunie commented Feb 8, 2020

Neither 3DSlicer nor Osirix seems to handle the "random frame order" CT I made for you.

Horos doesn't load them at all (it does have true enhanced MF support, but doesn't seem to recognize the legacy converted SOP Classes for import).

That said, you (@neurolabusc) shouldn't need to peek at anybody's code to do this ... you already have the necessary logic if you know how to sort single-slice files in a series ... you just need to fetch the Image Position (Patient) values per frame from within the Items of the Per-frame Functional Groups Sequence, and the Image Orientation (Patient) from within the Shared Functional Groups Sequence (assuming it has the same value for every frame and has been factored out there), rather than from the top-level data set as you would normally do.
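
In pydicom terms, that lookup might look like the following sketch, assuming Image Orientation (Patient) has indeed been factored out into the Shared Functional Groups as in these samples (the file name is hypothetical):

```python
# Per-frame ImagePositionPatient from the Per-frame Functional Groups, shared
# ImageOrientationPatient from the Shared Functional Groups, then sort frames
# along the slice normal.
import numpy as np
import pydicom

ds = pydicom.dcmread("legacy_ct.dcm")  # hypothetical file name

shared = ds.SharedFunctionalGroupsSequence[0]
iop = [float(v) for v in shared.PlaneOrientationSequence[0].ImageOrientationPatient]
normal = np.cross(iop[:3], iop[3:])

ipps = [f.PlanePositionSequence[0].ImagePositionPatient
        for f in ds.PerFrameFunctionalGroupsSequence]
positions = [float(np.dot(normal, [float(v) for v in ipp])) for ipp in ipps]

# Indices of the frames in geometric order along the slice normal.
frame_order = sorted(range(len(positions)), key=positions.__getitem__)
print(frame_order)
```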

@dclunie

dclunie commented Feb 8, 2020

BTW. I was going to just "fix" Horos to load these, which looks pretty easy (update "horos/DCM Framework/DCMAbstractSyntaxUID.m" with the additional SOP class UIDs), but then I tested Horos first by making a "true" enhanced SOP Class instance by just changing the SOP Class UID of the random order test file and loading that - it didn't sort the frames correctly and behaves just like Osirix.

In case you need it, the test image with the random frame order and different SOP Class UID is at:

http://www.dropbox.com/s/if03y6iwy50r7tq/ct_pixelmed_randomframeorderandnostack_20200208.dcm.bz2

I did NOT change the SOP Instance UID, and it is not a completely valid instance for the SOP Class (dciodvfy and com.pixelmed.validate.DicomInstanceValidator will report lots of missing stuff, but that is not germane to this test).

@dclunie

dclunie commented Feb 8, 2020

Actually, I spoke too soon when I said Osirix doesn't sort correctly - I had forgotten that this is user controlled; under "2D Viewer > Sort By", if one selects one of the Slice Position options rather than Instance Number, it sorts fine in the 2D Viewer, as it does in the 3D Viewers.

Unfortunately this is NOT true of Horos, which fails to sort correctly in the 2D viewer or 3D viewers regardless of the sort order setting :(

I updated/submitted defect reports on Horos:

http://github.com/horosproject/horos/issues/542
http://github.com/horosproject/horos/issues/543

@fedorov
Author

fedorov commented Feb 10, 2020

can any of you suggest an open source viewer that handles enhanced DICOM robustly?

@neurolabusc I am not aware of one, but I am not surprised, considering the difficulty of even finding sample datasets. But at least for Slicer, we already have dcm2niix integrated as a plugin, so we should be able to provide this functionality to users quickly. I think it's a great opportunity to implement this support properly, considering we have David Clunie's help. I think support for regularly sampled 3D volumes is a good start, and will already be very useful.

I appreciate your updates to dcm2niix. I will get back to you as we make progress with this.

@neurolabusc
Collaborator

@dclunie the latest revision works with your samples, including the CT scans with gantry tilt. I have not yet put in support for variable slice thickness, as your tool exports these as separate files. Would it be possible for you to also show conversion of 4D MRI data - here are examples from GE/Siemens (though note Siemens XA is already enhanced)? Likewise, here is Philips data. What happens to the proprietary Siemens CSA data - is it retained for each slice (e.g. so we can infer diffusion gradients, phase encoding polarity, slice timing)? Thanks again for the useful samples.

@fedorov
Author

fedorov commented Mar 27, 2020

To make this thread more complete, we used highdicom for creating the legacy converted instances mentioned at the top of the issue. It was not publicly released at the time the discussion started, but it is now, under MIT license.
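
For illustration, here is a minimal sketch of driving such a conversion with highdicom's legacy module; the exact import path, class and parameter names may differ between highdicom versions (treat them as assumptions), and the input path is hypothetical:

```python
# Convert a single-frame CT series into a legacy converted enhanced CT instance.
from pathlib import Path

import pydicom
from pydicom.uid import generate_uid
from highdicom.legacy import LegacyConvertedEnhancedCTImage  # name/path may vary by version

# Read the classic single-frame CT series (directory path is hypothetical).
datasets = [pydicom.dcmread(p) for p in sorted(Path("ct_series").glob("*.dcm"))]

mf = LegacyConvertedEnhancedCTImage(
    legacy_datasets=datasets,
    series_instance_uid=generate_uid(),
    series_number=100,
    sop_instance_uid=generate_uid(),
    instance_number=1,
)
mf.save_as("legacy_ct.dcm")
```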

yarikoptic added a commit to neurodebian/dcm2niix that referenced this issue May 6, 2020
* tag 'v1.0.20200331': (52 commits)
  Update submodules
  Update dcm_qa submodule.
  UIH 3D sequence quirk
  New release, EstimatedTotalReadoutTime/EstimatedEffectiveEchoSpacing (rordenlab#377)
  Philips TotalReadoutTime (rordenlab#377)
  Cleanup
  Experimental Canon DICOM support (rordenlab#388)
  Experimental solution for issue 384 (rordenlab#384)
  Detect catastrophic anonymization (rordenlab#383)
  Only report "multiple inversion times" if 0018,9079 values differ (e.g. Bangalore data in https://github.com/neurolabusc/dcm_qa_philips)
  Consistent echo naming (rordenlab#381)
  Philips partial Fourier (rordenlab#377)
  Support InversionTImes (0018,9079) tag (rordenlab#380)
  Philips effective echo spacing formula ambiguous (rordenlab#377)
  TR for Philips 3D EPI (rordenlab#369)
  Citation (rordenlab#102)
  GE PET with variable slice intensity (rordenlab#374)
  Estimate Philips EffectiveEchoSpacing (nipreps/sdcflows#5)
  GE slice interpolation (rordenlab#373)
  3D EPI TR (rordenlab#369) 3D phase (rordenlab#371) Enhanced ordering (rordenlab#372 (comment))
  ...