
Inferenced segmentation.dcm cannot be imported to Eclipse #6

Closed
ant0nsc opened this issue Mar 30, 2022 · 3 comments · Fixed by #16
Labels: bug (Something isn't working), usage blocker (Bugs that block InnerEye users)

Comments

@ant0nsc
Contributor

ant0nsc commented Mar 30, 2022

Discussed in microsoft/InnerEye-DeepLearning#687

Originally posted by furtheraway March 5, 2022
Hi all,

I am walking through the model training and inference with the example lung segmentation dataset: https://github.com/microsoft/InnerEye-DeepLearning/blob/main/docs/sample_tasks.md

The model training succeeded on Azure ML.

Then I zipped up the DICOM files of the second test subject into a zip file:
C:\Users\xxxx\Downloads\IE_lung_dataset\manifest-1557326747206\LCTSC\LCTSC-Test-S1-102\11-04-2003-NA-RTRCCTTHORAX8FHigh Adult-20444\0.000000-CT114545RespCT 3.0 B30f 50 Ex-81163

zip -r patS1102.zip 0.000000-CT114545RespCT\ \ 3.0\ \ B30f\ \ 50\ Ex-81163/
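
As an aside, roughly the same archive can be produced with Python's standard-library zipfile module, which avoids escaping the double spaces in the directory name (a sketch; the directory name is copied from the path above and may need adjusting):

# Sketch: package the DICOM series without shell escaping.
import zipfile
from pathlib import Path

# Adjust to the actual series directory quoted above.
series_dir = Path("0.000000-CT114545RespCT  3.0  B30f  50 Ex-81163")

with zipfile.ZipFile("patS1102.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    for dcm in sorted(series_dir.glob("*.dcm")):
        # Store each slice under the directory name, as "zip -r" does.
        zf.write(dcm, arcname=f"{series_dir.name}/{dcm.name}")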

Then I submitted it with submit_for_inference.py:

python InnerEye/Scripts/submit_for_inference.py --image_file /home/xxxx/datasets/inference_dicom/patS1102.zip --model_id LungExample:1 --download_folder /home/xxxx/source/2022/download/ --use_dicom true

The inference run finished successfully and I got the segmentation.dcm file. But when I tried to import this segmentation.dcm file into Eclipse, I ran into the following error:

[Screenshot: Eclipse error dialog shown when importing segmentation.dcm]

I can import the original DICOM files for this subject without any issue.


AB#5453

@ant0nsc ant0nsc transferred this issue from microsoft/InnerEye-DeepLearning Mar 30, 2022
@ant0nsc
Contributor Author

ant0nsc commented Apr 26, 2022

@peterhessey FYI

@peterhessey peterhessey added bug Something isn't working usage blocker Bugs that block InnerEye users labels May 19, 2022
@peterhessey
Contributor

Additional details from previous e-mail chain:

"It looks like the problem is near the end of the file.

The dumps show the data twice, first as hex, then as ASCII. Each pair of lines is prefixed with the byte offset.

After the end of the DICOM data set, the original file has extra 00 bytes, as shown in the snippets below. These were cut and pasted from the hex dump files and had the byte offsets corrected.

If you want to test it, you can use DICOM+ to read it in and see if it shows any errors.

Original file:
1232651  65  74  65  72  5e  fe  ff  0d  e0  00  00  00  00  fe  ff  dd  e0  00  00  00  00  **00  00  00  00  00  00  00  00  00  00  00  00  00  00  00  00**
          e   t   e   r   ^   ~ del  cr   ` nul nul nul nul   ~ del   ]   ` nul nul nul nul **nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul nul**

Fixed file:
1232677  65  74  65  72  5e  fe  ff  0d  e0  00  00  00  00  fe  ff  dd  e0  00  00  00  00
          e   t   e   r   ^   ~ del  cr   ` nul nul nul nul   ~ del   ]   ` nul nul nul nul

"

It was recommended that the data between the pairs of asterisks (**) should be removed (asterisks added manually).
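
For reference, a minimal sketch of that recommendation (this is not necessarily how #16 fixed it, and it assumes the padding after the delimiter is the only problem with the file):

# Sketch: truncate everything after the final Sequence Delimitation Item
# (FFFE,E0DD), i.e. after the bytes fe ff dd e0 00 00 00 00.
from pathlib import Path

DELIM = bytes.fromhex("feffdde000000000")

def strip_trailing_padding(src: str, dst: str) -> None:
    data = Path(src).read_bytes()
    pos = data.rfind(DELIM)
    if pos == -1:
        raise ValueError("no Sequence Delimitation Item found")
    Path(dst).write_bytes(data[:pos + len(DELIM)])

strip_trailing_padding("segmentation.dcm", "segmentation_fixed.dcm")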

@ant0nsc
Contributor Author

ant0nsc commented Jun 8, 2022

Adding @furtheraway, who originally posted the question, for visibility.
