
low-resolution T2w data mislabeled as T1w causes workflow to fail #58

Closed

wtriplett opened this issue Mar 27, 2016 · 4 comments

@wtriplett

This seems like a low-priority issue because there is enough information in the processing outputs for the user to learn the cause and fix it, but it might be a nice feature for the workflow to continue processing the properly labeled files through to completion, if that is an easy fix.


Running mriqc on a dataset where (in this case) low-res T2-weighted images were incorrectly labeled as anatomical T1-weighted causes the workflow to fail.

mriqc (bc34246) was run on the reduced dataset shown below as:

mriqc -i /home1/03872/wtriplet/openfmri/HISTORICAL_DATA_TO_BIDS/999_AllUncompressed/ds000031/ds031_ses-011-010-015_only -w /scratch/03872/wtriplet/ds031_reduced.qcpwrk -o /scratch/03872/wtriplet/ds031_reduced.qcpout --skip-functional --nthreads 1

[screenshot: screen shot 2016-03-27 at 7 12 26 am]

However, runs 001-007 for ses-001 were low-resolution T2w mislabeled as T1w:

[screenshot: screen shot 2016-03-27 at 7 10 52 am]

This results in the following output in mriqc's -w and -o directories. The resulting .csv file shows that measures are computed only for the good T1s (ses-010 and ses-015).

[screenshot: screen shot 2016-03-27 at 7 17 43 am]

The contents of one of the pickles:

File: crash-20160327-075755-wtriplet-skullstrip.a2.pklz
Node: aMRIQC.SkullStripWorkflow.skullstrip.a2
Working directory: /scratch/03872/wtriplet/ds031_reduced.qcpwrk/aMRIQC/SkullStripWorkflow/_data_sub-01.ses-001.T1w_run-003...home1..03872..wtriplet..openfmri..HISTORICAL_DATA_TO_BIDS..999_AllUncompressed..ds000031..ds031_ses-011-010-015_only..sub-01..ses-001..anat..sub-01_ses-001_run-003_T1w.nii.gz/skullstrip


Node inputs:

args = <undefined>
environ = {}
ignore_exception = False
in_file = /scratch/03872/wtriplet/ds031_reduced.qcpwrk/aMRIQC/SkullStripWorkflow/_data_sub-01.ses-001.T1w_run-003...home1..03872..wtriplet..openfmri..HISTORICAL_DATA_TO_BIDS..999_AllUncompressed..ds000031..ds031_ses-011-010-015_only..sub-01..ses-001..anat..sub-01_ses-001_run-003_T1w.nii.gz/skullstrip/sub-01_ses-001_run-003_T1w_resample_corrected.nii.gz
out_file = <undefined>
outputtype = NIFTI_GZ
terminal_output = stream



Traceback:
Traceback (most recent call last):
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/plugins/linear.py", line 39, in run
    node.run(updatehash=updatehash)
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/engine/nodes.py", line 392, in run
    self._run_interface()
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/engine/nodes.py", line 502, in _run_interface
    self._result = self._run_command(execute)
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/engine/nodes.py", line 628, in _run_command
    result = self._interface.run()
  File "/home1/03872/wtriplet/software/nipype/nipype/interfaces/base.py", line 1032, in run
    runtime = self._run_wrapper(runtime)
  File "/home1/03872/wtriplet/software/nipype/nipype/interfaces/base.py", line 1460, in _run_wrapper
    runtime = self._run_interface(runtime)
  File "/home1/03872/wtriplet/software/nipype/nipype/interfaces/afni/base.py", line 127, in _run_interface
    return super(AFNICommandBase, self)._run_interface(runtime)
  File "/home1/03872/wtriplet/software/nipype/nipype/interfaces/base.py", line 1494, in _run_interface
    self.raise_exception(runtime)
  File "/home1/03872/wtriplet/software/nipype/nipype/interfaces/base.py", line 1418, in raise_exception
    raise RuntimeError(message)
RuntimeError: Command:
3dSkullStrip -input /scratch/03872/wtriplet/ds031_reduced.qcpwrk/aMRIQC/SkullStripWorkflow/_data_sub-01.ses-001.T1w_run-003...home1..03872..wtriplet..openfmri..HISTORICAL_DATA_TO_BIDS..999_AllUncompressed..ds000031..ds031_ses-011-010-015_only..sub-01..ses-001..anat..sub-01_ses-001_run-003_T1w.nii.gz/skullstrip/sub-01_ses-001_run-003_T1w_resample_corrected.nii.gz -prefix sub-01_ses-001_run-003_T1w_resample_corrected_skullstrip.nii.gz
Standard output:

Standard error:
** ERROR: Too few slices (< 16) in at least one dimension.
**ERROR: normalization fails!?
Return code: 1
Interface SkullStrip failed to run.
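
For reference, these crashfiles are gzip-compressed pickles, so they can be dumped without any nipype-specific tooling. A minimal sketch (standard library only; the filename below is the one from this run, adjust the path to wherever the -w log directory lives):

import gzip
import pickle
from pprint import pprint

# Filename taken from this run; point it at the crash-*.pklz of interest.
crashfile = 'crash-20160327-075755-wtriplet-skullstrip.a2.pklz'

with gzip.open(crashfile, 'rb') as fobj:
    record = pickle.load(fobj)

# The record is typically a dict holding the failed node and its traceback.
pprint(record)

(nipype may also ship a helper for displaying crashfiles; the above is just the quickest manual route.)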

and I attach the compressed execution log for the whole run:

ds031_ses-011-010-015_only.execout.gz

which starts off like this:

Launcher: Task 0 running job 1 on nid00183 (mriqc -i /home1/03872/wtriplet/openfmri/HISTORICAL_DATA_TO_BIDS/999_AllUncompressed/ds000031/ds031_ses-011-010-015_only -w /scratch/03872/wtriplet/ds031_reduced.qcpwrk -o /scratch/03872/wtriplet/ds031_reduced.qcpout --skip-functional --nthreads 1)
160327-07:57:37,34 interface WARNING:
     AFNI is outdated, detected version AFNI_16.0.00 and AFNI_16.0.18 is available.
160327-07:57:37,275 interface WARNING:
     AFNI is outdated, detected version AFNI_16.0.00 and AFNI_16.0.18 is available.
160327-07:57:37,511 interface WARNING:
     AFNI is outdated, detected version AFNI_16.0.00 and AFNI_16.0.18 is available.
160327-07:57:37,753 interface WARNING:
     AFNI is outdated, detected version AFNI_16.0.00 and AFNI_16.0.18 is available.
[ etc. and so forth... ]

and ends like this:

[ ... ]
160327-08:27:26,797 workflow ERROR:
     could not run node: aMRIQC.SkullStripWorkflow.skullstrip.a6
160327-08:27:26,797 workflow INFO:
     crashfile: /scratch/03872/wtriplet/ds031_reduced.qcpwrk/log/crash-20160327-080833-wtriplet-skullstrip.a6.pklz
160327-08:27:26,797 workflow ERROR:
     could not run node: aMRIQC.SkullStripWorkflow.skullstrip.a0
160327-08:27:26,797 workflow INFO:
     crashfile: /scratch/03872/wtriplet/ds031_reduced.qcpwrk/log/crash-20160327-081126-wtriplet-skullstrip.a0.pklz
160327-08:27:26,797 workflow INFO:
     ***********************************
Traceback (most recent call last):
  File "/work/03872/wtriplet/lonestar/anaconda/bin/mriqc", line 9, in <module>
    load_entry_point('mriqc', 'console_scripts', 'mriqc')()
  File "/home1/03872/wtriplet/software/mriqc/mriqc/run_mriqc.py", line 114, in main
    awf.run(**plugin_settings)
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/engine/workflows.py", line 595, in run
    runner.run(execgraph, updatehash=updatehash, config=self.config)
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/plugins/linear.py", line 57, in run
    report_nodes_not_run(notrun)
  File "/home1/03872/wtriplet/software/nipype/nipype/pipeline/plugins/base.py", line 93, in report_nodes_not_run
    raise RuntimeError(('Workflow did not execute cleanly. '
RuntimeError: Workflow did not execute cleanly. Check log for details
Launcher: Job 1 completed in 1914 seconds.
Launcher: Task 0 done. Exiting.
Launcher: Done. Job exited without errors

Let me know if I can provide any additional details!
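
As a side note, the condition 3dSkullStrip trips on here ("Too few slices (< 16) in at least one dimension") is easy to screen for up front. A minimal sketch, assuming nibabel is installed and using an illustrative glob pattern for this reduced dataset:

import glob

import nibabel as nb

# Illustrative glob for the anat images of this reduced dataset.
pattern = 'ds031_ses-011-010-015_only/sub-01/ses-*/anat/*_T1w.nii.gz'

for fname in sorted(glob.glob(pattern)):
    shape = nb.load(fname).shape[:3]
    # 3dSkullStrip refuses volumes with fewer than 16 slices in any dimension.
    if min(shape) < 16:
        print('too low-res for skull stripping: %s %s' % (fname, shape))

In this dataset, the volumes flagged this way would be the mislabeled low-resolution runs.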

@oesteban (Member) commented May 6, 2016

I guess there is not much to do about it. The only thing I can think of is that we create a report for these cases, indicating that the subject was not processed and including the mosaic view so the user can see whether it was a T2w instead of a T1w.

Another idea would be to make a fallback interface, but this would require some changes in nipype. When skullstrip fails, we would try FSL's BET, and if that also fails, we would try skullstrip again with settings for T2w images (I don't know if that is possible; if not, the same but using BET).

What do you think, @chrisfilo? I'm trying to look at this from the perspective of robustness.
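
A rough sketch of what that fallback could look like at the interface level, just to make the idea concrete (this is not how mriqc is wired today, and as noted a proper node-level retry would need changes in nipype; the helper function is made up, the interfaces are the standard nipype ones):

from nipype.interfaces import afni, fsl

def skullstrip_with_fallback(in_file):
    """Hypothetical helper: try AFNI 3dSkullStrip, fall back to FSL BET."""
    try:
        res = afni.SkullStrip(in_file=in_file, outputtype='NIFTI_GZ').run()
    except RuntimeError:
        # 3dSkullStrip bailed out (e.g. 'Too few slices (< 16)'); try BET instead.
        # A further attempt with T2w-oriented settings could be chained the same way.
        res = fsl.BET(in_file=in_file).run()
    return res.outputs.out_file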

@oesteban (Member) commented May 6, 2016

Actually, the mosaic of failed cases is probably more interesting than that of correct cases.

oesteban self-assigned this May 6, 2016
@chrisgorgo (Collaborator)
I agree that providing the mosaic for the failed cases is the best we can do for mislabeled data. We should also strive to process each run independently, so that if one fails, information from the others would still be available.
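
For context, the usual nipype pattern for that independence is to iterate the input node over the individual runs, so each run expands into its own branch (this is essentially where the skullstrip.a0/.a2/.a6 node names in the log above come from). A minimal sketch, with the file list and node names being illustrative:

import nipype.pipeline.engine as pe
from nipype.interfaces import afni, utility as niu

# Illustrative list: one entry per anatomical run.
anat_files = ['sub-01_ses-010_run-001_T1w.nii.gz',
              'sub-01_ses-015_run-001_T1w.nii.gz']

wf = pe.Workflow(name='anat_qc')
# Default behaviour already, but make it explicit: do not stop at the first crash.
wf.config['execution'] = {'stop_on_first_crash': 'false'}

inputnode = pe.Node(niu.IdentityInterface(fields=['in_file']), name='inputnode')
inputnode.iterables = ('in_file', anat_files)  # expands into one branch per run

skullstrip = pe.Node(afni.SkullStrip(outputtype='NIFTI_GZ'), name='skullstrip')
wf.connect(inputnode, 'in_file', skullstrip, 'in_file')

wf.run()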

@oesteban (Member) commented May 6, 2016

Ok, the runs are independent in terms of processing. I can't tell for the reporting side. I'll spend a while this weekend on these issues (I will be including them in #98).

oesteban closed this as completed May 6, 2016