
[WIP][ENH] Confound model enhancement #1487

Closed

Conversation

@rciric (Contributor) commented Jan 28, 2019

Changes proposed in this pull request

Depends on:

Changes

  • Begin tracking confound metadata. Currently, only CompCor metadata is saved in this PR (following BIDS-Derivatives RC1, at ~desc-[a|t]compcor_decomposition.json).
  • Following Anatomical CompCor enhancement #1458, return all CompCor components in the confound matrix. Together with the CompCor metadata, this will support alternative model-selection criteria: for instance, the 50 percent variance-explained threshold proposed by Muschelli and colleagues.
  • Include a plot of the cumulative variance explained by successive components in a/tCompCor decompositions, and report the number of components required to reach 50 percent variance explained.
  • Include a plotted matrix of correlations among confound time series to assist model selection.
  • Expand certain elements of the confound model (realignment parameters, mean WM/CSF/global signals) with temporal derivative and quadratic terms.
  • Include motion outliers estimated on the basis of FD and DVARS thresholds, following Power et al., 2014. (A rough sketch of these last two items follows this list.)
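
A minimal sketch of the derivative/quadratic expansion and the spike regressors, assuming pandas inputs; the column names, default thresholds, and helper functions here are hypothetical illustrations, not the actual fMRIPrep implementation:

```python
import numpy as np
import pandas as pd

def expand_confound(series):
    """Augment a named confound series with derivative and quadratic terms."""
    deriv = series.diff()  # backward temporal difference; first row is NaN
    return pd.DataFrame({
        series.name: series,
        series.name + '_derivative1': deriv,
        series.name + '_power2': series ** 2,
        series.name + '_derivative1_power2': deriv ** 2,
    })

def spike_regressors(fd, dvars, fd_thr=0.5, dvars_thr=1.5):
    """One indicator column per volume exceeding an FD or DVARS threshold.

    The default thresholds are placeholders; see the discussion below about
    sampling rate/TR and the fMRIPrep vs. Power FD definitions.
    """
    outliers = np.flatnonzero(
        np.asarray(fd > fd_thr) | np.asarray(dvars > dvars_thr))
    spikes = pd.DataFrame(
        0, index=range(len(fd)),
        columns=['motion_outlier%02d' % i for i in range(len(outliers))])
    for col, vol in zip(spikes.columns, outliers):
        spikes.loc[vol, col] = 1
    return spikes
```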

Comments/discussion

  • Given that CI will fail (due to missing dependencies), I am happy to upload the output from running the modified pipeline on the example dataset ds005 (or any other dataset) if it would help the review process.
  • Following Update Confounds #1301, CompCor metadata could potentially be merged with confound metadata from other sources and accordingly placed into ~desc-confounds_regressors.json.
  • Spike regressors (motion outliers) currently use the hard thresholds established by Power et al., 2014, without consideration of the sampling rate/TR.
    • I believe there was also some discussion of differences between fMRIPrep's FD and Power's FD at some point; any approximate scaling factors should be determined and incorporated before this is merged. Please let me know if I can help with that process.
    • If the definition of FD changes (for instance, ENH WIP: Use geodesic distance in FramewiseDisplacement nipy/nipype#2607), then the default thresholds will also need to change accordingly.
    • It may also be worth considering whether to include a front-end option for setting thresholds or enabling/disabling motion spike regressors.
  • Motion outliers are currently saved in the confound matrix as spike regressors (one column per motion outlier, similar to non-steady-state outliers). For some applications (e.g., Lomb-Scargle interpolation), a single temporal-mask column (i.e., the union of all outlier columns) might be more useful; see the sketch after this list.
  • In its current form, this will substantially increase the number of confounds in ~desc-confounds_regressors.tsv. For high-dimensional multiband (MB) time series, CompCor-related confounds could number in the thousands, which may not be desirable.
  • There is no step to automatically remove zero-variance columns (which are completely uninformative) from the confound model. This could be implemented fairly trivially (either here or in [ENH] CompCor enhancement nipy/nipype#2859; see the sketch after this list). Such columns can occur when the dimensionality of the data is less than the dimensionality of a decomposition.
  • Is there a way to stop CI from running when I push? It's probably not useful to run it for this PR yet, given that it depends on upstream PRs. (Alternatively, I can move this discussion to an issue -- perhaps multiple issues? -- and withhold pushing here for now.)
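
Two of the points above (the union temporal mask and zero-variance-column removal) are simple to express in code. A minimal, hypothetical sketch, assuming the confounds live in a pandas DataFrame and spike columns share a common prefix:

```python
import pandas as pd

def collapse_outliers(confounds, prefix='motion_outlier'):
    """Replace per-outlier spike columns with a single temporal-mask column."""
    spike_cols = [c for c in confounds.columns if c.startswith(prefix)]
    mask = confounds[spike_cols].any(axis=1).astype(int)
    return confounds.drop(columns=spike_cols).assign(temporal_mask=mask)

def drop_zero_variance(confounds):
    """Remove constant (zero-variance) columns, which carry no information."""
    return confounds.loc[:, confounds.std(skipna=True) > 0]
```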

Documentation that should be reviewed

@oesteban (Member)

Hey @rciric, I think you might be hitting the error @effigies tried to solve with #1443. Should we integrate #1456 into this one?

@oesteban oesteban added this to In progress in pipelines Mar 6, 2019
@oesteban oesteban added the labels effort: high (Estimated high effort task) and impact: high (Estimated high impact task) Mar 13, 2019
@oesteban oesteban removed this from In progress in pipelines Mar 13, 2019
@rciric rciric force-pushed the master branch 2 times, most recently from fbae0b5 to e1fa3e4 on April 15, 2019 18:33
@oesteban (Member)

Hi @rciric,

For ds005 you are getting this error, which seems unrelated to your PR, IMHO:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/plugins/multiproc.py", line 69, in run_node
    result['result'] = node.run(updatehash=updatehash)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 473, in run
    result = self._run_interface(execute=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 557, in _run_interface
    return self._run_command(execute)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/pipeline/engine/nodes.py", line 637, in _run_command
    result = self._interface.run(cwd=outdir)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 370, in run
    runtime = self._post_run_hook(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/interfaces/segmentation.py", line 91, in _post_run_hook
    return super(ReconAllRPT, self)._post_run_hook(runtime)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/mixins/reporting.py", line 47, in _post_run_hook
    self._generate_report()
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/interfaces/report_base.py", line 134, in _generate_report
    compress=self.inputs.compress_report),
  File "/usr/local/miniconda/lib/python3.7/site-packages/niworkflows/viz/utils.py", line 389, in plot_registration
    display = plot_anat(anat_nii, **plot_params)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/plotting/img_plotting.py", line 526, in plot_anat
    vmin=vmin, vmax=vmax, cmap=cmap, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/plotting/img_plotting.py", line 303, in plot_img
    black_bg=black_bg, colorbar=colorbar, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/plotting/img_plotting.py", line 155, in _plot_img_with_bg
    img = _utils.check_niimg_3d(img, dtype='auto')
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 322, in check_niimg_3d
    return check_niimg(niimg, ensure_ndim=3, dtype=dtype)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg_conversions.py", line 271, in check_niimg
    niimg = load_niimg(niimg, dtype=dtype)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/_utils/niimg.py", line 122, in load_niimg
    niimg.affine, copy_header=True)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nilearn/image/image.py", line 639, in new_img_like
    header['scl_slope'] = 0.
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/freesurfer/mghformat.py", line 503, in __setitem__
    super(MGHHeader, self).__setitem__(item, value)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nibabel/wrapstruct.py", line 326, in __setitem__
    self._structarr[item] = value
ValueError: no field of name scl_slope

I think that you'll need to cap the maximum version of nilearn, because they are trying to set a NIfTI header field on an MGZ image. We should report this in their repo (basically, they should not assume that the image contained in a nibabel object is always NIfTI). WDYT, @effigies?
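
For reference, the guard on the nilearn side would look roughly like this (a hypothetical sketch of the kind of fix needed, not the actual patch):

```python
def set_scl_slope(header):
    """Zero out scl_slope only when the header actually defines that field."""
    try:
        header['scl_slope'] = 0.
    except ValueError:
        # Non-NIfTI headers (e.g., nibabel's MGHHeader) have no scl_slope
        # field; numpy raises "no field of name scl_slope", as seen above.
        pass
```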

Then, you are getting this other error:

Traceback (most recent call last):
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/usr/local/miniconda/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/cli/run.py", line 800, in build_workflow
    dv_spike_thr=opts.dvars_spike_threshold,
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 231, in init_fmriprep_wf
    dv_spike_thr=dv_spike_thr,
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/base.py", line 543, in init_single_subject_wf
    num_bold=len(subject_data['bold']))
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/base.py", line 465, in init_func_preproc_wf
    name='bold_confounds_wf')
  File "/usr/local/miniconda/lib/python3.7/site-packages/fmriprep/workflows/bold/confounds.py", line 214, in init_bold_confs_wf
    failure_mode='NaN'),
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/algorithms/confounds.py", line 686, in __init__
    super(TCompCor, self).__init__(*args, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/algorithms/confounds.py", line 493, in __init__
    super(CompCor, self).__init__(*args, **kwargs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/core.py", line 175, in __init__
    self.inputs = self.input_spec(**inputs)
  File "/usr/local/miniconda/lib/python3.7/site-packages/nipype/interfaces/base/specs.py", line 67, in __init__
    super(BaseTraitedSpec, self).__init__(**kwargs)
traits.trait_errors.TraitError: Cannot set the undefined 'save_metadata' attribute of a 'TCompCorInputSpec' object.

Presumably, you are not pinning niworkflows to the right commit. I think that you want to (1) resync your niworkflows PR with upstream/master, and (2) push that merge commit and use its hash for pinning. Please let me know if you need additional help with this.
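
A hedged sketch of such a pin, assuming a pip/setuptools version that supports PEP 508 direct references; the hash below is a placeholder, not the actual commit to use:

```python
# In fmriprep's setup configuration (hypothetical sketch):
install_requires = [
    # Pin niworkflows to the merge commit pushed in step (2);
    # 'abc1234' is a placeholder hash.
    'niworkflows @ git+https://github.com/poldracklab/niworkflows.git@abc1234',
]
```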

@effigies (Member)

Yeah, they're really going hard on NIfTI, and for some reason setting the slope to 0... Neither seems like a great idea to me.

@effigies (Member)

Fixed in nilearn/nilearn#1973. But I thought we restricted nilearn to <0.5 anyway, because they made changes that caused Nipype's SignalExtraction to output only integers.

@oesteban (Member)

Shoot, I did not intend to close the PR. Could you reopen it, @rciric?

@effigies (Member)

LOL. @rciric, you might want to push your local master to a new branch before doing anything. I'm not sure what we've obliterated.

@oesteban (Member)

Yep, my git log does not look anything like what is supposed to be the tip now. My last commit is 31188bd.

@effigies (Member)

As an aside, I'd recommend not making pull requests from your own master branch.

Labels: effort: high (Estimated high effort task) · impact: high (Estimated high impact task)

3 participants