
Add fixed effects - FEMA contrasts #191

Merged
adelavega merged 37 commits into poldracklab:master from enh/fixed-effects on Dec 11, 2019

Conversation

@adelavega (Collaborator) commented Oct 2, 2019

This PR depends on: nilearn/nistats#386

  • This adds fixed effects contrasts for "FEMA" contrasts in the BIDS StatsModel.
  • Pass-through for cases where there is only a single effect/variance per subject. This works automatically since nistats fixed effects can accept one input. Fixes Handle subject level when there is only 1 run #170
  • nistats fixed effects does not output p or z values, because that would require the degrees of freedom from the previous level. We may want to add this in the future, but for now these two outputs are made optional throughout the workflow (e.g. in the collate functions).

@pep8speaks commented Oct 2, 2019

Hello @adelavega, Thank you for updating!

Line 138:15: E126 continuation line over-indented for hanging indent
Line 236:21: E123 closing bracket does not match indentation of opening bracket's line
Line 242:21: E123 closing bracket does not match indentation of opening bracket's line
Line 248:21: E123 closing bracket does not match indentation of opening bracket's line

To test for issues locally, `pip install flake8` and then run `flake8 fitlins`.

Comment last updated at 2019-12-11 05:41:06 UTC

@adelavega marked this pull request as ready for review October 4, 2019 16:48
@adelavega (Collaborator, Author)

@effigies no rush on reviewing this, but unless the nistats API changes, this looks good to me.

The main point of contention is how to handle the fact that z and p stats are not output by fixed effects.

I also hard-coded fixed effects at the subject level. Maybe you will disagree with this, or it could be specified in the CLI? In any case, this should be temporary until the StatsModel catches up.

@adelavega (Collaborator, Author)

The other point of contention is that nistats fixed effects won't handle smoothing. We can do it ourselves using the masker object (and pass one in), or issue a warning. Currently it is silently ignored.

@effigies (Collaborator) left a comment

So, this will break (in a new and different way) the ds114 test-retest, where one session is contrasted against another. So at the very least we should condition it on being a pass-through intercept.

Should it also apply to session? Seems weird to do a random-effects combination at session, and then a fixed-effects at subject.

I think I would prefer that we go ahead and put this into the spec, and use the FEMA term. That way existing models continue behaving the same, and there's a way to explicitly specify fixed effects.

For smoothing, I would be inclined to use nilearn tools if reasonable, since nistats generally handles it internally.

Anyway, some review comments while I'm here.

```python
{'intercept': weights[weights != 0]})
# For now hard-coding to do FEMA at the subject level
# Pass-through happens automatically as it can handle 1 input
if self.inputs.level == 'Subject':
```
@effigies (Collaborator)

Don't all of these get lowercased by pybids?

Suggested change
```diff
-if self.inputs.level == 'Subject':
+if self.inputs.level == 'subject':
```

@adelavega (Collaborator, Author)

This is coming from model_dict, so no, it's capitalized. I'm getting this value from the for loop used in workflow creation.

@adelavega (Collaborator, Author)

I originally had it lowercase and it didn't work.

@adelavega (Collaborator, Author)

Yes, I think this could also be done at the session level...

I would also like to put it in the spec for future reproducibility, but @tyarkoni indicated this might not happen for a while, so he suggested hard-coding it for now.

About smoothing, we could very easily do what nistats does internally. They use the NiftiMasker object. We would simply have to initialize one (with the smoothing parameters) and then transform the images. The reason Bertrand chose not to support it in fixed effects is that he said it would not make sense at the subject level, so he would rather discourage it heavily. We could follow a similar principle, or leave it up to the user.
See: nilearn/nistats#386 (comment)
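A minimal sketch of that approach, assuming nilearn's NiftiMasker (the mask path, FWHM value, and file names below are placeholders, not fitlins code):

```python
from nilearn.input_data import NiftiMasker

# Placeholder inputs
effect_imgs = ['cond1_effect_size.nii.gz', 'cond2_effect_size.nii.gz']

# A masker that applies Gaussian smoothing as it loads each image;
# mask_img and smoothing_fwhm are illustrative values.
masker = NiftiMasker(mask_img='mask.nii.gz', smoothing_fwhm=5.0)
masker.fit()

# transform() masks and smooths; inverse_transform() maps back to image space
smoothed = [masker.inverse_transform(masker.transform(img))
            for img in effect_imgs]
```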

@adelavega (Collaborator, Author)

The ds114 test-retest is not part of the CircleCI tests, right? Should it be?

Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
@effigies (Collaborator) commented Oct 4, 2019

> @tyarkoni indicated this might not happen for a while, so he suggested hard-coding it for now.

Can we not just do it? It's a draft spec. If it changes in the future, we adapt.

@effigies (Collaborator) commented Oct 4, 2019

And yeah, we can add ds114 if we upload a preprocessed copy of the data somewhere. I'd prefer DataLad, but whatever.

@adelavega (Collaborator, Author)

I can host ds114 on our TACC servers.

@codecov-io commented Oct 22, 2019

Codecov Report

Merging #191 into master will increase coverage by 0.57%.
The diff coverage is 86.84%.

Impacted file tree graph

```diff
@@            Coverage Diff             @@
##           master     #191      +/-   ##
==========================================
+ Coverage   76.48%   77.05%   +0.57%
==========================================
  Files          18       18
  Lines        1029     1046      +17
  Branches      181      189       +8
==========================================
+ Hits          787      806      +19
  Misses        150      150
+ Partials       92       90       -2
```
| Flag | Coverage Δ |
| --- | --- |
| #ds003 | 77.05% <86.84%> (+0.57%) ⬆️ |

| Impacted Files | Coverage Δ |
| --- | --- |
| fitlins/interfaces/utils.py | 83.07% <75%> (+0.53%) ⬆️ |
| fitlins/interfaces/nistats.py | 82.89% <92.3%> (+2.6%) ⬆️ |
| fitlins/interfaces/bids.py | 73.33% <0%> (+0.37%) ⬆️ |

Continue to review full report at Codecov.

Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update ad0a153...8c054f3. Read the comment docs.

@adelavega (Collaborator, Author) commented Oct 22, 2019

Alright, I've updated this branch to use bids-standard/pybids#520.

It now handles reading the "FEMA" contrast type from the BIDS model.
I tested it on two models in two datasets and it seemed to work fine.
Pass-through at the Subject level using "FEMA" with only a single run per subject also worked.
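For illustration, a subject-level step requesting FEMA might look roughly like this once parsed into a Python dict (field names follow the draft BIDS Stats Models spec of the time and are illustrative, not normative):

```python
# Hypothetical subject-level step (draft-spec field names, illustrative only):
# each run-level contrast is carried up with a fixed-effects ("FEMA")
# combination instead of the default random-effects combination.
subject_step = {
    "Level": "Subject",
    "DummyContrasts": {"Type": "FEMA"},
}
```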

(We also have a PR on Neuroscout to make these models)

So all we really need now is to hammer out whether my proposed changes to the spec draft are OK.
No rush to review this @effigies, but it's ready.

cc @tyarkoni: when you get a chance, can you review the changes to the draft spec on Google Docs? There's no real urgency though.

@effigies (Collaborator) commented Dec 4, 2019

Cancelling the job because it's broken.

@adelavega (Collaborator, Author) commented Dec 5, 2019

@effigies if tests pass, this is ready for a final review.

To explain a bit more: previously we were computing a separate level-2 model for each contrast, but this is not necessary.

Instead, you can make a dummy-coded design matrix that represents the identity of each effect_map:

| effect_file | cond1 | cond2 |
| --- | --- | --- |
| cond1_effect_size.nii.gz | 1 | 0 |
| cond2_effect_size.nii.gz | 0 | 1 |
| cond1_effect_size.nii.gz | 1 | 0 |
| cond2_effect_size.nii.gz | 0 | 1 |

You then just translate the weights (represented as dicts) to match the columns:

`{'name': 'cond1', 'weights': [1, 0], 'type': 't'}`

This works for F-tests too:

`{'name': 'omnibus', 'weights': [[1, 0], [0, 1]], 'type': 'F'}`

The weights argument is passed to nistats as `second_level_contrast`.
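As a minimal sketch of that flow against the nistats API of the time (file names and weights are the placeholder values from the table above):

```python
import pandas as pd
from nistats.second_level_model import SecondLevelModel

# One effect map per input, matching the rows of the table above
effect_files = ['cond1_effect_size.nii.gz', 'cond2_effect_size.nii.gz',
                'cond1_effect_size.nii.gz', 'cond2_effect_size.nii.gz']

# Dummy-coded design matrix identifying each effect map
design_matrix = pd.DataFrame({'cond1': [1, 0, 1, 0],
                              'cond2': [0, 1, 0, 1]})

# Fit a single model for all inputs
model = SecondLevelModel().fit(effect_files, design_matrix=design_matrix)

# Contrast weights map onto the design-matrix columns
t_map = model.compute_contrast(second_level_contrast=[1, 0],
                               second_level_stat_type='t')
f_map = model.compute_contrast(second_level_contrast=[[1, 0], [0, 1]],
                               second_level_stat_type='F')
```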

@adelavega (Collaborator, Author)

Realized that the same logic was not being applied for FEMA contrasts. For those, it's typically a Dummy contrast, but in theory it could be more than that, something like: weights: [1, 1, 0, 0].

That is, you could do a meta-analysis across multiple effects. Maybe it's rare, but it's not forbidden.

So I added some (admittedly convoluted) logic to index the input effect and variance files based on the contrast weights. If you see a more elegant way to do it, feel free to suggest it (but I'm trying to work within the constraint of making the prepare_contrasts function work for both level 1 and 2).
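A sketch of that indexing idea, assuming effect and variance file lists ordered like the design-matrix rows (all names here are illustrative, not the actual fitlins implementation):

```python
import numpy as np

# Illustrative inputs, ordered to match the design-matrix rows
effect_files = ['cond1_effect.nii.gz', 'cond2_effect.nii.gz',
                'cond3_effect.nii.gz', 'cond4_effect.nii.gz']
variance_files = ['cond1_var.nii.gz', 'cond2_var.nii.gz',
                  'cond3_var.nii.gz', 'cond4_var.nii.gz']

# A FEMA contrast pooling the first two effects
weights = np.array([1, 1, 0, 0])

# Keep only the inputs that the contrast actually weights
keep = np.flatnonzero(weights)
fema_effects = [effect_files[i] for i in keep]
fema_variances = [variance_files[i] for i in keep]
```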

@adelavega (Collaborator, Author)

ping @effigies

```python
# Fit single model for all inputs
model.fit(filtered_effects, design_matrix=design_matrix)
# Only fit model if any non-FEMA contrasts at this level
if any([c['type'] != 'FEMA' for c in self.inputs.contrast_info]):
```
@effigies (Collaborator)

Suggested change
```diff
-if any([c['type'] != 'FEMA' for c in self.inputs.contrast_info]):
+if any(c['type'] != 'FEMA' for c in self.inputs.contrast_info):
```

adelavega and others added 3 commits December 10, 2019 16:23
Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
@adelavega (Collaborator, Author)

What happened to the ignore argument in the LoadBIDSModel input spec?

```
   load_entry_point('neuroscout-cli', 'console_scripts', 'neuroscout')()
  File "/src/neuroscout/neuroscout_cli/cli.py", line 61, in main
    command(deepcopy(args)).run()
  File "/src/neuroscout/neuroscout_cli/commands/run.py", line 57, in run
    retcode = run_fitlins(fitlins_args)
  File "/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/fitlins/cli/run.py", line 236, in run_fitlins
    smoothing=opts.smoothing, drop_missing=opts.drop_missing,
  File "/opt/miniconda-latest/envs/neuro/lib/python3.6/site-packages/fitlins/workflows/base.py", line 43, in init_fitlins_wf
    loader.inputs.ignore = ignore
traits.trait_errors.TraitError: Cannot set the undefined 'ignore' attribute of a 'LoadBIDSModelInputSpec' object.
```

@effigies (Collaborator)

It's now part of the initial layout build.
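Presumably along these lines, with exclusions applied when pybids indexes the dataset rather than on the interface afterwards (a sketch of the idea, not the actual fitlins call site; the path and patterns are placeholders):

```python
from bids import BIDSLayout

# Hypothetical example: ignore patterns are passed at layout construction,
# so excluded files are never indexed in the first place.
layout = BIDSLayout('/data/bids', ignore=['derivatives', 'sourcedata'])
```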

@adelavega (Collaborator, Author)

Okay, tested this with #202 merged in, and my model is running!

This is ready to merge as far as I'm concerned.

@effigies (Collaborator) left a comment

Assuming the setup.cfg is more recent...

@effigies (Collaborator)

Can't commit for you. Feel free to merge once the dependency specifications are made consistent.

Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
@adelavega (Collaborator, Author)

Will do. I'm gonna have to bug you to do another release soon, though!

@adelavega merged commit 6ad289d into poldracklab:master on Dec 11, 2019
@adelavega deleted the enh/fixed-effects branch December 11, 2019 07:11
Successfully merging this pull request may close these issues.

Handle subject level when there is only 1 run