Add fixed effects - FEMA contrasts #191
Conversation
@effigies no rush on reviewing this, but unless the nistats API changes, this looks good to me. The main point of contention is that I also hard-coded fixed effects at the subject level. Maybe you will disagree with this? Or it could be specified in the CLI? In any case, this should be temporary until the StatsModel spec catches up.
The other point of contention is that nistats fixed effects won't handle smoothing. We can do it ourselves using the masker object (and pass one in), or give a warning. Currently it is silently ignored.
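Since the nistats fixed-effects path won't apply smoothing, doing it ourselves would amount to something like the sketch below. This uses scipy directly instead of a nilearn masker purely to keep the example self-contained; the function name and toy volume are invented for illustration, and fitlins does not currently do this.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Conversion factor between a Gaussian kernel's FWHM and its sigma
FWHM_TO_SIGMA = 2 * np.sqrt(2 * np.log(2))  # ~2.3548

def smooth_fwhm(data, fwhm_mm, voxel_size_mm):
    """Gaussian-smooth a 3D array by an FWHM specified in millimeters."""
    sigma_vox = (fwhm_mm / FWHM_TO_SIGMA) / voxel_size_mm
    return gaussian_filter(data, sigma=sigma_vox)

# Toy volume: a single unit impulse in a 16x16x16 grid of 3 mm voxels
vol = np.zeros((16, 16, 16))
vol[8, 8, 8] = 1.0
smoothed = smooth_fwhm(vol, fwhm_mm=6.0, voxel_size_mm=3.0)
```

The total signal is preserved while the peak spreads out, which is the behavior a masker-based implementation would need to reproduce.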
So, this will break (in a new and different way) the ds114 test-retest, where one session is contrasted against another. So at the very least we should condition it on being a pass-through intercept.
Should it also apply to session? Seems weird to do a random-effects combination at session, and then a fixed-effects at subject.
I think I would prefer that we go ahead and put this into the spec, and use the FEMA term. That way existing models continue behaving the same, and there's a way to explicitly specify fixed effects.
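For concreteness, a model block carrying an explicit FEMA contrast might look roughly like this, written as the Python dict pybids would parse. The field names follow the draft-era BIDS Stats Models conventions and are illustrative, not authoritative:

```python
# Hypothetical subject-level block of a BIDS Stats Model (draft-era field
# names; illustrative only, not the finalized spec)
subject_step = {
    "Level": "Subject",
    "Contrasts": [
        {
            "Name": "intercept",
            "ConditionList": ["intercept"],
            "Weights": [1],
            "Type": "FEMA",  # fixed-effects combination across runs
        }
    ],
}
```

With an explicit `Type` field, existing models that omit it keep their current random-effects behavior.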
For smoothing, I would be inclined to use nilearn tools if reasonable, since nistats generally handles it internally.
Anyway, some review comments while I'm here.
fitlins/interfaces/nistats.py (outdated)

```python
    {'intercept': weights[weights != 0]})
# For now hard-coding to do FEMA at the subject level
# Pass-through happens automatically as it can handle 1 input
if self.inputs.level == 'Subject':
```
Don't all of these get lowercased by pybids?
```diff
-if self.inputs.level == 'Subject':
+if self.inputs.level == 'subject':
```
This is coming from model_dict, so no, it's uppercase. I'm getting this value from the for loop used in workflow creation.
I originally had it lowercase and it didn't work.
Yes, I think this could also be done at the session level. I would also like to put it in the spec for future reproducibility, but @tyarkoni indicated this might not happen for a while, so he suggested hard-coding it for now. About smoothing, we could very easily do it ourselves.
The ds114 test-retest is not part of the CircleCI tests, right? Should it be?
Co-Authored-By: Chris Markiewicz <effigies@gmail.com>
Can we not just do it? It's a draft spec. If it changes in the future, we adapt.
And yeah, we can add ds114, if we upload a preprocessed copy of the data somewhere. I'd prefer with datalad, but whatever.
I can host ds114 on our TACC servers.
Codecov Report
```diff
@@            Coverage Diff             @@
##           master     #191      +/-   ##
==========================================
+ Coverage   76.48%   77.05%   +0.57%
==========================================
  Files          18       18
  Lines        1029     1046      +17
  Branches      181      189       +8
==========================================
+ Hits          787      806      +19
  Misses        150      150
+ Partials       92       90       -2
```

Continue to review the full report at Codecov.
Alright, I've updated this branch to use bids-standard/pybids#520. It now handles reading the "FEMA" contrast type from the BIDS model. (We also have a PR on Neuroscout to make these models.) So all we really need now is to hammer out whether my proposed changes to the spec draft are OK.

cc @tyarkoni: when you get a chance, can you review the changes to the draft spec on Google Docs? There's no real urgency, though.
Force-pushed from 46a044d to cc018d6.
Cancelling the job because it's broken.
@effigies if tests pass, this is ready for a final review.

To explain a bit more: previously we were computing a separate l2 model for each contrast, but this is not necessary. Instead, you can make a dummy-coded design matrix that represents the identity of each input. You then just translate the weights (represented as dicts) to match the columns. This works for F-tests too. The weights argument is then passed to nistats as the second-level contrast.
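The dummy-coding step described above can be sketched roughly as follows. The input labels, weight dict, and variable names are invented for illustration and are not fitlins' actual code:

```python
import numpy as np
import pandas as pd

# Hypothetical first-level inputs, labeled by the effect each map carries
# (names are made up for illustration)
input_source = ['faces', 'faces', 'places', 'places']

# Dummy-code the identity of each input into a single design matrix:
# one column per effect, 1 where the input carries that effect
design_matrix = pd.get_dummies(input_source, dtype=float)

# A weight dict from the model spec, translated to the matrix columns
weights = {'faces': 1, 'places': -1}
t_contrast = np.array([weights.get(col, 0) for col in design_matrix.columns])

# An F-test is the same idea with a 2-D weight array (one row per constraint)
f_contrast = np.eye(len(design_matrix.columns))
```

A single model fit against this design matrix can then serve every contrast at the level, instead of one model per contrast.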
Realized that the same logic was not being applied for the effect and variance inputs. That is, you could do a meta-analysis across multiple effects. Maybe it's rare, but it's not forbidden. So I added some (admittedly convoluted) logic to index the input effect and variance files based on the contrast weights. If you see a more elegant way to do it, feel free to suggest (but I'm trying to work within the constraint of making the inputs line up with the design matrix).
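The indexing described above might look something like this sketch, where condition and file names are made up and the real implementation surely differs:

```python
# Hypothetical per-input metadata: which effect each file pair belongs to
names = ['faces', 'places', 'words']
effect_files = ['faces_effect.nii.gz', 'places_effect.nii.gz', 'words_effect.nii.gz']
variance_files = ['faces_var.nii.gz', 'places_var.nii.gz', 'words_var.nii.gz']

# A FEMA contrast only combines the inputs it weights nonzero
weights = {'faces': 1, 'places': 1}

# Index the effect and variance files by the contrast weights
keep = [i for i, name in enumerate(names) if weights.get(name, 0) != 0]
filtered_effects = [effect_files[i] for i in keep]
filtered_variances = [variance_files[i] for i in keep]
```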
ping @effigies
fitlins/interfaces/nistats.py (outdated)

```python
# Fit single model for all inputs
model.fit(filtered_effects, design_matrix=design_matrix)
# Only fit model if any non-FEMA contrasts at this level
if any([c['type'] != 'FEMA' for c in self.inputs.contrast_info]):
```
```diff
-if any([c['type'] != 'FEMA' for c in self.inputs.contrast_info]):
+if any(c['type'] != 'FEMA' for c in self.inputs.contrast_info):
```
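Dropping the square brackets lets `any()` consume a generator and short-circuit at the first match, instead of building the whole list first. A toy illustration with invented contrast data:

```python
# Invented contrast_info entries, mimicking the shape used in the snippet
contrast_info = [{'type': 't'}, {'type': 'FEMA'}, {'type': 'F'}]

# Generator expression: any() stops at the first non-FEMA entry
has_non_fema = any(c['type'] != 'FEMA' for c in contrast_info)
```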
What happened to the
It's now part of the initial layout build. |
Okay, tested this with #202 merged in, and my model is running! This is ready to merge as far as I'm concerned.
Assuming the setup.cfg is more recent...
Can't commit for you. Feel free to merge once the dependency specifications are made consistent.
Will do. I'm gonna have to bug to do another release soon, though!
This PR depends on: nilearn/nistats#386