[WP] Add F-tests #195
Conversation
As far as I can tell, F-tests should already be working at the first level (which is the only thing of any urgency to me). At higher levels, it looks like we are modeling the intercept, so that won't work.
What does it mean for F-tests to be working? What are you looking at to assess this?
Correction: just talked to Tal, and I think we've figured it out. To do a proper F-test, it needs to be done at the dataset level, not at the first level. Doing one at the first level will apply the same linear weighting as an equivalent t-test -- that is, give you the average (although the stats themselves might be of interest to someone doing single-subject stats).

So what that means is that at the second level, we need to create a design matrix which codes (as columns) the condition type of each input (rows) across subjects. In this ds003 example, there are 2 conditions and 3 subjects, which results in 6 effect images at the dataset level. For t-tests, since it's only one dimension, we simply use the intercept to indicate which input effects belong to the relevant condition. So, for example, for a t-test of words:

For F-tests this won't work, since they are 2+ dimensional. Instead, we need to code condition identity as columns in the design matrix, e.g.:
On this design matrix you could then compute one-sample t-tests:
or F-tests (here's an omnibus)
I'll update this PR with these changes.
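A minimal sketch of the dummy-coded second-level design and the contrasts described above. The condition and subject labels here are hypothetical placeholders (the actual names from ds003 are not in this thread), and this is illustrative, not the fitlins implementation:

```python
import numpy as np
import pandas as pd

# Assumed labels for the ds003 example described above: 2 conditions x 3 subjects.
conditions = ["word", "pseudoword"]   # hypothetical condition names
subjects = ["01", "02", "03"]         # hypothetical subject labels

# One row per input effect image (6 total), with condition identity
# dummy-coded as columns rather than modeled with a single intercept.
labels = [cond for _ in subjects for cond in conditions]
design = pd.get_dummies(labels).astype(int)[conditions]

# One-sample t-test on "word": weight only that column (1-dimensional).
t_contrast = np.array([1, 0])

# Omnibus F-test: one contrast row per condition, i.e. an identity matrix.
f_contrast = np.eye(len(conditions))
```

The key difference is dimensionality: the t-contrast is a single weight vector, while the F-contrast stacks one row per condition, which is why the intercept-only design cannot express it.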
@effigies what I mean is that F-tests can already be passed in at the first level. But they don't really make sense in most scenarios, as we want F-tests at the dataset level.
Codecov Report
```
@@           Coverage Diff            @@
##           master    #195     +/-  ##
========================================
- Coverage    76.4%   76.26%   -0.15%
========================================
  Files          18       18
  Lines        1017     1011       -6
  Branches      177      177
========================================
- Hits          777      771       -6
  Misses        150      150
  Partials       90       90
```
Continue to review full report at Codecov.
Hello @adelavega, thank you for updating!
Comment last updated at 2019-11-01 23:16:38 UTC
I think this will work, but it's kind of ugly. I'm basically creating the dummy-coded design matrices that I put above manually, from the unique condition names that are passed in. It seems like we should be using pybids more intelligently.

But when I look at the output of that, only one column is relevant. The other minor issue is how the weights are now created.
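For illustration, the manual construction described here might look like the following sketch. All identifiers are hypothetical stand-ins, not the actual fitlins code:

```python
import numpy as np

# Hypothetical per-input condition labels, one per effect image.
input_conditions = ["word", "pseudoword", "word",
                    "pseudoword", "word", "pseudoword"]

# Unique condition names, in first-seen order.
unique = list(dict.fromkeys(input_conditions))

# Dummy-code condition identity by hand: rows are inputs,
# columns are the unique conditions.
design = np.array([[int(cond == u) for u in unique]
                   for cond in input_conditions])
```

Each row has exactly one nonzero entry, marking which condition that input effect belongs to; the "ugly" part is doing this by hand instead of letting pybids build the design.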
Okay, cleaned that up a bit.
Fixed the docs and packaging tests.
fitlins/interfaces/nistats.py
```diff
 names = []
-for m, eff, var in zip(stat_metadata, input_effects, input_variances):
+for m, eff, var in zip(stat_metadata, input_effects):
```
```diff
-for m, eff, var in zip(stat_metadata, input_effects):
+for m, eff in zip(stat_metadata, input_effects):
```
```python
maps = model.compute_contrast(
    con_val=weights,
    contrast_type=contrast_type,
    second_level_stat_type=contrast_type,
    output_type='all')
```
These aren't valid arguments:
Oh, I guess the args differ from the standalone function.
I wonder how you pass a 2D contrast matrix then? Anyway, I'm out the rest of the week, so I'll look at it Monday.
Fixed some issues and updated the CI to use the example model from the current branch, so now we can inspect the outputs in the artifacts.
Rebased, so you should rebase/reset before continuing.
Never mind. This is a useful improvement already, and we can do more PRs.
Closes #194
Only added an example model so far