
[WP] Add F-tests #195

Merged: 9 commits into poldracklab:master on Nov 2, 2019

Conversation

@adelavega (Collaborator)

Closes #194

Only added an example model so far

@adelavega (Collaborator, Author)

As far as I can tell, F-tests should already work at the first level (which is the only thing of any urgency to me).

At higher levels, it looks like we are modeling the intercept, so that won't work.

@effigies (Collaborator)

What does it mean for F-tests to be working? What are you looking at to assess this?

@adelavega (Collaborator, Author)

Correction: I just talked to Tal, and I think we've figured it out.

To do a proper F-test, it needs to be done at the dataset level, not at the first level. Doing one at the first level applies the same linear weighting as an equivalent t-test; that is, it gives you the average (although the stats themselves might be of interest to someone doing single-subject stats).

So what that means is that at the second level, we need to create a design matrix that codes (as columns) the condition type of each input (rows) across subjects.

In this ds003 example, there are 2 conditions and 3 subjects. That results in 6 effect images at the dataset level.

For t-tests, since it's only 1-dimensional, we can simply use the intercept to indicate which input effects belong to the relevant condition. For example, for a t-test of words:

file_name        intercept
sub1_word        1
sub2_word        1
sub3_word        1
sub1_pseudoword  0
sub2_pseudoword  0
sub3_pseudoword  0

For F-tests, this won't work, since they are 2+ dimensional. Instead, we need to code condition identity as columns in the design matrix, e.g.:

file_name        word  pseudoword
sub1_word        1     0
sub2_word        1     0
sub3_word        1     0
sub1_pseudoword  0     1
sub2_pseudoword  0     1
sub3_pseudoword  0     1

On this design matrix you could then compute one-sample t-tests:

model.compute_contrast('word', type='t')

or:

model.compute_contrast([1, 0], type='t')

or F-tests (here's an omnibus):

model.compute_contrast([[1, 0], [0, 1]], type='F')
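The second-level layout above can be sketched with pandas (a minimal sketch: the file names, the `_`-based parsing, and the alphabetical column order are illustrative assumptions, not FitLins' actual internals):

```python
import numpy as np
import pandas as pd

# Hypothetical effect images: one per subject x condition
file_names = ['sub1_word', 'sub2_word', 'sub3_word',
              'sub1_pseudoword', 'sub2_pseudoword', 'sub3_pseudoword']
conditions = [name.split('_', 1)[1] for name in file_names]

# Dummy-code condition identity as design-matrix columns
design_matrix = pd.get_dummies(conditions).astype(int)
design_matrix.index = file_names
# Columns come out alphabetically: ['pseudoword', 'word']

# A t-contrast is one row over the columns; an F-contrast stacks rows
t_word = np.array([0, 1])   # one-sample t-test of 'word'
f_omnibus = np.eye(2)       # omnibus F-test across both conditions
```

Note that once conditions are coded as columns, the contrast weights index columns, not input rows, which is what makes multi-row F-contrasts expressible.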

I'll update this PR with these changes.

@adelavega (Collaborator, Author)

@effigies what I mean is that F-tests can already be passed in at the first level. But they don't really make sense in most scenarios, as we want F-tests at the dataset level.

@codecov-io commented Oct 30, 2019

Codecov Report

Merging #195 into master will decrease coverage by 0.14%.
The diff coverage is 66.66%.


@@            Coverage Diff             @@
##           master     #195      +/-   ##
==========================================
- Coverage    76.4%   76.26%   -0.15%     
==========================================
  Files          18       18              
  Lines        1017     1011       -6     
  Branches      177      177              
==========================================
- Hits          777      771       -6     
  Misses        150      150              
  Partials       90       90
Flag     Coverage Δ
#ds003   76.26% <66.66%> (-0.15%) ⬇️

Impacted Files                  Coverage Δ
fitlins/interfaces/bids.py      74.1% <ø> (ø) ⬆️
fitlins/workflows/base.py       60.43% <ø> (ø) ⬆️
fitlins/interfaces/nistats.py   80.91% <66.66%> (-0.84%) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 47165d4...5a6bda4

@pep8speaks commented Oct 30, 2019

Hello @adelavega, Thank you for updating!

Line 202:11: E121 continuation line under-indented for hanging indent

To test for issues locally, pip install flake8 and then run flake8 fitlins.

Comment last updated at 2019-11-01 23:16:38 UTC

@adelavega (Collaborator, Author)

I think this will work, but it's kind of ugly. I'm basically creating the dummy-coded design matrices I put above manually, from the unique condition names that are passed in as stat_metadata.

It seems like we should be using pybids more intelligently:

analysis.steps[-1].get_design_matrix

But when I look at the output of that, the only relevant column is condition (which I'd then dummy-code), and I can already get that from names.

The other minor issue is that the weights are now created based on set(names), because the contrast weights are now relative to the design matrix columns, not the rows (like before). But for #191 the "weights" for fixed effects are still rows, so we'll have to integrate that.
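The column-versus-row alignment can be illustrated with a small sketch (the `names` list and `weight_spec` dict are hypothetical; this shows only the alignment logic, not the actual FitLins code):

```python
# Condition name per input image, as might arrive via stat_metadata
names = ['word', 'word', 'word', 'pseudoword', 'pseudoword', 'pseudoword']

# Design-matrix columns are the unique conditions (sorted for determinism)
columns = sorted(set(names))              # ['pseudoword', 'word']

# A per-condition contrast spec must be re-expressed over those columns,
# not over the input rows as in the fixed-effects case
weight_spec = {'word': 1}                 # t-test of words
weights = [weight_spec.get(col, 0) for col in columns]
```

Here `weights` has one entry per design-matrix column, so its length depends on the number of unique conditions rather than the number of input images.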

@adelavega (Collaborator, Author)

Okay cleaned that up a bit.

@effigies (Collaborator)

Fixed the docs and packaging tests on master. I'll look into the F-tests locally now.

names = []
for m, eff, var in zip(stat_metadata, input_effects, input_variances):
for m, eff, var in zip(stat_metadata, input_effects):
@effigies (Collaborator)
Suggested change
for m, eff, var in zip(stat_metadata, input_effects):
for m, eff in zip(stat_metadata, input_effects):

maps = model.compute_contrast(
con_val=weights,
contrast_type=contrast_type,
second_level_stat_type=contrast_type,
output_type='all')

@adelavega (Collaborator, Author)

Oh, I guess the args differ from the standalone function.

I wonder how you pass a 2D contrast matrix then? Anyway, I'm out the rest of the week, so I'll look at it Monday.
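Whatever the method ends up calling its arguments, the 2D contrast itself is just a stack of t-contrast rows over the design-matrix columns, so it can be built ahead of time (a sketch independent of the nistats API; the variable names are illustrative):

```python
import numpy as np

# Each row is a t-contrast over the design-matrix columns
t_rows = [[1, 0],   # 'word'
          [0, 1]]   # 'pseudoword'
f_contrast = np.vstack(t_rows)   # shape (2, 2): omnibus over both conditions
```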

@effigies (Collaborator) commented Nov 1, 2019

Fixed some issues, updated the CI to use the example model from the current branch, so now we can inspect the outputs in the artifacts.

Example: https://1571-108019675-gh.circle-artifacts.com/0/tmp/ds003/derivatives/fitlins/reports/model-ds003Model001.html

@effigies (Collaborator) commented Nov 1, 2019

Rebased, so you should rebase/reset before continuing.

@effigies effigies merged commit 54ea905 into poldracklab:master Nov 2, 2019
@effigies (Collaborator) commented Nov 2, 2019

Nvm. This is a useful improvement already, and we can do more PRs.

Successfully merging this pull request may close these issues.

Add F-test contrasts