
Dataset for nilearn tutorial #32

Closed
htwangtw opened this issue Jul 26, 2022 · 7 comments


@htwangtw

Currently we are converting the nilearn examples directly, using nilearn.datasets.
It would be good to expand this into a full analysis.
For that we will need to find a preprocessed task-fMRI dataset.

@htwangtw
Author

htwangtw commented Jul 27, 2022

Let's use ds000001 @yibeichan
It's only 16 subjects if we want to run the full process, and we can try running a smaller subset (e.g. 5 subjects) first.

@yibeichan
Collaborator

Sounds good! I can experiment with 5 subjects first.

@yibeichan
Collaborator

Hello @htwangtw @effigies, this dataset has 3 runs per subject. Since our goal is a two-level GLM, we have two choices:

  1. Concatenate the three runs into one, run the first level on the concatenated run, then do the second level.
  2. Do the first level on each run, average the copes/varcopes across the three runs, then do the second level.

I prefer the first solution, because the second one sounds like a three-level GLM.
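For context on option 2, the "average copes/varcopes across runs" step is usually a fixed-effects, inverse-variance-weighted combination rather than a plain mean. Here is a toy numpy sketch of that arithmetic (the helper name is hypothetical, and this operates on plain arrays, not images; for real contrast maps nilearn ships a `compute_fixed_effects` helper in `nilearn.glm.contrasts`):

```python
import numpy as np

def fixed_effects_combine(copes, varcopes):
    # Inverse-variance (precision) weighting: runs with smaller
    # varcopes contribute more to the combined estimate.
    copes = np.asarray(copes, dtype=float)
    varcopes = np.asarray(varcopes, dtype=float)
    weights = 1.0 / varcopes
    combined_cope = (weights * copes).sum(axis=0) / weights.sum(axis=0)
    combined_var = 1.0 / weights.sum(axis=0)
    return combined_cope, combined_var

# Toy data: 3 runs x 4 voxels. With equal per-run variances the
# weighted combination reduces to a plain average across runs.
copes = [[1.0, 2.0, 0.5, 1.5],
         [1.2, 1.8, 0.4, 1.6],
         [0.8, 2.2, 0.6, 1.4]]
varcopes = [[0.1] * 4] * 3

cope, var = fixed_effects_combine(copes, varcopes)
# cope is the per-voxel mean across runs; var is 0.1 / 3
```

When the runs have unequal noise levels, the weighting downweights the noisier runs, which is why this differs from naive averaging.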

@htwangtw
Author

2 is the correct way, and it is how the data was analysed in the original paper! I don't think it's a bad thing to make it three-level. I would suggest having a look at the original paper.

@yibeichan
Collaborator

Yes, the original paper used FSL, so they did a three-level GLM.
I understand that 2 is correct. But why isn't 1?
The GLM averages across trials. So as long as we concatenate events.tsv, preproc_bold.nii.gz, and confounds.tsv in the same way, shouldn't a GLM on the concatenated run give the same result as running the GLM on each run and averaging?
Or is it because the junction between runs will cause some differences (e.g., in terms of the HRF)?
Sorry, I have only done a GLM 2-3 times; I am probably wrong.

@htwangtw
Author

It's okay to concatenate runs, but you will have to make sure the signals are normalised, so that they are compared against the same baseline. The safest way is to do each run separately and then combine the statistical maps across runs.
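The baseline issue can be shown with a toy regression, assuming nothing about the real dataset: two "runs" share the same task effect, but their baselines differ, and a single-intercept fit on the naive concatenation is badly biased while per-run fits recover the effect exactly.

```python
import numpy as np

n = 100
beta_true = 2.0

# Same task regressor in both runs, but its mean differs by run
x1 = np.sin(np.linspace(0, 6 * np.pi, n)) + 0.5   # run 1
x2 = np.sin(np.linspace(0, 6 * np.pi, n)) - 0.5   # run 2

# Different scanner baselines per run (noise-free for clarity)
y1 = 100.0 + beta_true * x1
y2 = 300.0 + beta_true * x2

def ols_beta(x, y):
    # Ordinary least squares with one regressor plus an intercept
    X = np.column_stack([x, np.ones_like(x)])
    return np.linalg.lstsq(X, y, rcond=None)[0][0]

# Naive concatenation with a single intercept: the run-to-run
# baseline difference leaks into the task estimate.
beta_concat = ols_beta(np.concatenate([x1, x2]),
                       np.concatenate([y1, y2]))

# Per-run fit, then average: recovers beta_true
beta_avg = (ols_beta(x1, y1) + ols_beta(x2, y2)) / 2
```

Here `beta_avg` equals 2.0 while `beta_concat` is far from it. In practice the same fix is achieved by normalising/demeaning each run, or by adding per-run intercepts to the concatenated design, which is effectively what fitting each run separately does.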

@yibeichan
Collaborator

Ah, I see. I'll go for option 2 then; much safer. Thank you!

@djarecka djarecka closed this as completed Oct 7, 2022