[ENH] integrate load_confounds into first_level_from_bids #4103
Merged
Changes from all 21 commits:
All commits by Remi-Gau:

- fb14470 start integrating loading of confounds
- 84914ea load confounds for glm
- a3ff639 updating black config
- ecce8e2 fix tests
- 659794a match number of time points to that in confounds file
- 815bdc0 Merge remote-tracking branch 'upstream/main' into bids_glm_confounds
- a21a4c4 fix merge conflicts
- eb16889 update changelog
- 56e580d update doc
- 4d1cc24 fix typos
- 23444e5 Apply suggestions from code review
- a79055e rename output var in doc string
- 103e326 Merge remote-tracking branch 'upstream/main' into bids_glm_confounds
- c4089c8 add warning
- 9f77bcc improve warning
- b457a11 rename variables
- c043f78 check names of confounds
- a62c27c fix hasty ccopy paste
- 93ab4f7 Merge branch 'main' into bids_glm_confounds
- a8a1eb1 lint
- d3588e1 Merge remote-tracking branch 'upstream/main' into bids_glm_confounds
Filter by extension
Conversations
Failed to load comments.
Jump to
Jump to file
Failed to load files.
Diff view
Diff view
There are no files selected for viewing
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
This file contains bidirectional Unicode text that may be interpreted or compiled differently than what appears below. To review, open the file in an editor that reveals hidden Unicode characters.
Learn more about bidirectional Unicode characters
Original file line number | Diff line number | Diff line change |
---|---|---|
New file (`@@ -0,0 +1,39 @@`):

```python
from nilearn.interfaces.bids._utils import _bids_entities


def create_bids_filename(fields, entities_to_include=None):
    """Create BIDS filename from dictionary of entity-label pairs.

    Parameters
    ----------
    fields : :obj:`dict` of :obj:`str`
        Dictionary of entity-label pairs, for example:

            {
                "suffix": "T1w",
                "extension": "nii.gz",
                "entities": {"acq": "ap",
                             "desc": "preproc"}
            }.

    Returns
    -------
    BIDS filename : :obj:`str`

    """
    if entities_to_include is None:
        entities_to_include = _bids_entities()["raw"]

    filename = ""

    for key in entities_to_include:
        if key in fields["entities"]:
            value = fields["entities"][key]
            if value not in (None, ""):
                filename += f"{key}-{value}_"
    if "suffix" in fields:
        filename += f"{fields['suffix']}"
    if "extension" in fields:
        filename += f".{fields['extension']}"

    return filename
```
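As a sanity check of the helper above, here is a self-contained sketch of the same logic; the function body is copied inline and `entities_to_include` is passed explicitly, so the private `_bids_entities` helper is not needed (the entity order used here is an assumption for illustration):

```python
def create_bids_filename(fields, entities_to_include=None):
    """Build a BIDS-style filename from entity-label pairs."""
    filename = ""
    for key in entities_to_include or []:
        value = fields["entities"].get(key)
        if value not in (None, ""):
            filename += f"{key}-{value}_"
    if "suffix" in fields:
        filename += f"{fields['suffix']}"
    if "extension" in fields:
        filename += f".{fields['extension']}"
    return filename


fields = {
    "suffix": "bold",
    "extension": "nii.gz",
    "entities": {"sub": "01", "ses": "01", "task": "main", "run": "01"},
}
# Entity order comes from the list we pass in, mirroring what
# _bids_entities()["raw"] would provide in nilearn itself.
name = create_bids_filename(fields, ["sub", "ses", "task", "run"])
print(name)  # sub-01_ses-01_task-main_run-01_bold.nii.gz
```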
```diff
@@ -15,7 +15,9 @@
 from nilearn import datasets, image, maskers, masking
 from nilearn._utils import as_ndarray, logger
+from nilearn._utils.bids import create_bids_filename
 from nilearn.interfaces.bids._utils import _bids_entities, _check_bids_label
+from nilearn.interfaces.fmriprep.tests._testing import get_legal_confound


 def generate_mni_space_img(n_scans=1, res=30, random_state=0, mask_dilation=2):
```
```diff
@@ -736,9 +738,15 @@ def _basic_confounds(length, random_state=0):
     """
     rand_gen = np.random.default_rng(random_state)
-    columns = ['csf', 'white_matter', 'global_signal',
-               'rot_x', 'rot_y', 'rot_z',
-               'trans_x', 'trans_y', 'trans_z']
+    columns = ['csf',
+               'white_matter',
+               'global_signal',
+               'rot_x',
+               'rot_y',
+               'rot_z',
+               'trans_x',
+               'trans_y',
+               'trans_z']
     data = rand_gen.random((length, len(columns)))
     confounds = pd.DataFrame(data, columns=columns)
     return confounds
```
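For context, the `_basic_confounds` helper being reformatted above amounts to the following self-contained sketch (numpy/pandas only; the column names are taken from the diff, the rest is a close paraphrase rather than the exact nilearn source):

```python
import numpy as np
import pandas as pd


def basic_confounds(length, random_state=0):
    """Random confounds table with fmriprep-style column names."""
    rand_gen = np.random.default_rng(random_state)
    columns = ['csf', 'white_matter', 'global_signal',
               'rot_x', 'rot_y', 'rot_z',
               'trans_x', 'trans_y', 'trans_z']
    data = rand_gen.random((length, len(columns)))
    return pd.DataFrame(data, columns=columns)


confounds = basic_confounds(length=30)
print(confounds.shape)  # (30, 9)
```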
```diff
@@ -1200,46 +1208,6 @@ def _listify(n):
     return [""] if n <= 0 else [f"{label:02}" for label in range(1, n + 1)]


-def _create_bids_filename(
-    fields, entities_to_include=None
-):
-    """Create BIDS filename from dictionary of entity-label pairs.
-
-    Parameters
-    ----------
-    fields : :obj:`dict` of :obj:`str`
-        Dictionary of entity-label pairs, for example:
-
-            {
-                "suffix": "T1w",
-                "extension": "nii.gz",
-                "entities": {"acq": "ap",
-                             "desc": "preproc"}
-            }.
-
-    Returns
-    -------
-    BIDS filename : :obj:`str`
-
-    """
-    if entities_to_include is None:
-        entities_to_include = _bids_entities()["raw"]
-
-    filename = ""
-
-    for key in entities_to_include:
-        if key in fields["entities"]:
-            value = fields["entities"][key]
-            if value not in (None, ""):
-                filename += f"{key}-{value}_"
-    if "suffix" in fields:
-        filename += f"{fields['suffix']}"
-    if "extension" in fields:
-        filename += f".{fields['extension']}"
-
-    return filename
-
-
 def _init_fields(subject,
                  session,
                  task,
```
```diff
@@ -1267,7 +1235,7 @@ def _init_fields(subject,

     See Also
     --------
-    _create_bids_filename
+    create_bids_filename

     """
     fields = {
```
```diff
@@ -1304,7 +1272,7 @@ def _write_bids_raw_anat(subses_dir, subject, session) -> None:
         "extension": "nii.gz",
         "entities": {"sub": subject, "ses": session},
     }
-    (anat_path / _create_bids_filename(fields)).write_text("")
+    (anat_path / create_bids_filename(fields)).write_text("")


 def _write_bids_raw_func(
```
```diff
@@ -1331,8 +1299,9 @@ def _write_bids_raw_func(
         Random number generator.

     """
-    n_time_points = 100
-    bold_path = func_path / _create_bids_filename(fields)
+    n_time_points = 30
+    bold_path = func_path / create_bids_filename(fields)

     write_fake_bold_img(
         bold_path,
         [n_voxels, n_voxels, n_voxels, n_time_points],
```
```diff
@@ -1341,12 +1310,12 @@ def _write_bids_raw_func(

     repetition_time = 1.5
     fields["extension"] = "json"
-    param_path = func_path / _create_bids_filename(fields)
+    param_path = func_path / create_bids_filename(fields)
     param_path.write_text(json.dumps({"RepetitionTime": repetition_time}))

     fields["suffix"] = "events"
     fields["extension"] = "tsv"
-    events_path = func_path / _create_bids_filename(fields)
+    events_path = func_path / create_bids_filename(fields)
     basic_paradigm().to_csv(events_path, sep="\t", index=None)
```
```diff
@@ -1387,17 +1356,20 @@ def _write_bids_derivative_func(
         or "desc-confounds_regressors".

     """
-    n_time_points = 100
+    n_time_points = 30

     if confounds_tag is not None:
         fields["suffix"] = confounds_tag
         fields["extension"] = "tsv"
-        confounds_path = func_path / _create_bids_filename(
+        confounds_path = func_path / create_bids_filename(
             fields=fields, entities_to_include=_bids_entities()["raw"]
         )
-        _basic_confounds(length=n_time_points, random_state=rand_gen).to_csv(
-            confounds_path, sep="\t", index=None
+        confounds, metadata = get_legal_confound()
+        confounds.to_csv(
+            confounds_path, sep="\t", index=None, encoding="utf-8"
         )
+        with open(confounds_path.with_suffix(".json"), "w") as f:
+            json.dump(metadata, f)

     fields["suffix"] = "bold"
     fields["extension"] = "nii.gz"
```

Review comment on lines +1371 to +1372: minor change: using legal_confounds allows adding metadata files for the confounds in the fake bids derivatives. Some tests had to be changed to account for this.
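In isolation, the TSV-plus-JSON-sidecar pattern introduced in the hunk above looks like the sketch below; the path, column names, and metadata values are made up for illustration, and the DataFrame/dict pair stands in for what `get_legal_confound()` returns:

```python
import json
from pathlib import Path

import pandas as pd

func_path = Path("fake_bids") / "sub-01" / "func"
func_path.mkdir(parents=True, exist_ok=True)

# Stand-ins for get_legal_confound(): a confounds table plus a metadata
# dict describing its columns (hypothetical values).
confounds = pd.DataFrame({"csf": [0.1, 0.2], "trans_x": [0.0, 0.3]})
metadata = {"trans_x": {"Units": "mm"}}

confounds_path = func_path / "sub-01_task-main_desc-confounds_timeseries.tsv"
confounds.to_csv(confounds_path, sep="\t", index=None, encoding="utf-8")

# The sidecar shares the stem of the TSV, with a .json extension.
with open(confounds_path.with_suffix(".json"), "w") as f:
    json.dump(metadata, f)
```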
```diff
@@ -1418,7 +1390,7 @@ def _write_bids_derivative_func(
         fields["entities"]["space"] = space
         fields["entities"]["desc"] = desc

-        bold_path = func_path / _create_bids_filename(
+        bold_path = func_path / create_bids_filename(
             fields=fields, entities_to_include=entities_to_include
         )
         write_fake_bold_img(bold_path, shape=shape, random_state=rand_gen)
```
```diff
@@ -1428,7 +1400,7 @@ def _write_bids_derivative_func(
         fields["entities"].pop("desc")
         for hemi in ["L", "R"]:
             fields["entities"]["hemi"] = hemi
-            gifti_path = func_path / _create_bids_filename(
+            gifti_path = func_path / create_bids_filename(
                 fields=fields,
                 entities_to_include=entities_to_include
             )
```
One issue with this approach is that the "legal_confounds" have a set number of time points, so when creating a fake bids dataset we end up with images whose number of time points does not match the number of time points in the confounds.
This does not affect any tests AFAICT, but it may lead to confusing errors when testing down the line.
You mean, because of scrubbing? Sorry if I'm missing something obvious.
nope
let me try to rephrase
the way we generate "fake confounds" for the fake fmriprep datasets used in testing would only create 6 confounds for the realignment parameters, filled with random data, for a specified number of time points
to allow testing load_confounds we need more realistic confounds, with more columns whose names match what's in an actual fmriprep dataset
to do this I reuse the strategy used to test the load_confounds functions: take an actual confound file from an fmriprep dataset and copy its content every time it is needed in the fake fmriprep dataset
but this "template" confound file has only a limited number of time points
so we end up with fake fmriprep datasets that have nifti images with 100 volumes but confounds with only 30 time points
possible solutions:
hope this is clearer
for now I will go for the easy solution, but we may have to implement the harder option in the future if we want to test more "exotic" stuff
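A rough sketch of the "harder" option (stretching the fixed-length template confounds to whatever length the image has); the tiling strategy and the `match_length` name are assumptions for illustration, not what this PR implements:

```python
import numpy as np
import pandas as pd


def match_length(template, n_time_points):
    """Tile (or truncate) a template confounds table to n_time_points rows."""
    reps = -(-n_time_points // len(template))  # ceiling division
    tiled = pd.concat([template] * reps, ignore_index=True)
    return tiled.iloc[:n_time_points].reset_index(drop=True)


# Template with 30 rows, like the "legal" confound file discussed above.
template = pd.DataFrame({"trans_x": np.random.default_rng(0).random(30)})
print(len(match_length(template, 100)))  # 100
print(len(match_length(template, 10)))   # 10
```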
Let's go for the easy one. The number of volumes should be a parameter of the data simulation function anyhow?
for now it is not: it is hard-coded. I would keep it that way until we need more flexibility during testing.
but I will change the place where it is hard-coded so it is easier to adapt in the future, and will also add a comment to explain why this value was chosen.