
Integration of load_confounds to nilearn #2777

Closed
htwangtw opened this issue May 1, 2021 · 21 comments · Fixed by #3016

@htwangtw
Member

htwangtw commented May 1, 2021

The load_confounds team would like to see load_confounds added to nilearn! (Link to the repository.)
load_confounds is a small tool for loading fMRIprep confound time series based on methods benchmarked in Ciric et al., 2017. The output can be passed directly to NiftiMasker and related functions.
It's a bit small to be a stand-alone project, and the integration would help more users!

Benefits to the change

load_confounds would be a great addition to nilearn for the following reasons:

  1. Bridging fMRIprep outputs and nilearn analysis functions:
    We want to make confound selection more streamlined and reduce the errors that can be introduced at this stage. I am fairly certain many people load the full fMRIprep confound output file as their confound regressors. load_confounds would complement the existing functions that handle confound variables. There is already precedent in the new GLM module's fMRIprep/BIDS-specific support; load_confounds can smooth the process further.

  2. Almost complete project with good code quality:
    The integration will not be a lot of work. load_confounds is close to a finished product with good code quality. One more strategy remains to be added on the load_confounds side. Some changes to NiftiMasker and the GLM first-level analysis would make the integration smoother and allow volume censoring to be performed correctly.

    • NiftiMasker: move sample_mask to the transform method rather than the initialisation stage. This will allow scrubbing outliers per image, along with non-steady-state volumes.
    • GLM: add sample_mask to the first-level GLM for removing non-steady-state volumes and scrubbing motion outliers.
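The sample_mask behaviour described above can be sketched in plain NumPy (hypothetical data; in load_confounds, sample_mask holds the indices of the volumes to keep):

```python
import numpy as np

# Hypothetical run: 10 volumes with 3 signals each.
timeseries = np.arange(30).reshape(10, 3)

# sample_mask as load_confounds produces it: indices of volumes to KEEP.
# Here volumes 0-1 are non-steady-state and volume 5 is a motion outlier.
sample_mask = np.array([2, 3, 4, 6, 7, 8, 9])

# Applying the mask at transform time censors the flagged volumes.
censored = timeseries[sample_mask, :]
print(censored.shape)  # (7, 3)
```

Passing the mask at transform time (rather than at initialisation) means the same fitted masker can censor different volumes for different runs.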

Pseudocode for the new behavior, if applicable

This is what I expect to see when load_confounds is merged into nilearn, using NiftiMasker as an example:

```python
from nilearn.load_confounds import Params36
from nilearn.input_data import NiftiMasker

# load_confounds auto-detects the companion .tsv file
# (which needs to be in the same directory)
file = "path/to/file/sub-01_ses-001_bold.nii.gz"
confounds, sample_mask = Params36().load(file)

# Use the confounds and sample_mask in a nilearn masker.
# sample_mask is defined at the initialisation stage in the current
# implementation; we would move it to the transform methods.
masker = NiftiMasker(smoothing_fwhm=5, standardize=True)
img = masker.fit_transform(file, confounds=confounds, sample_mask=sample_mask)
```
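For context on the "auto-detects the companion .tsv" comment above, the lookup could work roughly like this. find_confounds_file is a hypothetical helper (not the actual load_confounds implementation), assuming fMRIprep's convention of placing the confounds file next to the image:

```python
from pathlib import Path

def find_confounds_file(img_path, suffix="_desc-confounds_timeseries.tsv"):
    """Hypothetical sketch: derive the fMRIprep confounds .tsv path from a
    BOLD image path by swapping the image suffix for the confounds suffix.
    (fMRIprep LTS names it _desc-confounds_timeseries.tsv; older releases
    used _desc-confounds_regressors.tsv.)"""
    img = Path(img_path)
    name = img.name
    for ext in ("_bold.nii.gz", "_bold.nii"):
        if name.endswith(ext):
            name = name[: -len(ext)]
            break
    return img.parent / (name + suffix)

print(find_confounds_file("path/to/file/sub-01_ses-001_bold.nii.gz"))
```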

It would be great to know what the nilearn contributors think! Any feedback would be extremely helpful!
cc'd @pbellec

@htwangtw htwangtw added the Enhancement for feature requests label May 1, 2021
@bthirion
Member

bthirion commented May 2, 2021

Thx for the suggestion!
I like your suggestion: facilitating the fMRIPREP-to-Nilearn transition is certainly a great addition.

Note, however, that changes breaking the API will have to wait for two release cycles! Also, we may have to adapt the terminology slightly. Are you taking part in the coding sprint this week?
Best,

@htwangtw
Member Author

htwangtw commented May 2, 2021

Yes I have signed up to the sprint! Happy to discuss more.
(Also I have not yet got details about the discord invite. I hope I didn't accidentally miss the email.)

@emdupre
Member

emdupre commented May 3, 2021

(Also I have not yet got details about the discord invite. I hope I didn't accidentally miss the email.)

You should have gotten an email this evening; let us know if not. Details about accessing the Discord server are also on the event website!

@pbellec
Contributor

pbellec commented May 5, 2021

Quick note: if this integration moves forward, it will solve several long-standing issues.

@htwangtw
Member Author

htwangtw commented May 5, 2021

Thanks @pbellec for linking the existing issues - happy to discuss what we should prioritise to move this project forward.

@emdupre
Member

emdupre commented May 5, 2021

There will be a group discussion on this tomorrow at 3pm UTC [find your local time here] as part of the Nilearn-Nibabel Dev Days! For more information and to join, see #2739.

@htwangtw
Member Author

htwangtw commented May 6, 2021

Summary of discussions from Nilearn-Nibabel Dev Days 2021

Thanks everyone for the extremely helpful discussion today!

To start the integration, we will resolve some standing issues in the existing nilearn code base.

Integrating load_confounds will:

Things for future discussion:

  • More intuitive names for the preset load_confounds strategy
  • Default strategy?

Action plan

  • Resolve existing bugs in nilearn
  • Update test data with fMRIprep LTS outputs
  • Finish the remaining strategy in load_confounds
  • Integration of load_confounds

Please feel free to comment and add anything that I missed in the summary 🙌

@bthirion
Member

bthirion commented May 6, 2021

Thx! The summary and the discussion have been great.
Just one point: I'm worried about adding another dataset for nilearn doc generation, as this is a well-known cause of failure. If we add one, should we remove another?
Or is it possible to get the fMRIprep-generated confounds file for one of the existing datasets (e.g. developmental fMRI)?

@htwangtw
Member Author

htwangtw commented May 7, 2021

That's a really important point - thanks for bringing up the error-prone part!
To finish load_confounds, I need fMRIprep LTS processed data to create new test data. That in itself will not affect anything on the nilearn side.
On the nilearn side, @emdupre suggested in the discussion rerunning the existing developmental fMRI dataset with fMRIprep LTS. This is going to be useful for updating this example to include more confound strategies.

@pbellec
Contributor

pbellec commented May 7, 2021

Thanks a lot for the summary @htwangtw and the useful feedback @bthirion (and others who commented during the sprint). Reflecting on the summary, I think it may be worth:

  • pushing to integrate load_confounds as it is currently implemented, that is, purely through the confounds argument of the maskers' transform methods. This would require selecting a smaller set of strategies (with clear motivation for each), and then refactoring the code & docs for integration into the nilearn codebase.
  • in parallel, addressing the nilearn issues with sample_mask. This means dealing with sample_mask for filtering time series.

When both of these sets of issues are addressed, we could work on an extension of load_confounds that includes strategies using sample_mask (that would be traditional scrubbing as well as suppression of non-steady state volumes).
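The combination described above - traditional scrubbing plus suppression of non-steady-state volumes - reduces to intersecting two index masks. A minimal NumPy sketch with made-up data:

```python
import numpy as np

n_volumes = 10
# Both criteria expressed as indices of volumes to KEEP (hypothetical data):
non_steady_state = np.arange(3, n_volumes)         # drop the first 3 volumes
scrubbing = np.array([0, 1, 2, 3, 4, 6, 7, 8, 9])  # volume 5 exceeds motion threshold

# The combined sample_mask keeps only volumes retained by BOTH criteria.
sample_mask = np.intersect1d(non_steady_state, scrubbing)
print(sample_mask)  # [3 4 6 7 8 9]
```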

@htwangtw
Member Author

I really like this idea - the current version of load_confounds is already working well. I can start by implementing strategies that don't require sample_mask and look into the sample_mask code in nilearn!

@NicolasGensollen
Member

Linking this issue on load_confounds.

Now that #2821 is merged, I think we can move to the next step. It would certainly be easier to break the integration into several small PRs rather than doing a very large one. (pinging @htwangtw since I think you are currently working on this).

@pbellec I agree that we need to acknowledge all contributors who worked on load_confounds (having multi-authored PRs is definitely one way to do it).
I am not sure keeping the history of load_confounds would be desirable or even possible since I don't think we will be integrating it as a new module in nilearn. @bthirion @jeromedockes WDYT?

@htwangtw
Member Author

Thanks for the heads up!

I am wondering if it's possible to do the same thing as in #2299 / #2298 for load_confounds? cc @pbellec

To break down the integration, we can separate the base class Confounds from the strategies built on top of it. It should be quite simple to merge the code related to Confounds first and leave the strategies for later.

The key remaining changes are related to Confounds, such as sample_mask. Most of the API changes on the nilearn side would be tied to this part.

The strategies and relevant docs should be revised based on the discussion we had with the community during the dev days. This is more or less a stand-alone task once the base Confounds class is finalised.
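To make the proposed split concrete, here is a purely illustrative sketch (all names and column prefixes are assumptions, not the actual load_confounds API): a base Confounds class that reads the fMRIprep .tsv and selects columns, with each preset strategy as a thin subclass.

```python
import pandas as pd

class Confounds:
    """Illustrative base loader: reads an fMRIprep confounds .tsv and keeps
    the columns matching the requested strategy components."""

    def __init__(self, strategy=("motion", "high_pass")):
        self.strategy = strategy

    def load(self, confounds_file):
        confounds = pd.read_csv(confounds_file, sep="\t")
        keep = [col for col in confounds.columns
                if any(col.startswith(p) for p in self._prefixes())]
        return confounds[keep]

    def _prefixes(self):
        # Column-name prefixes per component (simplified assumption).
        mapping = {"motion": ("trans_", "rot_"), "high_pass": ("cosine",)}
        return [p for name in self.strategy for p in mapping.get(name, ())]

class Params6(Confounds):
    """A preset strategy then becomes a thin subclass of the base."""
    def __init__(self):
        super().__init__(strategy=("motion",))
```

With a layout like this, API changes land in the base-class PR first, and the strategies PR only touches the subclasses.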

@NicolasGensollen
Member

Hi @htwangtw

I was wondering if you were still working on this or if there was something blocking at the moment?
Not pushing if you don't have the bandwidth right now, but if there is something I can help you with, just let me know! 😉

@htwangtw
Member Author

htwangtw commented Sep 2, 2021

Thanks for checking in! On the load_confounds side we are finalising the sample_mask API (SIMEXP/load_confounds#142). Once that's complete, it would be a good time to start the migration. If you think there's something I can do on the nilearn side, happy to discuss!

@NicolasGensollen
Member

@htwangtw Cool to hear that things are moving forward on the load_confounds side! 👍

If you think there's something I can do on the nilearn side, happy to discuss!

I don't have anything particular in mind, I was just checking in. Let's wait for your work on load_confounds to be done first then!

@htwangtw
Member Author

htwangtw commented Sep 3, 2021

@NicolasGensollen Things move fast and I think the code base is ready for the migration!
I am going to submit a PR that puts the full code base at the root level of nilearn and starts the migration. Does this sound good?

@NicolasGensollen
Member

@htwangtw Awesome! 💯
Yes, working on a PR sounds good. Do you have a rough plan on how to break the migration in several PRs?

@htwangtw
Member Author

htwangtw commented Sep 6, 2021

@htwangtw Awesome! 💯
Yes, working on a PR sounds good. Do you have a rough plan on how to break the migration in several PRs?

load_confounds is already pretty compact and I have kept it up to date with the recent nilearn changes. If we are to break it down, I am thinking of starting with the basic Confounds class (and associated tests), so we can address any potential API changes. This involves parser.py, confounds.py, and compcor.py.

Then come the predefined strategies (strategies.py), adapted to any changes made in the previous PR.

@htwangtw
Member Author

htwangtw commented Oct 6, 2021

Just a heads up - I am using the functions defined in PR #2946 to draft how the strategy can be implemented. I came up with two possible ways and would be happy to discuss them at the next office hour.

@bthirion
Member

bthirion commented Oct 6, 2021

Looking fw to it, thx.
