
Can you provide functions that reconstruct images from values within parcels? #7

chrisgorgo opened this issue Apr 21, 2018 · 3 comments


chrisgorgo commented Apr 21, 2018

Typical neuroimaging pipelines work from images. Hence it would be interesting to turn the extracted signals into images.


GaelVaroquaux commented Apr 28, 2018

OK, I wrote a script to do that. The basic idea is simple: it's just a matter of inverting the maskers used in the signal-extraction code (https://github.com/ramp-kits/autism/blob/master/preprocessing/extract_time_series.py). But I went further and wrote code to blend multiple atlases. The results are cool.

The following assumes that you have run the download script for basc197 and craddock:

$ python download_data.py craddock_scorr_mean
$ python download_data.py basc197

Here is the code:

from problem import get_train_data
data, labels = get_train_data()

from nilearn import input_data, datasets
import pandas

#######################################################################
# Reconstruct an image for BASC 197

# Grab the data for the first subject:
basc_series = pandas.read_csv(data['fmri_basc197'].iloc[0])

basc_atlas = datasets.fetch_atlas_basc_multiscale_2015()

basc_masker = input_data.NiftiLabelsMasker(labels_img=basc_atlas.scale197)
# We need to fit the masker to set its voxel size and mask
basc_masker.fit(basc_atlas.scale197)

imgs_basc = basc_masker.inverse_transform(basc_series)

# Let's inspect the first image
from nilearn import plotting, image
plotting.plot_epi(image.index_img(imgs_basc, 0),
                  title='From BASC197 atlas',
                  cut_coords=(1, -71, 31))
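
As a quick sanity check (a sketch, not in the original snippet): transforming the reconstructed image back through the masker should recover the parcel values, since inverse_transform paints each parcel with a constant value and transform averages over each parcel.

import numpy as np
roundtrip = basc_masker.transform(imgs_basc)
# Each parcel is constant, so the parcel average equals the original
# value (up to floating-point error); this should print True
print(np.allclose(roundtrip, basc_series.values))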



#######################################################################
# Reconstruct an image for Craddock

# Grab the data for the first subject:
craddock_series = pandas.read_csv(data['fmri_craddock_scorr_mean'].iloc[0])

craddock_atlas = datasets.fetch_atlas_craddock_2012()

craddock_masker = input_data.NiftiLabelsMasker(
        labels_img=image.index_img(craddock_atlas.scorr_mean, 25))
# We need to fit the masker to set its voxel size and mask
craddock_masker.fit(craddock_atlas.scorr_mean)

imgs_craddock = craddock_masker.inverse_transform(craddock_series)

# Let's inspect the first image
plotting.plot_epi(image.index_img(imgs_craddock, 0),
                  title='From Craddock atlas',
                  cut_coords=(1, -71, 31))

#######################################################################
# Blend the two

# The challenge here is that the coverage of these two atlases is
# different. We will rely on numpy's nanmean to handle that

# You may ponder the expression below a bit... The idea: voxels outside
# each atlas (label 0) are set to NaN, so nanmean averages the two
# reconstructions where the atlases overlap and falls back to whichever
# atlas covers a voxel elsewhere
imgs_blended = image.math_img(
    """np.nanmean([
        np.where(basc_labels[..., np.newaxis] == 0,
                 np.nan, imgs_basc),
        np.where(craddock_labels[..., np.newaxis] == 0,
                 np.nan, imgs_craddock),
    ], axis=0)""",
    basc_labels=basc_masker.labels_img_,
    imgs_basc=imgs_basc,
    craddock_labels=image.resample_to_img(craddock_masker.labels_img_,
                        imgs_basc, interpolation='nearest'),
    imgs_craddock=image.resample_to_img(imgs_craddock, imgs_basc),
    )

plotting.plot_epi(image.index_img(imgs_blended, 0),
                  title='Blended',
                  cut_coords=(1, -71, 31))

[Figures: reconstruction from the BASC197 atlas, reconstruction from the Craddock atlas, and the blended reconstruction]

Keep in mind that the servers used to run the submissions have limited memory, and that applying this to 2000 subjects will create a lot of data. The danger is that you bring the servers down, in which case your submission won't run. Also, don't use this to do crazy things, or you'll blow through your computing quota.
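
A memory-friendlier pattern, sketched under the assumption that you only need a per-subject summary (the mean volume here is just an example): invert one subject at a time and reduce the result immediately, so at most one reconstructed 4D image is held in memory.

summaries = []
for csv_path in data['fmri_basc197']:
    series = pandas.read_csv(csv_path)
    imgs = basc_masker.inverse_transform(series)
    # Reduce right away instead of accumulating full 4D images
    summaries.append(image.mean_img(imgs))
    del imgs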


GreatNewHope commented Jun 1, 2018

I am currently trying to do the same, but with the MSDL atlas.
This is what I am doing (just a copy-paste of your code):

# Reconstruct an image for MSDL

# Grab the data for the first subject:
msdl_series = pandas.read_csv(data['fmri_msdl'].iloc[0])

msdl_atlas = datasets.fetch_atlas_msdl()

msdl_masker = input_data.NiftiLabelsMasker(labels_img=msdl_atlas.maps)
# We need to fit the masker to set its voxel size and mask
msdl_masker.fit(msdl_atlas.maps)

imgs_msdl = msdl_masker.inverse_transform(msdl_series)

But it results in an error:

*** nilearn._utils.exceptions.DimensionError: Input data has incompatible dimensionality:
Expected dimension is 3D and you provided a 4D image. See http://nilearn.github.io/manipulating_images/input_output.html.

And that is weird, because I thought we were always dealing with 4D images regardless of the atlas. How do we select the first 3D image of the MSDL atlas? And which time point do the images we see for Craddock and BASC correspond to?

Thank you!
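
One likely fix, sketched under the assumption that the columns of the extracted MSDL series line up with the atlas maps: MSDL is a probabilistic atlas, so msdl_atlas.maps is a 4D image of overlapping maps rather than a 3D label image, which is why NiftiLabelsMasker raises the dimensionality error. nilearn provides NiftiMapsMasker for 4D map atlases:

msdl_series = pandas.read_csv(data['fmri_msdl'].iloc[0])

msdl_atlas = datasets.fetch_atlas_msdl()

# NiftiMapsMasker expects a 4D image of (possibly overlapping) maps,
# which is what the probabilistic MSDL atlas provides
msdl_masker = input_data.NiftiMapsMasker(maps_img=msdl_atlas.maps)
msdl_masker.fit()

imgs_msdl = msdl_masker.inverse_transform(msdl_series)

As for the time point: the Craddock and BASC snippets above plot image.index_img(imgs, 0), i.e. the first volume of the reconstructed series.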


GaelVaroquaux commented Jun 1, 2018 via email
