
Read haemoglobin data from SNIRF #9929

Merged: 11 commits merged into mne-tools:main on Nov 22, 2021

Conversation

@rob-luke (Member) commented Oct 30, 2021

What does this implement/fix?

The SNIRF file format (https://github.com/fNIRS/snirf) supports a great variety of fNIRS data types. Currently MNE only supports reading continuous wave amplitude data from SNIRF files (referred to in SNIRF as type 1).

MNE-Python currently supports reading/processing continuous wave and frequency domain fNIRS signals (time-domain support coming in the next few weeks, hopefully!). The raw signals are then converted to haemoglobin concentration for analysis. This PR adds support for reading the processed oxy- and deoxyhaemoglobin signals (HbO & HbR) from SNIRF files (referred to in SNIRF as processed data, type 99999, with dataTypeLabel HbO or HbR).

Some devices export directly in HbO/HbR, so this enables support for those devices. It also enables sharing of data that has been preprocessed in other software.
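
For context, SNIRF files are HDF5 containers, so the data-type information this PR relies on can be inspected directly. A minimal sketch, assuming h5py and a hypothetical local file 'example.snirf' (field names follow the SNIRF specification):

import h5py

# Check whether the first channel of a SNIRF file holds processed haemoglobin data
with h5py.File('example.snirf', 'r') as f:
    ml = f['nirs/data1/measurementList1']  # metadata for the first channel
    data_type = int(ml['dataType'][()])    # 1 = CW amplitude, 99999 = processed
    if data_type == 99999:
        label = ml['dataTypeLabel'][()]    # e.g. b'HbO' or b'HbR'
        print(label.decode() if isinstance(label, bytes) else label)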

Additional information

This work began in PR #9661. However, that PR was much larger and also aimed to add support for a completely new type of fNIRS (time-domain fNIRS). This breaks that work out into a more manageably sized PR. Then, once we have a more complete understanding, we can tackle adding the TD-fNIRS types etc.

There is currently no changelog entry, as this won't make it into 0.24. Once 0.24 is released, I will rebase, add an entry to the new list, and add @Zahra-M-Aghajan as co-author, as much of the actual code is from her original PR.

FYI: @Zahra-M-Aghajan @julien-dubois-k

@rob-luke (Member, Author) commented Oct 30, 2021

@sstucker @dboas does Homer export SNIRF files in the type 99999 HbO/HbR format? If so, is there a publicly available test file I can test our implementation against to ensure compatibility with Homer? If Homer does export SNIRF in HbO/HbR but no file is available, I can create one with Homer myself; but has the SNIRF export code stabilised? Or should I wait until the validator etc. is completed?

@rob-luke (Member, Author) commented Oct 31, 2021

Ok, the coordinate frame for Kernel data is 'mri'. Is this correct, @Zahra-M-Aghajan @julien-dubois-k? (I found the answer on the Kernel community site.)

I cannot determine the coordinate frame correctly from the SNIRF file alone, so the user needs to load it with the optode_frame='mri' argument. This isn't a problem, but it would be nice for this to be detected automatically. @Zahra-M-Aghajan, is there any way we could get this information included within the SNIRF file in the future? I have opened an issue upstream in the SNIRF standard (fNIRS/snirf#89) for a new optional field to specify this. Currently MNE uses its own custom field (which is explicitly allowed within SNIRF) to store the coordinate frame, but I would prefer that this be included in the actual standard...

elif 'MNE_coordFrame' in dat.get('/nirs/metaDataTags/'):
    coord_frame = int(dat.get('/nirs/metaDataTags/MNE_coordFrame'))

Using the measurement file provided by @JohnGriffiths everything seems to be looking good so far...

kernel_hb = '/Volumes/MassStorage2TB/rluke/Repositories/' \
            'kernel-flow-tools/pitch_sub010_ft_ses01_1017-1706_kp-snf-hbm.snirf'

from mne.io import read_raw_snirf
import os.path as op
import mne

raw = read_raw_snirf(kernel_hb, optode_frame='mri', preload=True)

# Plot sources and detectors on head
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')
mne.datasets.fetch_fsaverage(subjects_dir=subjects_dir)
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5, cortex='low_contrast', background="white")
brain.add_head()
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['sources', 'detectors'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)

# Plot channels only
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5, cortex='low_contrast', background="white")
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['channels'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)

# Plot pairs only
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5, cortex='low_contrast', background="white")
brain.add_sensors(raw.copy().pick('hbo').info, trans='fsaverage', fnirs=['pairs'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)

[Screenshots: sources and detectors on the fsaverage head; channels only; source-detector pairs only]

@rob-luke (Member, Author):

Ok, I have done some more playing around to ensure that downstream processing can handle this many channels. One thing I've noticed is that many channels have NaNs in them, so this might be a useful snippet for end users:

import numpy as np
from itertools import compress

nan_mask = np.isnan(raw.get_data()).any(axis=1)
raw.info['bads'] = list(compress(raw.ch_names, nan_mask))  # bads must be channel names
# and/or drop the bad channels entirely
raw.drop_channels(raw.info['bads'])

After running this on John's data, almost 1000 source-detector pairs still remain.

@rob-luke (Member, Author) commented Oct 31, 2021

And I have run my standard GLM analysis on the channel-level data, then used the mne-nirs surface_projection (which just wraps MNE's stc_near_sensors) to get this...

[Screenshot: channel-level GLM results projected onto the cortical surface]

Red dots are sources. Black dots are detectors.

I don't know what is in the data or whether it's meaningful, but this demonstrates that the data moves through the end-to-end processing without errors.
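
In case it is useful to others, here is a rough sketch of that channel-level GLM plus surface projection using the mne-nirs GLM API (the design matrix is built from the raw annotations; the condition name and parameters are illustrative, and exact signatures may differ between mne-nirs versions):

from mne_nirs.experimental_design import make_first_level_design_matrix
from mne_nirs.statistics import run_glm

# Channel-level GLM on the haemoglobin data loaded above
design_matrix = make_first_level_design_matrix(raw)
glm_est = run_glm(raw, design_matrix)

# Project the channel-level estimates onto the cortical surface;
# surface_projection wraps mne.stc_near_sensors internally.
# 'A' stands in for a design-matrix column (condition) name.
brain = glm_est.surface_projection(condition='A', view='dorsal')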

I think the next steps are to get the small test file in and then let @JohnGriffiths see whether the branch works for him. Then it should be good for review.

Comment on lines +65 to +66

    NIRx      ICBM-152 MNI    mri
    Kernel    ICBM 2009b      mri
@rob-luke (Member, Author):

I have just taken the model names exactly as described by the vendors. I guess these might actually be the same thing?

Reply:

I don't know the bottom line, but I would be very concerned for my sanity if those two were meaningfully different MNI spaces.

@rob-luke (Member, Author):

haha agreed, I am pretty sure they are practically the same. Maybe I will just change them both to ICBM.

@julien-dubois-k commented Nov 2, 2021

They may be slightly different: there are several versions of the ICBM-152 MNI atlas (2009a, 2009b, 2009c; and within each, different flavors, symmetric vs. asymmetric). 152 refers to the number of individual subjects' scans that went into creating the atlas.

The 2009b is the highest-resolution atlas (0.5 mm isotropic) and thus the one we chose for running tissue segmentation and making meshes.

Calling them both ICBM-152 MNI is likely enough information.

Reply:

Yes, there are different template images, but the space, i.e. the coordinate frame (the target of the voxel-to-world transform in the header's affine matrix), should, I think, be the same.

(Whether MRI template images in standard space used for registration should be called 'atlases' is debatable, but it is common terminology.)

Reply:

Agreed on both points.

@rob-luke (Member, Author):

Thank you both for the information. I appreciate you sharing your knowledge here; I am learning lots as we go along :)

@rob-luke rob-luke added this to the 1.0 milestone Oct 31, 2021
@Zahra-M-Aghajan commented:

@rob-luke Thanks so much for doing this, everything is looking great!
We can easily add the extra field to metaDataTags. Should I wait until this is standardized (re: your open issue on the SNIRF spec)? I see that you are using the field MNE_coordFrame in the metaDataTags, which will be set to 'mri' in our case for now.

@rob-luke (Member, Author) commented Nov 3, 2021

Thanks @Zahra-M-Aghajan for being so open about this. Let's give the SNIRF developers another week to chime in. If they don't speak up in a week, it would be great to add the field MNE_coordFrame until a solution appears upstream. In this case, you would store the number 5 in the field; you can see the coding of coordinate frames in the FIFF format below. This isn't essential, but I think it would be easier for end users than needing to specify the coordinate frame manually each time.

#
# Coordinate frames
#
FIFF.FIFFV_COORD_UNKNOWN = 0
FIFF.FIFFV_COORD_DEVICE = 1
FIFF.FIFFV_COORD_ISOTRAK = 2
FIFF.FIFFV_COORD_HPI = 3
FIFF.FIFFV_COORD_HEAD = 4
FIFF.FIFFV_COORD_MRI = 5
FIFF.FIFFV_COORD_MRI_SLICE = 6
FIFF.FIFFV_COORD_MRI_DISPLAY = 7
FIFF.FIFFV_COORD_DICOM_DEVICE = 8
FIFF.FIFFV_COORD_IMAGING_DEVICE = 9
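
As a sketch of what storing that field could look like on the vendor side, assuming h5py and a hypothetical file name (the field name MNE_coordFrame and the value 5 = FIFFV_COORD_MRI come from the discussion above):

import h5py

# Add the custom MNE_coordFrame tag to an existing SNIRF file
with h5py.File('recording.snirf', 'r+') as f:
    f.create_dataset('nirs/metaDataTags/MNE_coordFrame', data=5)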

@dboas commented Nov 3, 2021 via email

@rob-luke (Member, Author):

Now using released testing data from mne-tools/mne-testing-data#90.
I will update the tests; then this should be ready for review.

@larsoner (Member):

Okay, ping for review once tests are updated and everything is green

@rob-luke rob-luke marked this pull request as ready for review November 20, 2021 00:20
@rob-luke (Member, Author):

All green! @larsoner @agramfort, could you please review?

@larsoner (Member) left a comment

Other than a few minor things, LGTM

Review comments (resolved): mne/io/snirf/_snirf.py, mne/io/snirf/tests/test_snirf.py
@rob-luke (Member, Author):

Thanks @larsoner

@rob-luke rob-luke marked this pull request as draft November 20, 2021 04:37
@rob-luke rob-luke marked this pull request as ready for review November 20, 2021 04:40
@agramfort (Member) left a comment

@larsoner merge if happy

@larsoner larsoner merged commit f621302 into mne-tools:main Nov 22, 2021
@larsoner (Member):

Thanks @rob-luke!

@larsoner (Member):

Argh, I just noticed there was no latest.inc entry. Could you make a quick PR to add one, @rob-luke? Or if you have plans for some other PR soon, just add it there?
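
For reference, a minimal sketch of what such a latest.inc changelog entry might look like, following MNE's reST changelog style (exact wording and credit format are illustrative):

- Add support for reading haemoglobin (HbO and HbR) data from SNIRF files in :func:`mne.io.read_raw_snirf` by `Robert Luke`_ (:gh:`9929`)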

@rob-luke (Member, Author):

Thanks @larsoner

I'll make a small PR for the missing changelog when I'm at work today. Sorry!
