Read haemoglobin data from SNIRF #9929
Conversation
@sstucker @dboas does Homer export SNIRF files in type 99999 HbO/HbR format? If so, is there a test file publicly available that I can test our implementation against to ensure compatibility with Homer? If Homer does export SNIRF in HbO/HbR but no file is available, I can create one with Homer myself, but has the SNIRF export code stabilised? Or should I wait till the validator etc. is completed?
Ok, the coordinate frame for Kernel data is 'mri'. I cannot determine the coordinate frame correctly from the SNIRF file alone, so the user needs to load it with the `optode_frame` argument (see mne-python/mne/io/snirf/_snirf.py lines 271 to 272 in 47c0112).
Using the measurement file provided by @JohnGriffiths everything seems to be looking good so far...

```python
import os.path as op

import mne
from mne.io import read_raw_snirf

kernel_hb = ('/Volumes/MassStorage2TB/rluke/Repositories/'
             'kernel-flow-tools/pitch_sub010_ft_ses01_1017-1706_kp-snf-hbm.snirf')

raw = read_raw_snirf(kernel_hb, optode_frame='mri', preload=True)

# Plot sources and detectors on head
subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')
mne.datasets.fetch_fsaverage(subjects_dir=subjects_dir)
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5,
                      cortex='low_contrast', background='white')
brain.add_head()
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['sources', 'detectors'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)

# Plot channels only
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5,
                      cortex='low_contrast', background='white')
brain.add_sensors(raw.info, trans='fsaverage', fnirs=['channels'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)

# Plot pairs only
brain = mne.viz.Brain('fsaverage', subjects_dir=subjects_dir, alpha=0.5,
                      cortex='low_contrast', background='white')
brain.add_sensors(raw.copy().pick('hbo').info, trans='fsaverage',
                  fnirs=['pairs'])
brain.enable_depth_peeling()
brain.show_view(azimuth=90, elevation=90, distance=500)
```
Ok, I have done some more playing around to ensure that downstream processing can handle so many channels. One thing I've noticed is that many channels have NaNs in them. So this might be a useful snippet for end-users (note that `raw.info['bads']` must be a list of channel names, not a boolean mask):

```python
import numpy as np
from itertools import compress

bad_mask = np.isnan(raw.get_data()).any(axis=1)
raw.info['bads'] = list(compress(raw.ch_names, bad_mask))
# and/or
raw.drop_channels(raw.info['bads'])
```

After running this on John's data it still leaves almost 1000 source-detector pairs.
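For readers without the Kernel file at hand, the same masking logic can be demonstrated on a synthetic array (the channel names below are made up for illustration):

```python
import numpy as np
from itertools import compress

# Synthetic data: 4 channels x 5 samples, two channels contain NaNs
data = np.array([
    [1.0, 2.0, 3.0, 4.0, 5.0],
    [1.0, np.nan, 3.0, 4.0, 5.0],  # bad: contains a NaN
    [0.5, 0.5, 0.5, 0.5, 0.5],
    [np.nan] * 5,                  # bad: all NaN
])
ch_names = ['S1_D1 hbo', 'S1_D2 hbo', 'S2_D1 hbo', 'S2_D2 hbo']

# Flag any channel containing at least one NaN, then keep the names
bad_mask = np.isnan(data).any(axis=1)
bad_names = list(compress(ch_names, bad_mask))
print(bad_names)  # → ['S1_D2 hbo', 'S2_D2 hbo']
```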
And I have run my standard GLM analysis on the channel-level data, then used mne-nirs. (In the figure, red dots are sources and black dots are detectors.) I don't know what is in the data or whether it's meaningful, but this demonstrates that the data moves through the end-to-end processing without errors. I think the next steps are to get the small test file and then let @JohnGriffiths see if the branch works for him. Then it should be good for review.
| Vendor | Template | Coordinate frame |
|---|---|---|
| NIRx | ICBM-152 MNI | mri |
| Kernel | ICBM 2009b | mri |
I have just taken the model names exactly as described by the vendors. I guess these might actually be the same thing?
I don't know the bottom line but I would be very concerned for my sanity if those two were meaningfully different MNI spaces.
haha agreed, I am pretty sure they are practically the same. Maybe I will just change them both to ICBM
they may be slightly different: there are several versions of the ICBM-152 MNI atlas (2009a, 2009b, 2009c; and within each, different flavors, symmetric vs. asymmetric). 152 refers to the number of individual subjects' scans that went into creating the atlas.
The 2009b is the highest resolution atlas (0.5mm isotropic) and thus the one we chose to run tissue segmentation and make meshes.
Calling them both ICBM-152 MNI is likely enough information.
Yes, there are different template images, but the space (the coordinate frame, i.e. the target of the voxel-to-world transform in the header's affine matrix) should, I think, be the same.
(Whether MRI template images in standard space used for registration should be called 'atlases' is debatable, but it's true that it is a common terminology.)
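To make the voxel-to-world point concrete, here is a toy affine (the numbers are illustrative, not taken from any real template header): the affine maps voxel indices to world (MNI) millimetre coordinates, so templates of different resolution can still target the same space.

```python
import numpy as np

# Hypothetical 0.5 mm isotropic affine: voxel indices -> world (MNI) mm.
# The diagonal encodes voxel size; the last column the world-space origin.
affine = np.array([
    [0.5, 0.0, 0.0, -96.0],
    [0.0, 0.5, 0.0, -132.0],
    [0.0, 0.0, 0.5, -78.0],
    [0.0, 0.0, 0.0, 1.0],
])

voxel = np.array([192, 264, 156, 1])  # homogeneous voxel coordinate
world = affine @ voxel
print(world[:3])  # → [0. 0. 0.]  i.e. this voxel sits at the MNI origin
```

A 1 mm template would have a different diagonal and origin column, but the resulting world coordinates would live in the same MNI space.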
agreed on both points.
Thank you both for the information. I appreciate you sharing your knowledge here, I am learning lots as we go along here :)
@rob-luke Thanks so much for doing this, everything is looking great!
Thanks @Zahra-M-Aghajan for being so open about this. Let's give the SNIRF developers another week to chime in. If they don't speak up in a week, it would be great to add the field MNE_coordFrame until a solution appears upstream. In this case, you would store the number 5 in the field; you can see the coding of coordinate frames in FIFF format at https://github.com/mne-tools/mne-python/blob/b97da4357f9ffb584eef5949c958187b3a7cc566/mne/io/constants.py#L241-L253. This isn't essential, but I think it would be easier for end users than needing to specify the coord frame manually each time.

I don't have specific comments on this as I am not an expert in these coordinate transformations. MRI people are best at this and we should follow their standards.

We can easily add the extra field to metaDataTags.. should I wait till this is standardised (re your open issue on SNIRF specs)? I see that you are using the field MNE_coordFrame in the metaDataTags, which will be set to 'mri' in our case for now..
Now using released testing data from mne-tools/mne-testing-data#90

Okay, ping for review once tests are updated and everything is green.

All green! @larsoner @agramfort could you please review?
Other than a few minor things, LGTM
Thanks @larsoner
@larsoner merge if happy
Thanks @rob-luke!
Argh, just noticed there was no changelog entry
Thanks @larsoner, I'll make a small PR when at work today for the missing changelog. Sorry!
What does this implement/fix?
The SNIRF file format (https://github.com/fNIRS/snirf) supports a great variety of fNIRS data types. Currently MNE only supports reading continuous wave amplitude data from SNIRF files (referred to in SNIRF as type 1).
MNE-Python currently supports reading/processing continuous wave and frequency domain fNIRS signals (time domain coming in the next few weeks hopefully!). The raw signals are then converted to haemoglobin concentration for analysis. This PR adds support for reading the processed oxy- and deoxyhaemoglobin signals (HbO & HbR) from SNIRF files (referred to in SNIRF as type 99999 processed, with dataTypeLabel HbO and HbR).
Some devices export directly in HbO/HbR, so this enables support for those devices. It also enables sharing of data that has been preprocessed in other software.
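The dispatch described above can be sketched as a small lookup. This is illustrative only, not MNE's actual reader code, and the function name is made up; the returned strings 'fnirs_cw_amplitude', 'hbo' and 'hbr' are MNE channel-type names:

```python
def snirf_channel_type(data_type, data_type_label=None):
    """Map SNIRF dataType/dataTypeLabel to an MNE-style channel type.

    Illustrative sketch: SNIRF dataType 1 is raw continuous-wave
    amplitude; dataType 99999 is processed data, disambiguated by
    its dataTypeLabel (e.g. 'HbO' or 'HbR').
    """
    if data_type == 1:
        return 'fnirs_cw_amplitude'
    if data_type == 99999:
        labels = {'HbO': 'hbo', 'HbR': 'hbr'}
        if data_type_label in labels:
            return labels[data_type_label]
        raise ValueError(f'Unsupported dataTypeLabel: {data_type_label}')
    raise ValueError(f'Unsupported SNIRF dataType: {data_type}')


print(snirf_channel_type(1))             # → fnirs_cw_amplitude
print(snirf_channel_type(99999, 'HbO'))  # → hbo
```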
Additional information
This began in the #9661 PR. However, that PR was much larger and also wanted to add support for a completely new type of fNIRS (time-domain fNIRS). This breaks out a more manageably sized PR. Then, once we have a more complete understanding, we can tackle adding the TD-fNIRS types etc.
Currently no changelog, as this won't make it into 0.24; once 0.24 is released I will rebase, add this to the new list, and add @Zahra-M-Aghajan as co-author, as much of the actual code is from her original PR.
FYI: @Zahra-M-Aghajan @julien-dubois-k