
Tools for multiple modality data #941

Open
matthew-brett opened this issue Jul 30, 2020 · 3 comments

@matthew-brett (Member)

The basic experimental use case is that we have several simultaneously recorded measures, e.g. fMRI, EEG, and breathing.

The first concern is to have a common container for neuroimaging data from different modalities.

The second is to conveniently handle temporal synchronization between measures, and then interpolation onto a common temporal grid.

(Thomas Vincent)
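
To make the container idea concrete, here is a minimal sketch of what such an object might hold. The `recordings` name, shapes, and sampling rates are all hypothetical; the point is just a modality-keyed mapping where each modality carries its own data array and its own time axis:

```python
import numpy as np

# Hypothetical minimal container: each modality keeps its own data
# array and its own time axis (seconds from a shared clock zero).
recordings = {
    # fMRI: one volume every 2 s (TR = 2 s), 150 volumes of 64x64x30
    "fmri": {"times": np.arange(0, 300, 2.0),
             "data": np.zeros((150, 64, 64, 30))},
    # EEG: 64 channels sampled at 1 kHz
    "eeg": {"times": np.arange(0, 300, 0.001),
            "data": np.zeros((300_000, 64))},
    # Respiration belt sampled at 50 Hz
    "breathing": {"times": np.arange(0, 300, 0.02),
                  "data": np.zeros(15_000)},
}
```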

@matthew-brett (Member, Author)

Has anyone checked MNE for functionality like this?

Can anyone give some detail on when one would resample to the same grid? When would that make sense for modalities like fMRI and EEG, where the EEG is thousands of times better resolved in time?

@agramfort (Contributor)

@matthew-brett MNE handles multivariate signals with regular and common sampling. Our internal data object is a numpy array.
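
For reference, a minimal sketch of the pattern being described, using only stable public MNE API (`mne.create_info`, `mne.io.RawArray`); the channel names and rates here are made up:

```python
import numpy as np
import mne

# MNE wraps a regularly sampled multivariate signal in a Raw object
# backed by a single (n_channels, n_times) numpy array.
sfreq = 1000.0                       # one common sampling rate, Hz
data = np.zeros((64, 300_000))       # 64 channels, 300 s at 1 kHz
info = mne.create_info(
    ch_names=[f"EEG{i:03d}" for i in range(64)],
    sfreq=sfreq, ch_types="eeg")
raw = mne.io.RawArray(data, info)
print(raw.get_data().shape)          # (64, 300000)
```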

@thomas-vincent

When dealing with very different temporal resolutions, resampling to the same grid is indeed not wanted, but the signals should at least be aligned. A common case is to temporally locate an event in one modality and get the aligned values in the other modality. This is quite trivial to handle at the low level with custom numpy index manipulation, but maybe there could be more convenient higher-level tools.
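
As a sketch of the kind of low-level indexing I mean (all names and numbers hypothetical): locating an event from the EEG in the fMRI, and the fMRI volume onsets in the EEG, with `np.argmin` and `np.searchsorted`:

```python
import numpy as np

# Two modalities on the same clock but very different grids.
fmri_times = np.arange(0, 300, 2.0)        # volume onsets, TR = 2 s
eeg_times = np.arange(0, 300, 0.001)       # 1 kHz EEG
eeg_data = np.zeros((len(eeg_times), 64))  # (n_samples, n_channels)

# An event detected at t = 42.137 s in the EEG trace: which fMRI
# volume is nearest in time?
event_time = 42.137
vol_idx = int(np.argmin(np.abs(fmri_times - event_time)))

# Conversely, the EEG sample aligned with each fMRI volume onset.
eeg_idx = np.clip(np.searchsorted(eeg_times, fmri_times),
                  0, len(eeg_times) - 1)
eeg_at_volumes = eeg_data[eeg_idx]         # shape (150, 64)
```

The point is that every call site currently has to re-derive this searchsorted/clip dance by hand; a higher-level tool could wrap exactly this pattern.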

For fMRI and NIRS, whose temporal resolutions are more comparable, it makes more sense to resample onto the same grid.
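
For that case, linear interpolation with `np.interp` is often enough; a sketch with made-up rates:

```python
import numpy as np

# Resample a 10 Hz NIRS channel onto the fMRI time grid (TR = 2 s)
# with linear interpolation.
fmri_times = np.arange(0, 300, 2.0)
nirs_times = np.arange(0, 300, 0.1)
nirs_signal = np.zeros(len(nirs_times))  # stand-in for real data

nirs_on_fmri_grid = np.interp(fmri_times, nirs_times, nirs_signal)
```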

The spatial aspect is also important. I think the location of a measure, as well as its spatial sensitivity, should be better encoded. For instance, with MRI a 3D grid is assumed, and the voxel is an implicit sensor locus with a sharp, cube-shaped sensitivity volume. For NIRS and EEG, the sensitivity volume is larger and more diffuse. There may be some ground to model these in a common way, which I think would greatly ease multi-modal visualization.
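
One way this common model could look, purely as a hypothetical sketch (all class names and numbers invented here): every measure is a sensor locus with a position and a sensitivity profile over space, so the voxel becomes the special case of a sharp box-shaped profile and a NIRS channel the diffuse case:

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class SensorLocus:
    """A measurement site: a position plus a spatial sensitivity profile."""
    position: np.ndarray  # (3,) coordinates in a shared space

    def sensitivity(self, points: np.ndarray) -> np.ndarray:
        """Weight of each of the (N, 3) points in this sensor's measurement."""
        raise NotImplementedError


@dataclass
class Voxel(SensorLocus):
    size: float = 3.0  # isotropic edge length, mm

    def sensitivity(self, points):
        # Sharp cube: weight 1 inside the voxel, 0 outside.
        inside = np.all(np.abs(points - self.position) <= self.size / 2,
                        axis=1)
        return inside.astype(float)


@dataclass
class DiffuseSensor(SensorLocus):  # e.g. a NIRS channel or EEG electrode
    fwhm: float = 20.0  # spatial spread, mm

    def sensitivity(self, points):
        # Smooth Gaussian fall-off as a crude stand-in for the real
        # (e.g. banana-shaped NIRS) sensitivity profile.
        sigma = self.fwhm / 2.3548  # FWHM -> standard deviation
        d2 = np.sum((points - self.position) ** 2, axis=1)
        return np.exp(-d2 / (2 * sigma ** 2))
```

A multi-modal renderer could then treat both the same way: evaluate `sensitivity` on a display grid and blend the results.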
