
montage / digmontage / transforms documentation #6605

Open
jasmainak opened this issue Jul 25, 2019 · 4 comments

@jasmainak
Member

I was trying to understand how all the transforms work with @teonbrooks. I don't have time to write full-fledged documentation, but I'll jot down the main points -- they could form the skeleton of a tutorial or be added to an existing one.

  • You have 3 coordinate systems: device, head, and MRI.
  • The kinds of points you are considering are: hpi (in device space; also called elp when in digitizer head space), lpa, rpa, nasion, and digitized head shape points.
  • The goal is to get two transforms: head-device and head-mri. The former is stored in info; the latter is the so-called -trans.fif file, which is obtained by coregistration.
  • Let's consider the first transform, head-device -- it's estimated using HPI coils. You know the HPI coil locations in head space. Then, during the recording, the coils emit an RF pulse at 330 Hz (?). This can be filtered out and then, with a dipole fit using a spherical head model, you find the location of these coils in device space. Now you have the locations in both device and head space, so the dev-head transform can be computed. This is typically done in the acquisition device itself and then read into raw.info['dev_head_t'].
  • Now let's consider the second transform: for this you need the lpa, rpa, and nasion. These are digitized during the measurement in head space, are available in raw.info['dig'], and are read in using the digitization functions that @massich is working on. Next, you need to mark the same points in MRI space manually. Once you have the corresponding points, you can coregister the two sets. Head shape points can be used to further refine the coregistration.
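Both transforms described above come down to the same underlying problem: given matched points in two coordinate frames, find the rigid (rotation + translation) transform between them. As a rough self-contained illustration (this is not MNE-Python's internal code, and the point values are made up), a Kabsch-style SVD fit of matched fiducials looks like this:

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform: dst ~ src @ r.T + t (points as rows)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)  # cross-covariance SVD
    r = (u @ vt).T
    if np.linalg.det(r) < 0:  # guard against reflections
        vt[-1] *= -1
        r = (u @ vt).T
    t = dst.mean(axis=0) - src.mean(axis=0) @ r.T
    return r, t

# hypothetical fiducials in head space: lpa, rpa, nasion, one head shape point
head_pts = np.array([[-0.07, 0.00, 0.00],
                     [ 0.07, 0.00, 0.00],
                     [ 0.00, 0.10, 0.00],
                     [ 0.00, 0.05, 0.08]])

# the same points expressed in a second frame (rotated about z and shifted)
theta = np.deg2rad(20)
r_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                   [np.sin(theta),  np.cos(theta), 0.],
                   [0.,             0.,            1.]])
mri_pts = head_pts @ r_true.T + np.array([0.01, -0.02, 0.03])

r_est, t_est = fit_rigid(head_pts, mri_pts)  # recovers r_true and the shift
```

In practice the noiseless exact fit above is the idealized case; with real digitization noise the same SVD machinery gives the least-squares answer, which is why extra head shape points help refine the coregistration.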
@jasmainak jasmainak added the DOC label Jul 25, 2019
@teonbrooks
Member

nicely done. i'm a great teacher 😂

@larsoner larsoner added this to the 0.21 milestone Mar 4, 2020
@larsoner
Member

@jasmainak want to work on a tutorial for this for 0.21 in the next week or two? For the MRI coord frame you can now refer to https://github.com/mne-tools/mne-python/blob/master/tutorials/source-modeling/plot_background_freesurfer_mne.py, so really we "just" need a tutorial to document the sensor (MEG, EEG) coordinate frames, and say that the head_mri_t is covered in another related tutorial.
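For tutorial purposes, it may help to show mechanically what these transforms are: each is a 4x4 homogeneous matrix, and mapping positions between frames is a matrix multiply (MNE-Python provides mne.transforms.apply_trans for this). A self-contained numpy sketch, with a made-up dev_head_t matrix for illustration:

```python
import numpy as np

def apply_trans(trans, pts):
    """Apply a 4x4 homogeneous transform to an (n, 3) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
    return (pts_h @ trans.T)[:, :3]

# hypothetical device->head transform: 90 deg rotation about z + translation
dev_head_t = np.array([[0., -1., 0., 0.00],
                       [1.,  0., 0., 0.01],
                       [0.,  0., 1., 0.04],
                       [0.,  0., 0., 1.  ]])

sensor_pos_dev = np.array([[0.1, 0.0, 0.15]])        # a point in device space
sensor_pos_head = apply_trans(dev_head_t, sensor_pos_dev)

# the inverse matrix maps back from head to device coordinates
back = apply_trans(np.linalg.inv(dev_head_t), sensor_pos_head)
```

Chaining device->head->MRI is then just multiplying the two matrices, which is essentially what happens when sensor positions are shown on the MRI.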

@jasmainak
Member Author

Thanks for pinging me @larsoner. It's unlikely I'll have the time right now, although I'd love to do it.

I need to properly sit down and think about what should go into this. Maybe a good project for 1-2 days of a coding sprint :-)

@larsoner larsoner modified the milestones: 0.21, 0.22 Sep 9, 2020
@larsoner
Member

larsoner commented Dec 1, 2020

Removing the milestone for this -- this is another place that could be refactored to have this information:

https://mne.tools/dev/overview/implementation.html?highlight=head%20mri%20device#meg-eeg-and-mri-coordinate-systems

This and plot_background_freesurfer_mne.py should cross-reference each other well, and hopefully not duplicate too much content.

@larsoner larsoner removed this from the 0.22 milestone Dec 1, 2020