
ENH: Use OpenMEEG for iEEG #1624

Open

maedoc opened this issue Oct 24, 2014 · 29 comments

@maedoc
Contributor

maedoc commented Oct 24, 2014

I'd like to use OpenMEEG to generate forward solutions, especially for sEEG (#1623), under the assumption that MNE's forward module doesn't provide the same support for sEEG/internal potential as OpenMEEG (?).

Assuming I have the normal BEM surfaces, and additionally the positions of the sEEG electrodes, I'm able to get s/M/EEG lead fields, more or less by hand using the command line OpenMEEG tools.

Is this already available in MNE? I could find no reference to OpenMEEG in MNE. If I make a PR for this, which is preferred: a) calling the command-line utilities, or b) using the Python module?

@maedoc maedoc changed the title Q Using OpenMEEG Q: Using OpenMEEG Oct 24, 2014
@agramfort
Member

There is no integration of OpenMEEG into MNE yet but this is planned for 2015. There will be some help coming soon. But any progress you make already will make things go faster.

You should do it via the Python bindings.

thanks @maedoc !

@agramfort
Member

ping @souravamishra

@maedoc
Contributor Author

maedoc commented Oct 27, 2014

I've had some trouble constructing OpenMEEG objects directly with the SWIG-based API (sometimes resulting in segfaults), and the examples all seem to use the file-based constructors. Is this normal?

@agramfort
Member

I've had some trouble constructing OpenMEEG objects directly with the SWIG-based API (sometimes resulting in segfaults),

oups :-/

and the examples all seem to use the file-based constructors. Is this normal?

have a look at:

https://raw.githubusercontent.com/openmeeg/openmeeg_sample_data/master/compute_leadfields.py

@maedoc
Contributor Author

maedoc commented Oct 28, 2014

have a look at:

This is what I meant: objects like HeadModel are constructed by passing the name of a file, where I would have expected a Python API allowing them to be constructed explicitly via __init__ & regular methods.

OTOH this is probably the most well-tested way of using OpenMEEG, no?

python bindings

I won't belabor this a third time, but won't this introduce a binary dependency on platform, Python & NumPy versions? That could be a headache for distribution, compared to just "make sure OpenMEEG is in $PATH".

I'm rounding out a first version, assuming the 3-layer BEM, and will send a PR to find out what is preferred for the API etc.

@maedoc
Contributor Author

maedoc commented Oct 28, 2014

Also, capturing the stdout of a subprocess is easier than dealing with a shared library that reports errors via things like `cout << "error"`. But I don't know how OpenMEEG handles that.
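A minimal sketch of that subprocess approach (the `run_tool` helper is hypothetical; in practice `cmd` would be one of the `om_*` tools, assumed to be on `$PATH`):

```python
import subprocess
import sys

def run_tool(cmd):
    """Run a command-line tool, capturing stdout/stderr so messages
    written via std::cout / std::cerr are not lost."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        raise RuntimeError(f"{cmd[0]} failed:\n{result.stdout}{result.stderr}")
    return result.stdout

# demo with a portable command; a real call would look more like
# run_tool(["om_assemble", "-HeadMat", "head.geom", "head.cond", "head.mat"])
out = run_tool([sys.executable, "-c", "print('assembled')"])
```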

@agramfort
Member

point taken, although it's a bit disappointing to have a file-based approach.
If you can have everything in memory it's nicer and more efficient.

I did not think about the std::cout usage... no experience with that.

@souravamishra should soon be able to help improve the bindings.

don't hesitate to type the few lines you would like to be able to write to do the job.

@larsoner larsoner self-assigned this Oct 3, 2016
@larsoner
Member

larsoner commented Oct 3, 2016

@maedoc let me know if you have some data I could treat with

@larsoner
Member

larsoner commented Oct 3, 2016

Test, not treat (silly mobile)

@maedoc
Contributor Author

maedoc commented Oct 4, 2016

@Eric89GXL there are several epilepsy s/M/EEG data sets, but they would require clinical release. Either I could run tests myself, or ask the right person about how to pass the data along. What do you have in mind?

@larsoner
Member

larsoner commented Oct 4, 2016

Ultimately we need to have some unit tests, and ideally some example showing the functionality. Maybe we can use the sample dataset structural to fake it so we don't have to have real data -- and then you could validate it actually works for your restricted datasets. But at least I'd need some channel definitions. Not sure how the locations are stored in your data or how they will interface with OpenMEEG yet, so we'll have to figure it out.
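For the fake-data route, synthetic depth-electrode contact positions are easy to generate; a sketch (the helper name and geometry are made up for illustration; positions in meters):

```python
import numpy as np

def fake_seeg_electrode(entry, target, n_contacts=10):
    """Place n_contacts evenly along a straight trajectory from an
    entry point on the skull toward a deep target, like a depth electrode."""
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    frac = np.linspace(0.0, 1.0, n_contacts)[:, np.newaxis]
    return entry + frac * (target - entry)

# one electrode advancing 4 cm along -x from a lateral entry point
pos = fake_seeg_electrode([0.07, 0.0, 0.0], [0.03, 0.0, 0.0])
```

Real tests would of course need these positions expressed in the right coordinate frame relative to the sample dataset's BEM surfaces.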

@maedoc
Contributor Author

maedoc commented Oct 4, 2016

If simulated data would work, then I'll be able to put it together quickly.

@maedoc
Contributor Author

maedoc commented Oct 12, 2016

@agramfort wrt binding OpenMEEG, I think a Cython module providing a minimal in-process API would be nicer than the SWIG approach, which assumes the head model & surfaces are stored in files. It would statically link the OpenMEEG libs and be packaged into per-platform wheels. wdyt?

@agramfort
Member

agramfort commented Oct 12, 2016 via email

@maedoc
Contributor Author

maedoc commented Mar 1, 2017

I'm at a point where I need this to work, and I can implement what I mentioned above over the next few weeks, but I think it's worth some discussion of use cases. My current workflow is something like

  • format data from FreeSurfer recon
    • generate interfaces with mne_watershed_bem
    • reduce vtx/face count mris_downsample -d 0.1
    • load each BEM surface, lh.pial and rh.pial w/ nibabel, write out brainvisa format
    • generate subcortical grid sources subcortical.dip from aseg.mgz
  • align sensors with T1 space, move surfaces & subcort grid sources to sensor coord sys
  • make volume conduction model
    • om_assemble head model
    • om_minverser head matrix
  • make source models
    • om_assemble -DipSourceMat head_model.{geom,cond} subcortical.{dip,dsm}
    • om_assemble -SurfSourceMat head_model.{geom,cond} cortical-$hemi.{tri,ssm}
  • make sensor models
    • om_assemble -h2em head_model.{geom,cond} EEG.sensors EEG.h2em
    • om_assemble -h2mm head_model.{geom,cond} MEG.sensors MEG.h2mm
    • om_assemble -h2ipm head_model.{geom,cond} $seeg_sensor_file seeg.h2ipm
  • make seeg gain matrices
    • om_assemble -ds2ipm head_model.{geom,cond} cortical-$hemi.dip $seeg_sensor_file seeg-$hemi.ds2ipm
    • om_gain -InternalPotential head-inv.mat cortical-$hemi.ssm seeg.h2ipm seeg-$hemi.ds2ipm seeg-$hemi.gain.mat
  • make meg/eeg gain matrices
    • om_gain -$modality head-inv.mat $source $sensor_model.* $sensor_model.gain.mat

This all assumes that the FreeSurfer recon is done and mne_watershed_bem is available. Is that reasonable in general for MNE? A more minimal assumption is that sensor-aligned BEM surfaces can be provided.

At least, I can start on wrapping some of the C++ APIs which would be required to replace the om_* calls.

wdyt?
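The steps above can be sketched from Python by assembling the same `om_*` command lines (file names are the placeholders from the list above; flag spellings such as `-HeadMat` should be double-checked against the OpenMEEG documentation):

```python
import subprocess

def seeg_gain_commands(geom="head_model.geom", cond="head_model.cond",
                       dip="subcortical.dip", seeg="seeg.sensors"):
    """Return the om_* command lines for one sEEG gain matrix."""
    return [
        # volume conduction model
        ["om_assemble", "-HeadMat", geom, cond, "head.mat"],
        ["om_minverser", "head.mat", "head-inv.mat"],
        # source and sensor models
        ["om_assemble", "-DipSourceMat", geom, cond, dip, "sources.dsm"],
        ["om_assemble", "-h2ipm", geom, cond, seeg, "seeg.h2ipm"],
        ["om_assemble", "-ds2ipm", geom, cond, dip, seeg, "seeg.ds2ipm"],
        # internal-potential gain matrix
        ["om_gain", "-InternalPotential", "head-inv.mat", "sources.dsm",
         "seeg.h2ipm", "seeg.ds2ipm", "seeg.gain.mat"],
    ]

def run_pipeline():
    for cmd in seeg_gain_commands():  # requires OpenMEEG on $PATH
        subprocess.run(cmd, check=True)
```

An in-process binding would replace these subprocess calls with library calls on in-memory arrays, but the data flow would stay the same.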

@agramfort
Member

thx @maedoc for the update.

The first section (formatting data from the FreeSurfer recon) should ideally be done with MNE tools only, and we should not rely on BrainVISA formats; instead we should pass NumPy arrays to OpenMEEG directly in memory from Python.

cc @papadop @eolivi @mclerc

@maedoc
Contributor Author

maedoc commented Mar 9, 2017

we should not rely on brainvisa formats by passing numpy arrays

I don't want to replicate that here; it's just what I was doing in the past. Ideally we'd use nibabel to load the formats, etc.

@agramfort
Member

agramfort commented Mar 9, 2017 via email

@maedoc
Contributor Author

maedoc commented Nov 16, 2017

I've not done anything here since we're currently focused on sEEG with lots of subcortical structures, where we don't have source orientation info, so we use a simple 1/r² rule 🙉.
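For reference, that orientation-free fallback is trivial to sketch with NumPy (an unnormalized 1/r² attenuation, not a proper forward model; ignores conductivities entirely):

```python
import numpy as np

def inv_square_gain(sources, sensors):
    """Gain g[i, j] ~ 1 / r**2 between source j and sensor i,
    ignoring source orientation and tissue conductivity (arbitrary units)."""
    diff = sensors[:, np.newaxis, :] - sources[np.newaxis, :, :]
    r2 = np.sum(diff ** 2, axis=-1)
    return 1.0 / r2

sources = np.array([[0.00, 0.0, 0.0], [0.02, 0.0, 0.0]])  # meters
sensors = np.array([[0.01, 0.0, 0.0], [0.05, 0.0, 0.0]])
g = inv_square_gain(sources, sensors)  # shape (n_sensors, n_sources)
```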

@mclerc

mclerc commented Nov 16, 2017 via email

@maedoc
Contributor Author

maedoc commented Jan 28, 2019

@mclerc @agramfort friendly question: have there been any updates on MNE-Python <-> OpenMEEG?

@agramfort
Member

agramfort commented Jan 28, 2019 via email

@larsoner larsoner added this to To do in Sprint Paris 2019 Mar 26, 2019
@larsoner larsoner added the Core label Mar 27, 2019
@larsoner
Member

FYI there is some progress being pushed here in openmeeg/openmeeg#443 and conda-forge/openmeeg-feedstock#18

@larsoner
Member

... the TL;DR of those is basically that support will hopefully be added to MNE-Python in the next couple of months, starting with:

  1. Just macOS + Linux via conda-forge
  2. Add support for Windows via conda-forge
  3. Add support for PyPI

As part of (1) we'll make a PR to MNE-Python to hopefully wrap OpenMEEG nicely, so that you can pass it standard MNE-Python objects and get back a Forward instance just like with make_forward_solution (and maybe even via that function, we'll see!).

@agramfort
Member

it's now possible, thanks to the hard work of @larsoner, to install openmeeg with conda or pip on all 3 platforms 🎉

it means that the https://github.com/mne-tools/mne-openmeeg project can be resurrected to offer alternative forward models for MEG/EEG and, importantly, iEEG.

@maedoc do you have a bit of bandwidth to help here? Otherwise I'll restart the ball after my summer break.

@mclerc

mclerc commented Aug 2, 2022 via email

@maedoc
Contributor Author

maedoc commented Aug 2, 2022

iEEG is an important use case for our team. I could look at this after the summer break also.

@ualsbombe
Contributor

Thanks for this great effort, excited about it!
However, I get this error when I follow the instructions on https://github.com/mne-tools/mne-openmeeg:
[screenshot of error message]

@agramfort
Member

agramfort commented Aug 6, 2022 via email

@larsoner larsoner removed their assignment Aug 23, 2022
@larsoner larsoner changed the title Q: Using OpenMEEG ENH: Use OpenMEEG for iEEG Oct 1, 2023
@larsoner larsoner removed the Core label Oct 2, 2023