[MRG] Cortex surface projections #1516
Conversation
Thx! I have a general question: overall the difference with pycortex is not that large -- yet we lose a few details, like in the pSTS region. However, you seem to suggest that your strategy is much faster: have you checked? Can you clarify the difference? Why didn't you use pycortex or very similar code?
nilearn/plotting/surf_plotting.py
    constant_values=np.nan)[sample_slices]
texture = np.nanmean(samples, axis=2)
if not(np.isfinite(texture).all()):
    raise IndexError()
Perhaps consider adding a more verbose error message
+1. This will not help the user.
nilearn/plotting/surf_plotting.py
image: Nifti image, 3d or 4d.
mesh_nodes: array-like, shape n_nodes * 3

image : Nifti image, 3d or 4d.
this does not yet seem to make the standard mention of nii-like image compatibility
@@ -10,11 +10,17 @@
import numpy as np
import nibabel as nb
several imports in here appear to not follow the "basic->more specific/elaborate" convention...
@@ -0,0 +1,17 @@
from matplotlib import pyplot as plt
You need a docstring with a title, for the gallery. Sphinx-gallery will refuse to build a gallery item from this.
FYI: CircleCI is now fixed and full details about the fix can be found in this thread.
def _piecewise_const_image():
    w, h = 7, 10
    image = np.random.uniform(0, 1, size=(w, h))
I would like to avoid uncontrolled RNGs in the codebase. The right way of doing this is to create a local RNG:

rng = np.random.RandomState(0)
image = rng.uniform(...)
Sorry, I hadn't realized that this was an example, and not a test.
I would simply seed the global RNG here: it's simpler and the side effect is less important.
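For reference, the two approaches discussed here can be sketched side by side (a minimal example; `RandomState` is NumPy's legacy seedable generator):

```python
import numpy as np

# In library code or tests: a local, explicitly seeded RNG keeps the
# result reproducible without touching global state.
rng = np.random.RandomState(0)
image = rng.uniform(0, 1, size=(7, 10))

# In a standalone gallery example: seeding the global RNG is simpler,
# and the side effect matters less for a short script.
np.random.seed(0)
image2 = np.random.uniform(0, 1, size=(7, 10))

# Both strategies give the same deterministic result across runs.
assert np.allclose(image, image2)
```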
as a result, surface projection is slower, even when using interpolation='nearest'
Hello @bthirion, sorry for the late answer. I think that the extra sharpness you see in the neurovault snapshot is mostly … Fig. 4 in the pycortex paper (Gao, J. S., Huth, A. G., Lescroart, M. D., & …, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4586273/) …

This mapping of pixels into the image is done by WebGL, a javascript wrapper … I think we don't want this because these dependencies are heavy (and obviously …).

There is a possibility to get a value for each mesh vertex with pycortex, but as …

I have added sampling along a normal to the cortex to what is implemented in … For now, the PR has the two interpolation methods offered in scipy: nearest and linear.

There are still some extra details in the pycortex images. Some possible explanations:
Finally, it is a bit hard to compare computation times because of pycortex's …

pycortex: nearest: 288 ms
nilearn: nearest (line): 79 ms

If the objective is just to plot one image, all of these are fine (except maybe …). I think that all images look kind of similar and the projection doesn't have a …

Some examples for two images (neurovault ids 32015 and 36044):

image 32015:
pycortex nearest: [image]
nilearn ball, 5mm: [image]
nilearn line, linear: [image]
nilearn line, nearest: [image]

image 36044:
pycortex lanczos: [image]
Beautiful and pedagogical contribution!
""" | ||
Illustration of the volume to surface sampling scheme | ||
===================================================== | ||
|
Please add a bit more here.
Overall the structure of this second example does not follow the Nilearn conventions for examples (present them as a script rather than functions). Can this be reorganized a little bit?
Yes if we keep it this example should be reworked. Originally, I had only thrown
it in there to illustrate the sampling method for reviewers by producing the 2d
plot at the top of the PR, and was planning to remove it. However @GaelVaroquaux
suggested that it might be interesting to users as well. The only problem is
that in that case we need a coord_transform that can handle 2d images, which is
why I'm not using nilearn.image.coord_transform for now. It's a one-line
function though.
If we keep it I will also add an illustration for the line sampling.
nilearn/plotting/surf_plotting.py
# could be replaced by nilearn.image.resampling.coord_transform, but
# it only works for 3d images and 2d is useful for visu and debugging.
def _transform_coord(coords, affine):
    return affine.dot(np.vstack([coords.T, np.ones(coords.shape[0])]))[:-1].T
there's a function for that.
Yes, image.resampling.coord_transform. But it only handles 3d data, so if we
want to keep the example which illustrates the sampling methods in 2d, we need
this (see comment). Or is there another function for coord transforms in
nilearn?
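For illustration, here is a dimension-agnostic version of that one-liner (a sketch mirroring the `_transform_coord` helper quoted in the diff above; `transform_coord` is just an illustrative name):

```python
import numpy as np

def transform_coord(coords, affine):
    """Apply a (d+1, d+1) affine to an (n, d) array of coordinates.

    Works for any dimension d by appending a homogeneous coordinate.
    """
    coords = np.asarray(coords)
    homogeneous = np.vstack([coords.T, np.ones(coords.shape[0])])
    return affine.dot(homogeneous)[:-1].T

# 2d example: a pure translation by (1, 2).
affine = np.array([[1., 0., 1.],
                   [0., 1., 2.],
                   [0., 0., 1.]])
points = np.array([[0., 0.], [3., 4.]])
print(transform_coord(points, affine))  # [[1. 2.] [4. 6.]]
```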
nilearn/plotting/surf_plotting.py
def _face_normals(mesh):
    vertices, faces = load_surf_mesh(mesh)
How do you ensure that these are outer normals? Is it from the mesh triangle encoding conventions (should be made explicit then)?
Usually when faces in a triangular mesh are described by their ordered vertices,
the normal given by the right-hand rule points "outwards". I think that this
convention is respected by cortical meshes because that is what matplotlib
expects, and surf_plotting.plot_surf passes the faces to matplotlib without
modifying them (and it works fine). Can anyone confirm? I will make this explicit.
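A minimal sketch of the right-hand-rule convention being discussed (not the PR's exact code): the normal of a face (a, b, c) is the cross product (b - a) × (c - a), so counter-clockwise vertex ordering, seen from outside, yields outward normals:

```python
import numpy as np

def face_normals(vertices, faces):
    """Unit normals of triangular faces, using the right-hand rule
    on the ordered face vertices."""
    tris = vertices[faces]  # shape (n_faces, 3, 3)
    normals = np.cross(tris[:, 1] - tris[:, 0],
                       tris[:, 2] - tris[:, 0])
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)

# A single triangle in the z=0 plane, with vertices ordered
# counter-clockwise when seen from +z: the normal points towards +z.
vertices = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]])
faces = np.array([[0, 1, 2]])
print(face_normals(vertices, faces))  # [[0. 0. 1.]]
```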
nilearn/plotting/surf_plotting.py
def _ball_sample_locations(mesh, affine, ball_radius=3, n_points=100):
    """Get n_points regularly spaced inside a ball."""
    vertices, faces = mesh
    if affine is None:
I find this a bit surprising for a non-public function
nilearn/plotting/surf_plotting.py
        affine = np.eye(vertices.shape[1] + 1)
    if normals is None:
        normals = _vertex_normals(mesh)
    offsets = np.linspace(-segment_half_width, segment_half_width, n_points)
I would take only the outer normal. Why do you center the interval?
The problem is that I was not sure if we would always get a pial surface, or
sometimes a white matter surface, or something in between. If it's always pial
we can shift the interval inwards. We could also expose the shift as a
parameter, or ask the user to specify what kind of surface it is. Finally, I
think we should still keep at least a small portion of it on the other side to
be more robust to small registration problems (this is what caret does).
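The trade-off being discussed could be sketched by exposing the shift as a parameter (hypothetical names, not the PR's API): sample points along each vertex normal, optionally biased inwards while keeping a small outer portion for robustness:

```python
import numpy as np

def line_sample_points(vertices, normals, half_width=3., n_points=10,
                       inward_shift=0.):
    """Sample locations along each vertex normal.

    With inward_shift=0 the interval is centred on the vertex; a
    positive shift moves it inwards (useful if the mesh is a pial
    surface) while still keeping some points on the outer side.
    """
    offsets = np.linspace(-half_width - inward_shift,
                          half_width - inward_shift, n_points)
    # shape: (n_vertices, n_points, 3)
    return (vertices[:, np.newaxis, :]
            + offsets[np.newaxis, :, np.newaxis] * normals[:, np.newaxis, :])

vertices = np.zeros((1, 3))
normals = np.array([[0., 0., 1.]])  # outward normal along +z
pts = line_sample_points(vertices, normals, half_width=3., n_points=7,
                         inward_shift=1.)
print(pts[0, :, 2])  # offsets from -4 to 2 along the normal
```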
nilearn/plotting/surf_plotting.py
def _line_sample_locations(
        mesh, affine, segment_half_width=3., n_points=100, normals=None):
n_points=10 probably suffices as a default?
@@ -388,3 +395,36 @@ def test_plot_surf_roi_error():
        'Invalid input for roi_map',
        plot_surf_roi, mesh,
        roi_map={'roi1': roi1, 'roi2': roi2})


def _ball_sampling():
test_ball_sampling()
proj_2 = surf_plotting.niimg_to_surf_data(
    mni_rot, fsaverage, kind=kind, interpolation='linear')
assert_true((sklearn.preprocessing.normalize([proj_1])[0] *
             sklearn.preprocessing.normalize([proj_2])[0]).sum() > .998)
Please add comment to explain what you're testing.
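For context, the assertion in the snippet above compares the two projections by cosine similarity: the elementwise product of the two L2-normalized vectors, summed, equals 1 for collinear vectors. A standalone sketch:

```python
import numpy as np
from sklearn.preprocessing import normalize

a = np.array([1., 2., 3.])
b = 2. * a + 1e-3  # almost proportional to a

# Sum of the elementwise product of unit-norm vectors == cosine similarity.
cos_sim = (normalize([a])[0] * normalize([b])[0]).sum()
assert cos_sim > .998  # nearly collinear vectors
```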
Thx. I should say, I'm convinced. I only remain a bit surprised that the line/
linear approach does not really work better than the ball, but this is not a
huge difference.

The ball is an approximation of a Gaussian sampling by considering it as
a density and choosing a small set of optimal points to approximate it.
The one we use now is a uniform distribution inside the ball. I can replace it with a Gaussian if we want.
This looks like a great thing to have so easily and transparently accessible to the user. After some email discussion on the topic, here is a proposal to make it even more clear what is being done, with potentially huge benefits to seasoned users and perfect transparency for all of those who don't care.

Many of these transformations (here volume to surface projections, but think also volume to volume transformations, ROI extractions, ...) are linear operations on the data vectors. Many of them can be represented as very sparse matrices (e.g. one vertex in this specific surface projection case will not consider the values of more than say 10 voxels).

The idea would be to make this linear transformation explicit and hand it to experienced users if they request it. Ideally this would be done by representing the transformation as a scipy sparse matrix, or, at least, as a scipy linear operator (by providing an adjoint/rmatvec transformation in addition to the forward direction). This would aid in …

Thoughts?
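The proposal can be sketched under assumed shapes (this is not an existing nilearn API): since each vertex averages only a handful of voxels, the whole projection is a sparse matrix applied to the flattened volume, with the adjoint available for free:

```python
import numpy as np
from scipy import sparse

n_voxels, n_vertices = 1000, 10
rng = np.random.RandomState(0)

# Build a sparse projection matrix: each vertex averages ~5 voxels
# (hypothetical choice of voxels here, just to show the structure).
rows, cols, vals = [], [], []
for vertex in range(n_vertices):
    voxels = rng.choice(n_voxels, size=5, replace=False)
    rows.extend([vertex] * 5)
    cols.extend(voxels)
    vals.extend([1. / 5] * 5)
proj = sparse.csr_matrix((vals, (rows, cols)),
                         shape=(n_vertices, n_voxels))

volume = rng.uniform(size=n_voxels)  # a flattened 3d image
texture = proj.dot(volume)           # forward: volume -> surface
back = proj.T.dot(texture)           # adjoint (rmatvec): surface -> volume
print(texture.shape, back.shape)     # (10,) (1000,)
```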
+ let the sampling implementation choose the default number of sample points
nilearn/surface/surface.py
# No interpolation on a regular grid in scipy before 0.14
# since computing triangulation is very slow, if scipy.__version__ < 0.14
# we fall back to nearest neighbour interpolation.
if (LooseVersion(scipy.__version__) < LooseVersion('0.14') and
TODO: remove this.
nilearn/surface/surface.py
voxel, and 'linear' performs trilinear interpolation of neighbouring
voxels. 'linear' may give better results - for example, the projected
values are more stable when resampling the 3d image or applying affine
transformations to it. 'nearest' is slightly faster for one image, and much
We should repeat a sentence on the speed of linear vs nearest interpolation above, in the documentation of the corresponding argument.
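The behaviour of the two modes can be sketched with scipy's RegularGridInterpolator (the PR's internals may differ; this just illustrates nearest-voxel vs. trilinear sampling at off-grid points):

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# A tiny 3d "image" on a regular voxel grid; values are linear in the
# indices (data[i, j, k] == 9*i + 3*j + k), so trilinear interpolation
# is exact.
data = np.arange(27, dtype=float).reshape(3, 3, 3)
grid = (np.arange(3),) * 3
sample_points = np.array([[0.5, 0.5, 0.5], [1.2, 0.3, 2.0]])

nearest = RegularGridInterpolator(grid, data, method='nearest')
linear = RegularGridInterpolator(grid, data, method='linear')

print(nearest(sample_points))  # values of the closest voxels
print(linear(sample_points))   # trilinear blend of the 8 neighbours
```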
To reply to the different points by @jeromedockes:
nilearn/surface/surface.py
transformations to it. 'nearest' is slightly faster for one image, and much
faster for many images (can be an order of magnitude faster for 100 images,
i.e. a 4d niimg with last dimension=100). Note that linear interpolation is
only available if your scipy version is 0.14 or more recent.
We can remove this comment: we will now support only scipy >= 0.14
def _masked_indices(sample_locations, img_shape, mask=None):
    """Get the indices of sample points which should be ignored.
I have the impression that you are doing a lot of "not" operations when you are building this mask and using it. I wonder if it wouldn't be beneficial to invert the function and return the selected indices.
I'll try to change it and see if it makes a difference. I did it like this because it was easier to find a good name for the function. I really don't know if it makes a difference on the total time vol_to_surf takes to return (my guess is no)
I have written a patch inverting this function to _kept_indices. Playing around with %timeit I see no difference, which makes sense: it takes 1 microsecond to do a logical not on an array of the size of fsaverage on my machine, and around 6 microseconds for an array ten times bigger. Since we are dealing with dozens of milliseconds in vol_to_surf, this is very small. Should I push this commit anyway?
nilearn/surface/surface.py
interpolated values are averaged to produce the value associated to this
particular mesh vertex.

"""
I would like here a warning that says that the function is experimental and that minor details, such as the interpolation, could change.
The reason that I would like this, is that we are considering merging this just before a release, and hence we have little insight on the best choices for hyperparameters. I want to be able to change those hyperparameters.
We'll remove the warning in the near future.
I've just merged #1564. You can rebase on master to remove support for scipy < 0.14.
I guess it was accidentally closed. Reopening again.
> I have written a patch inverting this function to _kept_indices. Playing
> around with %timeit I see no difference, which makes sense: it takes 1
> microsecond to do a logical not on an array of the size of fsaverage on my
> machine; around 6 microseconds for an array ten times bigger. Since we are
> dealing with dozens of milliseconds in vol_to_surf, this is very small.
> Should I push this commit anyway?
I didn't have time in mind, but more readability: we are moving around an
object that we are inverting. That is a bit surprising.
Ah ok, in this case I'll change it.
How is this for a compromise:
I just find it easier to understand what this is when it is called 'masked' something.
I am good with that solution.
IIUC all Freesurfer subjects have an affine transformation file that can be applied to get data into MNI space. This is what we plan to use in MNE to transform … to use https://github.com/mne-tools/mne-python/pull/4496/files#diff-c38906291c7d4fa5e1df2d494b9c27d2R129

It would be great if the …

If you want to extend it to subjects other than …
@larsoner: I don't fully understand the scope of the problem, as I don't use freesurfer (it's just too heavy and expensive for my needs). However, I think that if you're able to use plot_glass_brain, you'll be able to use this functionality. It should be a fairly general functionality: it just needs a mesh, that can be given, in the same space as the image.
@larsoner it should work fine for any mesh and image if they're both in the same space - in particular if they're both in MNI space. My comment was simply that I would have liked to try and test it with a few other meshes; you can ignore it.
OK, tests ran. I think that this is good to go. Merging! Thanks a lot, @jeromedockes: this is big!
I know nothing about EEG or MEG, so this sounds like a cool learning opportunity if it can be useful to nilearn users. Happy to work on it after today's release (as you said, it would be a different PR). In the meanwhile, anything that is an affine transformation away from the image space is fine for the current functionality.
Great! Many thanks to everyone involved in the reviews and discussion; it has been fun and very interesting for me.
Hello @GaelVaroquaux, @mrahim, @agramfort, @juhuntenburg and others,
this is a PR about surface plotting.
nilearn has some awesome functions to plot surface data in
nilearn.plotting.surf_plotting. However, it doesn't offer a conversion from
volumetric to surface data.
It would be great to add a function to sample or project volumetric data on the
nodes of a cortical mesh; this would allow users to look at surface plots of
their 3d images (e.g. statistical maps).
In this PR we will try to add this to nilearn.
Most tools which offer this functionality (e.g. caret, freesurfer, pycortex)
usually propose several projection and sampling strategies, offering different
quality / speed tradeoffs. However, it seems to me that naive strategies are not
so far behind more elaborate ones - see for example [Operto, Grégory, et al.
"Projection of fMRI data onto the cortical surface using anatomically-informed
convolution kernels." Neuroimage 39.1 (2008): 127-135].
For plotting and visualisation, the results of a simple strategy are probably
accurate enough for most users.
I therefore suggest starting by including a very simple and fast projection
scheme; we can add more elaborate ones later if we want.
I'm just getting started but I think we can already start a discussion.
The proposed strategy is simply to draw a sample from a 3mm sphere around each
mesh node, and average the measures.
The image below illustrates that strategy: each red circle is a mesh node.
Samples are drawn from the blue crosses that are attached to it, and are inside
the image, and then averaged to compute the color inside the circle.
(This image is produced by the show_sampling.py example, which is only there to clarify
the strategy implemented in this PR and will be removed).
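The strategy described above can be sketched as follows (a simplified version with hypothetical names; the actual PR additionally handles affines, masks, and samples falling outside the image):

```python
import numpy as np

def ball_texture(volume, mesh_nodes, ball_offsets):
    """Average the volume over a ball of sample points around each node.

    volume: 3d array (assumed already in voxel coordinates).
    mesh_nodes: (n_nodes, 3) node coordinates.
    ball_offsets: (n_samples, 3) fixed offsets inside a ball.
    """
    # (n_nodes, n_samples, 3) sample positions, rounded to voxel indices.
    samples = np.rint(
        mesh_nodes[:, np.newaxis, :] + ball_offsets).astype(int)
    samples = np.clip(samples, 0, np.array(volume.shape) - 1)
    values = volume[samples[..., 0], samples[..., 1], samples[..., 2]]
    return values.mean(axis=1)

rng = np.random.RandomState(0)
volume = rng.uniform(size=(10, 10, 10))
nodes = np.array([[5., 5., 5.]])
# Crude stand-in for the precomputed ball sample positions.
offsets = rng.uniform(-3, 3, size=(100, 3))
print(ball_texture(volume, nodes, offsets).shape)  # (1,)
```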
Here is an example surface plot for a brainpedia image (id 32015 on Neurovault,
https://neurovault.org/media/images/1952/task007_face_vs_baseline_pycortex/index.html),
produced by brainpedia_surface.py:
And here is the plot produced by pycortex for the same image, as shown on
Neurovault:
Note about performance: in order to choose the positions of the samples to draw
from a unit ball, for now, we cluster points drawn from a uniform distribution
on the ball and keep the centroids (we can think of something better). This
takes a few seconds, and the results are cached with joblib for the time being;
since it only needs to be done once, when we have decided how many samples
we want, the positions will be hardcoded once and for all (no computing, no
caching). With 100 samples per ball, projecting a stat map of the full brain
with 2mm voxels onto an fsaverage hemisphere mesh takes around 60 ms.
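The centroid computation described above can be sketched like this (an assumed approach, not the PR's exact code): rejection-sample uniform points in the unit ball, then keep k-means centroids as the fixed sample positions:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)

# Rejection sampling: uniform points in the unit ball.
points = rng.uniform(-1, 1, size=(20000, 3))
points = points[np.linalg.norm(points, axis=1) <= 1]

# Keep 100 centroids as regularly spread sample positions; these would
# be computed once and then hardcoded.
kmeans = KMeans(n_clusters=100, n_init=1, random_state=0).fit(points)
sample_positions = kmeans.cluster_centers_
print(sample_positions.shape)  # (100, 3)
```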