[MRG] Cortex surface projections #1516
Changes from 7 commits
@@ -0,0 +1,17 @@
from matplotlib import pyplot as plt

import nilearn.image
import nilearn.datasets
from nilearn.plotting import surf_plotting

brainpedia = nilearn.datasets.fetch_neurovault_ids(image_ids=(32015,))
image = brainpedia.images[0]

fsaverage = nilearn.datasets.fetch_surf_fsaverage5()
pial_left = surf_plotting.load_surf_mesh(fsaverage.pial_left)[0]

texture = surf_plotting.niimg_to_surf_data(image, pial_left)

surf_plotting.plot_surf_stat_map(fsaverage.infl_left, texture, cmap='bwr')

plt.show()
@@ -0,0 +1,66 @@
import numpy as np
import matplotlib.patches
from matplotlib import pyplot as plt
from nilearn.plotting import surf_plotting


def _piecewise_const_image():
    w, h = 7, 10
    image = np.random.uniform(0, 1, size=(w, h))

Review comment: I would like to avoid uncontrolled RNGs in the codebase. The right way of doing this is to create a local RNG:

    rng = np.random.RandomState(0)
    image = rng.uniform(...)

Reply: Sorry, I hadn't realized that this was an example, and not a test. I would simply seed the global RNG here: it's simpler and the side effect is less important.
    image -= image.min()
    image /= image.max()
    big_image = np.empty((10 * w, 10 * h))
    for i in range(w):
        for j in range(h):
            big_image[10 * i:10 * i + 10, 10 * j:10 * j + 10] = image[i, j]
    return big_image
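As a side note on the function above: the nested loops upsample each pixel into a 10 × 10 constant block. The same piecewise-constant upsampling can be written with a Kronecker product; a minimal numpy sketch (standalone, not part of the PR, with a seeded RNG as the review suggests):

```python
import numpy as np

rng = np.random.RandomState(0)
image = rng.uniform(0, 1, size=(7, 10))
# np.kron replicates each pixel into a 10 x 10 constant block,
# which is exactly what the nested loops compute
big_image = np.kron(image, np.ones((10, 10)))
assert big_image.shape == (70, 100)
```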

def _random_mesh(image_shape, n_nodes=5):
    x = np.random.uniform(0, image_shape[0], size=(n_nodes, ))
    y = np.random.uniform(0, image_shape[1], size=(n_nodes, ))
    return np.asarray([x, y]).T

def show_sampling(ball_radius=10, n_nodes=5, n_points=7, link_points=True):
    image = _piecewise_const_image()
    images = [image, image > .5, image < .5, image < .8]
    mesh = _random_mesh(image.shape, n_nodes=n_nodes)
    all_values, sample_points, _ = surf_plotting._ball_sampling(
        images, mesh, np.eye(3), n_points=n_points, ball_radius=ball_radius)
    fig, axes = plt.subplots(2, 2)
    axes = axes.ravel()
    for image, values, ax in zip(images, all_values, axes):
        ax.imshow(image, cmap='gray', vmin=0, vmax=1)
        ax.add_patch(
            matplotlib.patches.Rectangle(
                (-.5, -.5),
                image.shape[1],
                image.shape[0],
                fill=False,
                linestyle='dashed'))
        ax.scatter(
            mesh[:, 1],
            mesh[:, 0],
            c=values,
            s=300,
            cmap='gray',
            edgecolors='r',
            vmin=0,
            vmax=1,
            zorder=2)
        for sp, mp in zip(sample_points, mesh):
            ax.scatter(sp[:, 1], sp[:, 0], marker='x', color='blue', zorder=3)
            if link_points:
                for s in sp:
                    ax.plot(
                        [mp[1], s[1]], [mp[0], s[0]],
                        color='red',
                        alpha=.5,
                        zorder=1)
    return fig, axes


if __name__ == '__main__':
    ax = show_sampling()
    plt.show()
@@ -6,15 +6,127 @@
# Import libraries
import nibabel
import numpy as np

# These will be removed (see comment on _points_in_unit_ball)
from sklearn.externals import joblib
from ..datasets.utils import _get_dataset_dir
import sklearn.cluster
# / These will be removed (see comment on _points_in_unit_ball)

import matplotlib.pyplot as plt

from mpl_toolkits.mplot3d import Axes3D
from nibabel import gifti

import nilearn.image

Review comment: We prefer relative imports.
Reply: Why not…

from .._utils.compat import _basestring
from .. import _utils
from .img_plotting import _get_colorbar_and_data_ranges

# Eventually, when this PR is ready, the sample locations inside the unit ball
# will be hardcoded. For now we just compute them in a simple way (we will find
# something better than the k-means hack) and cache the result.
memory = joblib.Memory(_get_dataset_dir('joblib'), verbose=False)


@memory.cache
def _points_in_unit_ball(n_points=20, dim=3):
    mc_cube = np.random.uniform(-1, 1, size=(5000, dim))
    mc_ball = mc_cube[(mc_cube**2).sum(axis=1) <= 1.]
    centroids, assignments, _ = sklearn.cluster.k_means(
        mc_ball, n_clusters=n_points)
    return centroids
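To illustrate the sampling strategy in `_points_in_unit_ball`: uniform draws in the cube [-1, 1]^dim are rejected when they fall outside the unit ball, and k-means then picks regularly spaced representatives. A minimal sketch of the rejection step alone (standalone, with a seeded RNG as the review above recommends; roughly pi/6 ≈ 52% of the draws survive in 3d):

```python
import numpy as np

rng = np.random.RandomState(0)
mc_cube = rng.uniform(-1, 1, size=(5000, 3))
# keep only the draws whose squared norm is at most 1
mc_ball = mc_cube[(mc_cube ** 2).sum(axis=1) <= 1.]
# every retained point lies inside the unit ball
assert ((mc_ball ** 2).sum(axis=1) <= 1.).all()
```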

# Eventually will be replaced by nilearn.image.resampling.coord_transform, but
# it only works for 3d images and 2d is useful for visualisation and debugging.
def _transform_coord(coords, affine):
    return affine.dot(np.vstack([coords.T, np.ones(coords.shape[0])]))[:-1].T

Review comment: there's a function for that.
Reply: Yes, image.resampling.coord_transform. But it only handles 3d data, so if we…

def _ball_sample_locations(nodes, affine, ball_radius=3, n_points=20):
    """Get n_points regularly spaced inside a ball."""
    if affine is None:
        affine = np.eye(nodes.shape[1] + 1)

Review comment (on the `affine is None` default): I find this a bit surprising for a non-public function.

    offsets_world_space = _points_in_unit_ball(
        dim=nodes.shape[1], n_points=n_points) * ball_radius
    # later:
    # offset_voxels = nilearn.image.resampling.coord_transform(
    #     *offset_voxels.T, np.linalg.inv(affine))
    mesh_voxel_space = _transform_coord(nodes, np.linalg.inv(affine))
    linear_map = np.eye(affine.shape[0])
    linear_map[:-1, :-1] = affine[:-1, :-1]
    offsets_voxel_space = _transform_coord(offsets_world_space,
                                           np.linalg.inv(linear_map))
    sample_locations_voxel_space = (mesh_voxel_space[:, np.newaxis, :] +
                                    offsets_voxel_space[np.newaxis, :])
    return sample_locations_voxel_space, offsets_voxel_space
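The last lines of `_ball_sample_locations` rely on numpy broadcasting: adding an (n_nodes, 1, dim) array to a (1, n_points, dim) array yields every node shifted by every offset. A toy illustration (the values are arbitrary, not from the PR):

```python
import numpy as np

nodes = np.array([[0., 0., 0.],
                  [10., 10., 10.]])    # (n_nodes, dim)
offsets = np.array([[1., 0., 0.],
                    [0., 1., 0.],
                    [0., 0., 1.]])     # (n_points, dim)
# broadcasting produces one row of shifted copies per node
locations = nodes[:, np.newaxis, :] + offsets[np.newaxis, :]
assert locations.shape == (2, 3, 3)    # (n_nodes, n_points, dim)
```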

def _ball_sampling(images, nodes, affine=None, ball_radius=3, n_points=20):
    """In each image, average samples drawn from a ball around each node."""
    images = np.asarray(images)
    nodes = np.asarray(nodes)
    sample_locations_voxel_space, offsets_voxel_space = _ball_sample_locations(
        nodes, affine, ball_radius=ball_radius, n_points=n_points)
    sample_indices = np.asarray(
        np.floor(sample_locations_voxel_space), dtype=int)
    pad_size = int(np.ceil(np.abs(offsets_voxel_space).max()))
    pad = [(0, 0)] + [(pad_size, pad_size)] * nodes.shape[1]
    sample_slices = [slice(None, None, None)] + np.s_[[
        ax + pad_size for ax in np.rollaxis(sample_indices, -1)
    ]]
    try:
        samples = np.pad(
            images, pad, mode='constant',
            constant_values=np.nan)[sample_slices]
        texture = np.nanmean(samples, axis=2)
        if not np.isfinite(texture).all():
            raise IndexError()
    except IndexError:
        raise ValueError('Some nodes of the mesh are outside the image')
    return texture, sample_indices, sample_locations_voxel_space

Review comment (on the bare `raise IndexError()`): Perhaps consider adding a more verbose error message.
Reply: +1. This will not help the user.
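The NaN-padding trick in `_ball_sampling` deserves a short illustration: the image is padded with NaN and the indices shifted by `pad_size`, so sample points just outside the image land on NaN values that `nanmean` ignores, while indices beyond the pad raise an IndexError. A minimal 2d sketch of the same idea (not the PR's code):

```python
import numpy as np

image = np.arange(16.).reshape(4, 4)
pad_size = 1
padded = np.pad(image, pad_size, mode='constant', constant_values=np.nan)
# three sample points around the top-left corner; the first one
# (row -1) falls outside the image and lands on a NaN pad value
rows = np.array([-1, 0, 1]) + pad_size
cols = np.array([0, 0, 0]) + pad_size
samples = padded[rows, cols]
# nanmean ignores the out-of-image sample: mean of image[0, 0] and image[1, 0]
print(np.nanmean(samples))  # 2.0
```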

def niimg_to_surf_data(image, mesh_nodes, ball_radius=3.):
    """Extract surface data from a Nifti image.

    Parameters
    ----------
    image : niimg-like object, 3d or 4d.

    mesh_nodes : array-like, shape n_nodes * 3
        The coordinates of the nodes of the mesh.

    ball_radius : float, optional (default=3.).
        The radius (in mm) of the ball over which image intensities are
        averaged around each node.

    Returns
    -------
    texture : array-like, 1d or 2d.
        If image was a 3d image (e.g. a stat map), a 1d vector is returned,
        containing one value for each mesh node.
        If image was a 4d image, a 2d array is returned, where each row
        corresponds to a mesh node.

    """
    image = nilearn.image.load_img(image)
    original_dimension = len(image.shape)
    image = _utils.check_niimg(image, atleast_4d=True)
    frames = np.rollaxis(image.get_data(), -1)
    texture, indices, locations = _ball_sampling(
        frames,
        mesh_nodes,
        _utils.compat.get_affine(image),
        ball_radius=ball_radius)
    if original_dimension == 3:
        texture = texture[0]
    return texture.T

Review comment (on the `image` parameter): capital N: "Niimg".
Review comment (on the Returns section): Simply "If image was a 3d image" to "If 3D image is provided". Same for "If image was a 4d image" to "If 4D image is provided".
Review comment: I haven't checked doing surface plotting with the functions we have using a 4D stat map image. Did you check it works?
Reply: I checked it works, and in test_surf_plotting.test_niimg_to_surf_data, I check that we get the same result when we project an image by itself or as part of a 4d image. Maybe it needs more tests?
Reply: I haven't gone through the tests yet. I tried to do it with ICA maps but could not.
Reply: If it's easy could you send me the failing example?
Review comment (on `load_img` followed by `check_niimg`): Maybe we could simply use `check_niimg` in the first line and remove `nilearn.image.load_img`.
Reply: I want to load it first to get its original shape. How would you do it?
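Concerning the 3d/4d handling discussed above: promoting to 4d gives every input a trailing frame axis, `rollaxis` moves that axis to the front so each frame is projected in turn, and `texture[0]` undoes the promotion for 3d inputs. A numpy-only sketch of just the axis manipulation (shapes are arbitrary):

```python
import numpy as np

vol_4d = np.zeros((4, 5, 6, 3))        # (x, y, z, n_frames)
frames = np.rollaxis(vol_4d, -1)       # frame axis moved to the front
assert frames.shape == (3, 4, 5, 6)

# a 3d image promoted to 4d has a single frame, recovered with [0]
vol_3d = np.zeros((4, 5, 6))
promoted = vol_3d[..., np.newaxis]     # what atleast_4d amounts to
assert np.rollaxis(promoted, -1).shape == (1, 4, 5, 6)
```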

# function to figure out datatype and load data
def load_surf_data(surf_data):
    """Loading data to be represented on a surface mesh.

Review comment: You need a docstring with a title, for the gallery. Sphinx-gallery will not accept to build a gallery item from this.