
Documentation request: how are the files in cortical_coordinates generated? #44

lguyot opened this issue Dec 5, 2019 · 3 comments
@lguyot lguyot commented Dec 5, 2019

Dear contributors to the mouse_connectivity_models,

I am looking for specific information about the files located in cortical_coordinates.

The process by which the curved cortical coordinate system was obtained is documented on pages 6 and 7 of the technical white paper:

After the borders of isocortex were defined, Laplace’s equation was solved between pia and white matter surfaces resulting in intermediate equi-potential surfaces (Figure 4A). Streamlines were computed by finding orthogonal (steepest descent) path through the equi-potential field (Figure 4B). Information at different cortical depths can then be projected along the streamlines to allow integration or comparison. Streamlines were used to facilitate the annotation of the entire isocortex, including higher visual areas.
...
Annotation of the Isocortex in 3-D Space. The isocortex was annotated from surface views using the curved cortical coordinate system described above. The curved cortical coordinate system has an advantage in that it allows the translation of any point from 2-D surface views into 3-D space or vice versa. Thus, mapping isocortex from surface views is a different approach compared with conventional 3-D mouse brain atlases that are built from a series of 2-D coronal sections, such as the ARA (Figure 1, version 2).
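For my own understanding, here is a rough numpy sketch of how I picture this Laplace-plus-streamlines construction. It is only an illustration of the technique described above, not the Allen Institute's actual pipeline, and the masks cortex_mask, pia_mask and wm_mask are placeholders of mine:

import numpy as np

def solve_laplace(cortex_mask, pia_mask, wm_mask, n_iter=500):
    # Jacobi relaxation of Laplace's equation inside the cortex: the
    # potential is fixed to 0 on the pia and 1 on the white matter
    # surface, and interior voxels relax towards equi-potential values.
    phi = np.zeros(cortex_mask.shape, dtype=float)
    phi[wm_mask] = 1.0
    interior = cortex_mask & ~pia_mask & ~wm_mask
    for _ in range(n_iter):
        # average of the six face neighbours
        avg = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
               np.roll(phi, 1, 1) + np.roll(phi, -1, 1) +
               np.roll(phi, 1, 2) + np.roll(phi, -1, 2)) / 6.0
        phi[interior] = avg[interior]
        phi[pia_mask] = 0.0   # re-impose the boundary conditions
        phi[wm_mask] = 1.0
    return phi

def trace_streamline(phi, start, step=0.5, n_steps=2000):
    # Follow the steepest-ascent path of the potential from a pia voxel
    # towards the white matter surface.
    grad = np.stack(np.gradient(phi))        # shape (3,) + phi.shape
    upper = np.array(phi.shape) - 1
    pos = np.asarray(start, dtype=float)
    path = [pos.copy()]
    for _ in range(n_steps):
        idx = tuple(np.round(pos).astype(int))
        g = grad[(slice(None),) + idx]
        norm = np.linalg.norm(g)
        if norm < 1e-9:
            break
        pos = np.clip(pos + step * g / norm, 0, upper)
        path.append(pos.copy())
        if phi[tuple(np.round(pos).astype(int))] > 0.999:
            break                            # reached white matter
    return np.array(path)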

Although I find the process quoted above clear, I cannot infer from this description how the pia and white matter surfaces were flattened to fit into 2D numpy arrays such as those in dorsal_flatmap_paths_100.h5 and top_view_paths_100.h5. In other words, what is the recipe for building the view_lookup arrays held by these files? Was some area-preserving transformation applied to the dorsal and top surfaces of the isocortex volume? If so, which one?

Many thanks in advance for your help,
Luc.

As a side note:

In [1]: import h5py, os

In [2]: with h5py.File(os.path.expanduser('~/Downloads/dorsal_flatmap_paths_100.h5'), 'r') as f:
   ...:     view_lookup = f['view lookup'][:]
   ...:     paths = f['paths'][:]
   ...:

In [3]: view_lookup.shape
Out[3]: (136, 272)

The returned shape doesn't match the expected value of (132, 114) indicated in cortical_map.py.

@kharris kharris commented Dec 10, 2019

Hi Luc,

Thanks for your questions. As for how the paths were generated, this was not part of our work but something that @lydiang's team carried out, so I'm afraid I do not know exactly what process occurred beyond what you've quoted from the white paper.

As for the size of view_lookup, the shape does not match because you are looking at the flatmap version. The 3-D volumes to transform are always shaped (132, 80, 114), but they get projected into an image of shape (136, 272). If you use the top view paths instead, the returned size should be (132, 114), since that view simply aggregates over the second index.

This appears to be a documentation error in cortical_map.py, so thanks for pointing that out.
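To illustrate what I mean, here is roughly how I would use those two arrays, based on my reading of cortical_map.py. This is a sketch only, not the exact library code; agg is any aggregation function, and I am assuming -1 marks view pixels that have no associated streamline:

import numpy as np

def project_through_paths(volume, view_lookup, paths, agg=np.max):
    # volume: 3-D array of shape (132, 80, 114).
    # view_lookup: 2-D array, e.g. (136, 272) for the dorsal flatmap or
    #   (132, 114) for the top view; non-negative entries index rows of
    #   `paths`, -1 means the pixel has no associated streamline.
    # paths: rows of zero-padded flat voxel indices along one streamline.
    out = np.zeros(view_lookup.shape, dtype=float)
    flat = volume.ravel()
    for i, j in zip(*np.nonzero(view_lookup > -1)):
        path = paths[view_lookup[i, j]]
        path = path[path.nonzero()]      # drop the zero padding
        out[i, j] = agg(flat[path])      # one pixel per streamline
    return out

With top_view_paths_100.h5 the result has shape (132, 114); with dorsal_flatmap_paths_100.h5 it has shape (136, 272).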

@lguyot lguyot commented Dec 11, 2019

Hi Kameron,

Thanks for the info and thanks for the documentation fix.
So I'll wait a little bit until @lydiang can reply here.

Best regards,
Luc

@lguyot lguyot commented Jan 29, 2020

Thanks to a colleague of mine, Sirio Puchet (Blue Brain Project), I now have more insight into this question. Sirio pointed out that some details of the flattening process are given in the section "Creation of the cortical top-down and flattened views of the CCFv3 for data visualization" of Hierarchical organization of cortical and thalamic connectivity (https://www.nature.com/articles/s41586-019-1716-z):

A cortical flatmap was also constructed to enable visualization of anatomical and projection information while preserving spatial context for the entire cortex. The flatmap was created by computing the geodesic distance (the shortest path between two points on a curved surface) between every point on the cortical surface and two pairs of selected anchor points. Each pair of anchor points forms one axis of the 2D embedding of the cortex into a flatmap. The 2D coordinate for each point on the cortical surface is obtained by finding the location such that the radial (circular) distance from the anchor points (in 2D) equals the geodesic distance that was computed in 3D. This procedure produces smooth mapping of the cortical surface onto a 2D plane for visualization. This embedding does not preserve area and the frontal pole and medial-posterior region are highly distorted. As such, all numerical computation is done in 3D space. Similar techniques are used for texture mapping on geometric models in the field of computer graphics [57].

[57] Oliveira, G. N., Torchelsen, R. P., Comba, J. L. D., Walter, M. & Bastos, R. Geotextures: a multi-source geodesic distance field approach for procedural texturing of complex meshes. 2010 23rd SIBGRAPI Conf. Graphics, Patterns and Images 126–133 (IEEE, 2010).
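As far as I can tell, the per-vertex embedding step amounts to something like the following. This is my own reading of the paragraph above, not code from the paper; the geodesic distances themselves would come from a separate geodesic solver, which I leave out:

import numpy as np
from scipy.optimize import least_squares

def embed_point(anchors_2d, geodesic_dists, x0=(0.0, 0.0)):
    # Find a 2-D position whose Euclidean (radial) distances to the four
    # already-placed 2-D anchor points match, in the least-squares sense,
    # the geodesic distances measured on the 3-D cortical surface.
    anchors_2d = np.asarray(anchors_2d, dtype=float)          # (4, 2)
    geodesic_dists = np.asarray(geodesic_dists, dtype=float)  # (4,)

    def residuals(p):
        return np.linalg.norm(anchors_2d - p, axis=1) - geodesic_dists

    return least_squares(residuals, x0=np.asarray(x0, dtype=float)).x

Repeating this for every surface vertex would give the kind of smooth but non-area-preserving mapping described in the quoted paragraph; how the four anchor points were chosen and placed in 2-D is still not documented, which is really what this issue is asking for.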
