
Creating root mesh #21

Closed
FedeClaudi opened this issue May 28, 2020 · 25 comments

Comments

@FedeClaudi
Contributor

Hey,
for the human brain atlas there's no 'root' mesh; I think the annotated volume only has the labels of the 'leaves' of the structure tree (the same goes for the rat atlas):

[screenshot]

So I'm creating the root mesh by taking all voxels with value > 0, setting them to 1 and reconstructing the surface from that. However, in the case of the human brain there is still some structure 'under the brain surface', which makes the root mesh kinda ugly:
[screenshot]
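For reference, a minimal sketch of that binarise-and-extract approach, assuming the annotation is a 3D numpy array and using vtkplotter's Volume.isosurface (the function and file names are illustrative, and the exact vtkplotter signatures may vary between versions):

```python
import numpy as np
from vtkplotter import Volume, write

def extract_root_mesh(annotation, obj_path="root.obj"):
    """Binarise the annotated volume and reconstruct the outer surface."""
    # every labelled voxel (value > 0) becomes part of the root
    binary = (annotation > 0).astype(np.uint8)

    # extract the surface sitting between the 0 and 1 voxels
    mesh = Volume(binary).isosurface(0.5)
    write(mesh, obj_path)
    return mesh
```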

Does anyone know of a better way to create the root mesh?

@adamltyson
Member

It appears to work OK in ITK-SNAP; I don't know what's different. Is the image volume solid before mesh extraction?

@FedeClaudi
Contributor Author

The surface of the mesh looks fine:
[screenshot]

The problem is that the root is usually rendered semi-transparent, but if I do that all the internal structure shows:
[screenshot]

This is the inside after cutting out the frontal lobes

@adamltyson
Member

Can you use something like scipy.ndimage.morphology.binary_fill_holes before extracting the surface?
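Something along these lines, as a sketch (scipy.ndimage.binary_fill_holes is the same function at its non-deprecated import path; the helper name is made up):

```python
import numpy as np
from scipy import ndimage

def make_solid(annotation):
    """Binarise the annotation and fill internal cavities, so the
    extracted surface only follows the outer brain boundary."""
    binary = annotation > 0
    filled = ndimage.binary_fill_holes(binary)
    return filled.astype(np.uint8)
```

One caveat: binary_fill_holes only fills cavities that are completely enclosed, so internal structure still connected to the background won't be removed.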

@FedeClaudi
Contributor Author

I can sure try

@vigji
Member

vigji commented May 28, 2020

The zebrafish atlas has the same problem. I reported it and am trying to see whether they want to fix it, but I am wondering whether, since the root meshes have to "look nice" when rendered, we should have a pass through Blender to clean them up.

@vigji
Member

vigji commented May 28, 2020

It would also potentially fix the "lego" look.

@FedeClaudi
Contributor Author

That's also a possibility, though ideally we'd have something automated, because:

  • if I want to create meshes for other non-leaf structures, I will probably have the same problem
  • if people want to create their own atlas...

@adamltyson
Member

I think standard image processing (filling etc.) should be fine to create OK meshes.

There's a question of whether we want to spend time making them as attractive as the Allen meshes. It shouldn't be hard to do, though; apparently it's all in VTK.
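If we do go the VTK route, a hedged sketch of what decimation plus smoothing could look like on an already extracted mesh file (the filter classes are standard VTK; the parameter values are just guesses):

```python
import vtk

def prettify_mesh(in_obj, out_ply, reduction=0.5, smooth_iters=30):
    reader = vtk.vtkOBJReader()
    reader.SetFileName(in_obj)
    reader.Update()

    # reduce the triangle count while keeping the topology intact
    decimate = vtk.vtkDecimatePro()
    decimate.SetInputConnection(reader.GetOutputPort())
    decimate.SetTargetReduction(reduction)
    decimate.PreserveTopologyOn()

    # smooth the surface without shrinking the mesh too much
    smoother = vtk.vtkWindowedSincPolyDataFilter()
    smoother.SetInputConnection(decimate.GetOutputPort())
    smoother.SetNumberOfIterations(smooth_iters)

    writer = vtk.vtkPLYWriter()
    writer.SetFileName(out_ply)
    writer.SetInputConnection(smoother.GetOutputPort())
    writer.Write()
```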

@FedeClaudi
Contributor Author

It's worth giving the way Allen did it a shot. anorak got in touch for help with brainrender once; maybe he has found a way to create clean meshes in the meantime.

@adamltyson
Member

This would be cool for the combined ara/f-p too because those meshes aren't great

@FedeClaudi
Contributor Author

Okay, I'll work on making prettier meshes tonight, then we can apply it recursively to structures.json if necessary

@vigji
Member

vigji commented May 28, 2020

Okay, I'll work on making prettier meshes tonight, then we can apply it recursively to structures.json if necessary

Love our nightly sprints 😄 Amazing - if you figure that out I will steal everything for the zfish atlas :)

@FedeClaudi
Contributor Author

Sure, if we find a way that works it will be a standalone function/piece of code that prettifies meshes, so it can be used whenever.

@FedeClaudi
Contributor Author

Mmm, I failed. I tried morphological transformations, but I couldn't find a way to get rid of the inside stuff that didn't completely alter the mesh's shape.

I couldn't get VTK marching cubes to work (I can't figure out how to use VTK), so I've used vtkplotter.Volume.isosurface to extract the mesh, but it's still full of stuff inside.

I tried decimating and smoothing the mesh, and while that helps it still didn't get rid of the stuff inside.
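For reference, the decimate-and-smooth attempt was roughly along these lines (a sketch; vtkplotter method names have changed between releases, and the file paths are made up):

```python
from vtkplotter import load, write

mesh = load("root.obj")  # the extracted root mesh (path is illustrative)

# halve the triangle count, then Laplacian-smooth the result; this softens
# the surface but does not remove disconnected geometry sitting inside it
mesh = mesh.decimate(fraction=0.5)
mesh = mesh.smoothLaplacian(niter=20)

# keeping only the largest connected surface might drop the internal pieces;
# in vtkplotter this was extractLargestRegion() (name may differ by version)
# mesh = mesh.extractLargestRegion()

write(mesh, "root_cleaned.obj")
```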

Any other ideas? I'll try again tomorrow, maybe with a fresher brain.

@vigji
Member

vigji commented May 28, 2020

For the morphological transformations approach: I know there is a transformation that is supposed to find the external surface of a binary image with holes. I am trying to dig out the name

@FedeClaudi
Contributor Author

Let me know if you find anything.

@vigji
Member

vigji commented May 29, 2020

This was the algorithm I was thinking about: https://en.wikipedia.org/wiki/Active_contour_model.

I guess there are mesh generation routines built on it, but I have not dug into these options - I hope you can find something!
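In Python, one way to try it would be scikit-image's morphological Chan-Vese variant of active contours, roughly like this (a sketch; the iteration count is passed positionally because its keyword name differs between scikit-image versions, and the parameter values are guesses):

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def external_surface_mask(annotation, n_iter=35):
    """Evolve a level set over the binarised annotation to get a smooth
    mask of the external surface, ignoring internal label boundaries."""
    binary = (annotation > 0).astype(float)
    # second argument is the number of iterations (positional on purpose)
    mask = morphological_chan_vese(binary, n_iter, smoothing=3)
    return mask.astype(np.uint8)
```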

@FedeClaudi
Contributor Author

FedeClaudi commented May 29, 2020

Making some progress...

I've found an implementation of the active contour model, but it was very slow. In the process I've discovered this neat marching cubes implementation (PyMCubes), whose 'smooth' option works well (though it's a bit slow).

That, combined with some morphological operations and a bit of vtkplotter magic, yields better-looking meshes (the red is the inside):
[screenshot]

There's a bit of a tradeoff between cleaning up internal structure and introducing artifacts, though.
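Roughly, the combination described above looks like this (a sketch: mcubes.smooth and mcubes.marching_cubes are real PyMCubes functions, while the morphological clean-up steps and all parameter values are illustrative):

```python
import mcubes  # PyMCubes
from scipy import ndimage

def build_root_mesh(annotation, obj_path="root.obj"):
    # binarise and clean up the volume before surface extraction
    binary = ndimage.binary_fill_holes(annotation > 0)
    binary = ndimage.binary_closing(binary, iterations=2)

    # mcubes.smooth turns the binary volume into a smooth scalar field whose
    # zero-isosurface follows the boundary without the 'lego' voxel steps
    smoothed = mcubes.smooth(binary)
    vertices, triangles = mcubes.marching_cubes(smoothed, 0)

    # write a Wavefront .obj that vtkplotter/brainrender can load back
    mcubes.export_obj(vertices, triangles, obj_path)
    return vertices, triangles
```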

Btw, this approach adds vtkplotter and PyMCubes as requirements, but they can be just for the [dev] option, for people wanting to develop atlases with our tools.

@adamltyson
Member

Looks nice.

Btw, this approach adds vtkplotter and PyMCubes as requirements, but they can be just for the [dev] option, for people wanting to develop atlases with our tools.

I think this is fine; we can have everything in the [dev] list of requirements, but be careful what goes into the main list.
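For instance, in setup.py the extra dependencies could live under an extras_require key, something like this (the package name and requirement lists are illustrative):

```python
# setup.py (sketch)
from setuptools import setup, find_packages

setup(
    name="bg-atlasgen",  # illustrative name
    packages=find_packages(),
    install_requires=["numpy", "scipy"],
    extras_require={
        # only needed when generating/prettifying atlas meshes
        "dev": ["vtkplotter", "PyMCubes"],
    },
)
```

That way a plain `pip install` stays light, and `pip install .[dev]` pulls in the mesh-generation tools.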

@vigji
Member

vigji commented May 29, 2020

As a tangential note, this raises the general point of whether we should use the meshes to calculate whether a point is in a region.
My general answer would be no, but I would like to know what you guys think, as @FedeClaudi was using meshes for such calculations in brainrender. It's important that such things are handled in a single place, so we don't have too many moving parts, and mesh generation introduces ambiguity.

@FedeClaudi
Contributor Author

I don't mind either way.
I was using different things for different functions:

  1. to know which region a given point is in, I was using this:
    https://github.com/BrancoLab/BrainRender/blob/5d71c7026e42f39ed461d039a7f5f9ed5aaa68fe/brainrender/atlases/aba.py#L838
    which uses the annotated volume
  2. to know whether a point is in a given region, I was using the mesh loaded as a vtkplotter Mesh (both approaches are sketched below)
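A sketch of the two lookups (assuming the point is already in voxel coordinates; mesh.isInside follows the vtkplotter API of that time and may be named differently in newer versions):

```python
import numpy as np
from vtkplotter import load

def region_of_point(point, annotation):
    """1. Which region does a point belong to? Index the annotated volume."""
    i, j, k = np.round(point).astype(int)
    return annotation[i, j, k]  # 0 means outside the brain

def point_in_region(point, mesh_file):
    """2. Is a point inside a given region? Query that region's mesh."""
    mesh = load(mesh_file)
    return mesh.isInside(point)
```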

I agree that all of these operations should be handled by core.Atlas.

@adamltyson
Member

As a tangential note, this raises the general point of whether we should use the meshes to calculate whether a point is in a region.

I would prefer that we use the image (as is done with amap), for two reasons:

  • In cellfinder we are usually more interested in what region a given point belongs to, and I think the image is more efficient for that.
  • Many of the atlases we will use are defined as images. The mesh extraction is unlikely to be 100% perfect, and so there could be overlaps or gaps between meshes.

I would vote for using the atlas image for analysis purposes (when accuracy is important), and then this allows us to post-process the meshes so that they are more attractive, without worrying about them being perfectly accurate.

@FedeClaudi
Contributor Author

sounds good

@FedeClaudi
Contributor Author

Mesh generation should be improved as of #27

@FedeClaudi
Contributor Author

Should be fixed by #27
