Automatic mesh generator: can't figure out how to run neuroglancer #79
I have tried a few examples so far and couldn't make them work in my case. But I don't know where to start in order to make neuroglancer work for my data. Could anyone please help me?

Thanks in advance,
Anar.

Comments
Can you explain a bit more exactly what you've done and what you observe? Have you tried running the example file python/examples/example.py? If you set neuroglancer.server.debug = True, you will see debug logs from the server, which may provide useful information.

Regarding nyroglancer, I think it unfortunately does not support automatic meshing. The error you list at the top (__init__ takes 3 positional arguments) is a recent breakage in sockjs-tornado due to the release of tornado 5.0 just a few days ago. To fix that, you could either downgrade tornado to 4.5.3 or install this version of sockjs-tornado from GitHub: https://github.com/mathben/sockjs-tornado/tree/fix_tornado_5.0_%23113
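For concreteness, a sketch of those two options using pip and git (plausible invocations, not the exact command from the comment):

```shell
# Option 1: pin tornado to the last 4.x release
pip install tornado==4.5.3

# Option 2: install the patched sockjs-tornado branch from GitHub
git clone -b 'fix_tornado_5.0_#113' https://github.com/mathben/sockjs-tornado.git
pip install ./sockjs-tornado
```
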
Thank you, @jbms. I installed the patch from git using the command you've provided.

```
(neuro) [thoth@host neuroglancer]$ python python/examples/example.py -a 0.0.0.0
http://host:38474/v/fbd8fb5da6fa030ee4f82d24164425f487ce6320/
(neuro) [thoth@host neuroglancer]$ echo $?
0
```

I enabled debugging as you suggested earlier, but that changed no behavior.

Run it with python -i so that python stays running.

Thank you,

Are you sure you are using the right URL? It changes every time.

Yes, I double-checked it. I also downgraded tornado to make sure it's not a patch failure. That didn't help either.

Unfortunately I'm not sure what the issue is. What I'd recommend is that you install neuroglancer in development mode by cloning the git repository (per the instructions in python/README.md) and then running the editable install, as sketched below.

Then in python/neuroglancer/server.py, I'd recommend adding print statements to the get method of the StaticPathHandler class, so as to get a better idea of what is going on.
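A sketch of that development-mode setup, assuming the usual editable-install workflow (the authoritative command is in python/README.md):

```shell
git clone https://github.com/google/neuroglancer.git
cd neuroglancer/python
pip install -e .
```
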
Thank you, @jbms.

```python
class StaticPathHandler(BaseRequestHandler):
    def get(self, viewer_token, path):
        print(f"{viewer_token} at {path}")
        print(f"{self.server.token} is server token")
        print(f"{self.server.viewers} are server viewers")
        if viewer_token != self.server.token and viewer_token not in self.server.viewers:
            self.send_error(404)
            return
        try:
            print(f"global={global_static_content_source}")
            data, content_type = global_static_content_source.get(path)
        except ValueError as e:
            self.send_error(404, message=e.args[0])
            return
        self.set_header('Content-type', content_type)
        self.finish(data)
```

And it produced this during the test:

```
$ python -i examples/example.py
http://127.0.0.1:40671/v/2956cea95659d30405f79bae0217b64ca20265be/
>>> 2956cea95659d30405f79bae0217b64ca20265be at
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
2956cea95659d30405f79bae0217b64ca20265be at styles.css
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
2956cea95659d30405f79bae0217b64ca20265be at main.bundle.js
2c479cad54909b2caf87caed44d4c4117b5d7b3d is server token
<WeakValueDictionary at 0x2ae698f796a0> are server viewers
global=<neuroglancer.static.PkgResourcesContentSource object at 0x2ae698f8b400>
```

As you can see, the viewer token does not match the server token. Moreover, if I use the server token in place of the viewer token (I added one more print to see the server token), the same thing happens. Please keep helping... Thanks in advance.

It is confusing that requests are coming in for styles.css and main.bundle.js if you are just getting a 404 error --- the browser would not know to fetch those paths if it didn't receive the index.html file. That suggests that StaticPathHandler may not in fact be returning a 404 error --- you can verify that by adding additional print statements after the data, content_type = ... line, and also in the except handler, as sketched below. You might also try using the Chrome or Firefox developer tools to investigate what network requests are going through.
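As a sketch, those extra prints could look like this (hypothetical messages, dropped into the same try/except shown in the previous comment):

```python
try:
    print(f"global={global_static_content_source}")
    data, content_type = global_static_content_source.get(path)
    # Reaching this line means the content was found and will be served.
    print(f"serving {path!r}: {content_type}, {len(data)} bytes")
except ValueError as e:
    # Reaching this line means the 404 really does come from this handler.
    print(f"404 for {path!r}: {e.args[0]}")
    self.send_error(404, message=e.args[0])
    return
```
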
Ok. I've tried Chrome now and it kind of works (with lots of flickering). I deleted everything and started over. Still flickering. I don't know if it is supposed to be like that. (Ignore the green color - it's an artifact of the gif maker - but the flickering is real.)

[image: neuroglancer]
https://user-images.githubusercontent.com/2971670/37552348-44f6fd1e-2981-11e8-8c99-e421ff43bb24.gif

Thank you for your help!

I have observed flickering like that when Chrome falls back to software rendering using SwiftShader. I don't know the cause, but rendering tends to be too slow to be usable with SwiftShader anyway. Go to webglreport.com to see what driver is being used for WebGL. Neuroglancer should work with Intel and Nvidia graphics at least. Chrome will fall back to SwiftShader if it is unable to load the hardware rendering driver. Sometimes that can happen, at least on Linux, if your graphics driver has been updated since you last rebooted, and is fixed by rebooting your computer.

When I do
I'll see how I can fix these hardware rendering issues and will do some of my tests again. Thanks again!

Thank you for your help. I installed a new version of the graphics drivers and went to chrome://flags/ in Chrome to override the software rendering settings. It works!

Finally, after all the technical issues (on my side), we can come back to the actual topic of this question. In example.py I see 2 arrays:

- a: 4D - (3,z,y,x) - uint8 dataset
- b: 3D - (z,y,x) - uint32 dataset

I assume that b is used for the automatic mesh generation. In my case that will be segs. Thankfully the format and shape match.

Question: the format and shape of the overlay image a and my raw do not match. What should I do to my raw (if I need to do anything at all) for it to work as a does in example.py (if I understood it correctly)?

Neuroglancer can work with uint16 single-channel 3-d images. Just remove the custom shader property and the offsets.
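To make that concrete, a minimal sketch of adding a 3-d uint16 array as a plain image layer next to the segmentation (the names raw and segs and the voxel_size values are assumptions following example.py's conventions):

```python
import neuroglancer

viewer = neuroglancer.Viewer()
raw_vol = neuroglancer.LocalVolume(raw, voxel_size=[10, 10, 10])   # raw: 3-d (z, y, x) uint16 array
vol = neuroglancer.LocalVolume(segs, voxel_size=[10, 10, 10])      # segs: 3-d (z, y, x) uint32 labels
with viewer.txn() as s:
    s.layers['raw'] = neuroglancer.ImageLayer(source=raw_vol)
    s.layers['segs'] = neuroglancer.SegmentationLayer(source=vol)
print(viewer)  # prints the viewer URL
```
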
Thank you, @jbms

At first it didn't look quite right - I expected grayscale, though.

[image: example_slice]
https://user-images.githubusercontent.com/2971670/37559703-791f6546-29f8-11e8-894d-35868b3c21ee.gif

Once I asked for the mesh I got some strange behavior. It looks like there is a lot of noise in my data. Or am I interpreting it the wrong way? I just don't understand where it comes from - my raw data or my segs data?

Another thing I noticed is low CPU utilization during mesh generation (top shows only 200% CPU out of 7200% possible - it's a Skylake node). There was also quite a lot of memory utilization, but I assume that's due to the noise...

[image: example_crop]
https://user-images.githubusercontent.com/2971670/37559547-1c85d1f0-29f6-11e8-9fbb-43639a5647af.gif

Questions:

- Why don't I see a nice grayscale background behind the segments, as in the demos on the page? I assume it's because my format is uint16 - if so, should I normalize it to uint8 first?
- Do I generate the mesh correctly? (I double-click on the highlighted segment.)
- Do I need to do any pre-processing to speed it up?

Thank you very much for going through this with me!

It looks like you are displaying the raw data as a SegmentationLayer as well, rather than an ImageLayer.

Regarding the mesh generation, the fact that you were displaying both the raw data and your segmentation as segmentations may have affected things. However, in general the mesh generation is unfortunately slow. There are two steps --- an initial marching cubes step that runs over the full volume, using multiple threads, the first time you request any mesh, and then, for each individual segment, a simplification step that runs on a single thread the first time you request that segment.

The python integration doesn't provide a way to precompute the meshes, and is only practical for small volumes. For larger volumes you can convert the data to the precomputed format:
https://github.com/google/neuroglancer/blob/master/src/neuroglancer/datasource/precomputed

There are some third-party scripts to help you generate that format --- see e.g.
https://github.com/FZJ-INM1-BDA/neuroglancer-scripts

You were completely right. Once I attributed my data with Image and Segmentation layers, it all worked! Not only is it much more beautiful now - it is also much faster. I can clearly see that the first request touches the global mesh - but after that, locally selected meshes are generated very fast (compared to before). So I would say that this issue is no longer an issue! I might come back very soon with some more questions - but for now, it looks and does exactly what I would expect!

Last minor question: the Image layer is not as bright as I would expect - is there any way to control its colormap so that more of the active range of the image is used?

First, you generally want the image layer before the segmentation layer --- otherwise it displays based on the segmentation layer's opacity setting, which defaults to 0.5. Aside from that, there is no explicit setting, but you can adjust the contrast and brightness by modifying the shader.
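As a sketch, such a shader override might look like this (the 8.0 gain is purely illustrative and should be tuned to the data; toNormalized, getDataValue, and emitGrayscale are neuroglancer's standard shader helpers):

```python
with viewer.txn() as s:
    s.layers['raw'] = neuroglancer.ImageLayer(
        source=raw_vol,  # the LocalVolume wrapping the raw data, as above
        shader="""
void main() {
  // Stretch the active range of the data so it fills [0, 1].
  emitGrayscale(clamp(toNormalized(getDataValue()) * 8.0, 0.0, 1.0));
}
""")
```
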
Order of layers matters, indeed! Well, that was somewhat unexpected, but reasonable.

One more last question (I should stop calling them that).

Thank you very much for your help!

See this documentation regarding custom image layer shaders:

The layers are rendered in the order they are listed, and by default blended as:

dest_value = old_dest_value * (1 - src_alpha) + src_value * src_alpha

not_selected_alpha and selected_alpha determine the alpha values for the segmentation layer. object_alpha affects the 3-d rendering only.

Yes, the annotation layer lets you do exactly that --- see the discussion in #78 and this example link: https://goo.gl/vhcUXE

There isn't currently a great way to trigger mesh generation in advance, although you could manually call the get_object_mesh method of LocalVolume.
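A sketch of adjusting those alpha values from Python (the layer name and the numbers are illustrative):

```python
with viewer.txn() as s:
    s.layers['segs'].layer.selected_alpha = 0.5      # selected segments in cross-section views
    s.layers['segs'].layer.not_selected_alpha = 0.0  # keep unselected segments hidden
    s.layers['segs'].layer.object_alpha = 1.0        # meshes in the 3-d view
```
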
Thank you for the examples and documentation - I will experiment to see how far I can get. Couldn't figure out how to use
Thank you for the very detailed answers!

Thank you for this, @jbms. I have got this error:

```
Traceback (most recent call last):
  File "example.py", line 85, in <module>
    vol.get_object_mesh(0)
  File "neuroglancer/python/neuroglancer/local_volume.py", line 243, in get_object_mesh
    raise InvalidObjectIdForMesh()
neuroglancer.local_volume.InvalidObjectIdForMesh
```

Is there any alternative to this? Maybe an additional flag to the SegmentationLayer which would force mesh generation... But I don't know how to implement it...

Even though you receive the error, the first step, the marching cubes, is still done. To do the simplification as well, you would need to call get_object_mesh for every object id.
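A sketch of that per-id loop, assuming segs is the uint32 label array and vol the LocalVolume wrapping it (names from earlier in this thread); np.unique supplies the list of object ids:

```python
import numpy as np

for object_id in np.unique(segs):
    try:
        vol.get_object_mesh(int(object_id))  # triggers the simplification step for this id
    except neuroglancer.local_volume.InvalidObjectIdForMesh:
        pass  # ids with no mesh (e.g. the background label) raise and can be skipped
```
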
I put a try/except around it and it indeed worked! Thanks!

I don't follow this point... And even if I understood what you meant, I still don't know where to get the list of objects from...

I think this is related enough to include in this thread - but if not, I can create a new issue. Is there a way to export the mesh that neuroglancer python creates and then load it from a static URL? I've been unsuccessful in determining what the output of get_object_mesh is (a raw byte string?).

The format is identical to the precomputed mesh format, documented here:

If you write the output to a file, then create the appropriate manifest JSON file for each object, you can view it as a precomputed mesh source.
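A sketch of writing one object's mesh together with its manifest, assuming the legacy single-resolution layout of the precomputed format (the `<segment-id>:0` manifest name and "fragments" key come from that format; the fragment file name here is an arbitrary choice):

```python
import json
import os

mesh_dir = 'mesh'
os.makedirs(mesh_dir, exist_ok=True)

object_id = 1  # illustrative segment id
data = vol.get_object_mesh(object_id)  # raw bytes in the precomputed mesh encoding

fragment = '%d.frag' % object_id
with open(os.path.join(mesh_dir, fragment), 'wb') as f:
    f.write(data)

# Each object id gets a JSON manifest listing its fragment file(s).
with open(os.path.join(mesh_dir, '%d:0' % object_id), 'w') as f:
    json.dump({'fragments': [fragment]}, f)
```
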
Thank you, @jbms, for your help!

How do I remove the visualization of the orthogonal 3-d planes? The reason I need it is that I can browse through my data in the 3 other windows, but I would like to see the generated 3-d object clearly in my 3-d view without the panels getting in my sight. Maybe I can set transparency for the panels or somehow disable them in the 3-d view?

Use the Slices checkbox or press s.