Questions on obj file format conversion and nrrd file format support #16
Hi m, Yes, it does support converting OBJ. The CloudVolume Mesh object has the same properties too.

```python
from zmesh import Mesh

obj = load_obj()  # the OBJ file contents as a string
mesh = Mesh.from_obj(obj)
binary = mesh.to_precomputed()
```

We also support the binary PLY format. We don't currently have support for the NRRD file format. I haven't used it personally, though I've seen other people in the field using it. I googled around and saw that there's a Python library called pynrrd which might be useful for you. Once you have a zmesh Mesh object (or a CloudVolume Mesh object), you can access the underlying numpy arrays easily:
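(The snippet that followed wasn't captured here. As an illustrative stand-in, assuming the standard `vertices`/`faces` attributes on the Mesh object and pynrrd's `nrrd.read`:)

```python
import nrrd  # pynrrd
from zmesh import Mesh

mesh = Mesh.from_obj(load_obj())  # load_obj() as above: returns the OBJ text
print(mesh.vertices.shape)        # (N, 3) numpy array of vertex positions
print(mesh.faces.shape)           # (M, 3) numpy array of triangle vertex indices

# Reading an NRRD volume into a numpy array with pynrrd:
data, header = nrrd.read("annotation_50.nrrd")
print(data.shape, data.dtype)
```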
|
Thank you Will, sorry it took me a while to get to the datasets. Forgive me for my slow reply. I have two questions. If I understand correctly, once I save Thank you, |
Hi m, Once you save the info file and the mesh in a file named Will |
Hi Will, I'm trying to understand the precomputed file structure with mesh data, and I'm having problems reading files into CloudVolume or zmesh and converting them to Wavefront OBJ files. What I'm trying to accomplish is to convert
I think I am confused here about what to provide as a
CloudVolume does not seem to find segmentations. Am I pointing to the wrong directory or am I missing something in
Thanks, |
Hi m, You're really close! Try something more like this and make sure there are at least two directories in the path.
CV path issue: seung-lab/cloud-volume#391 You can also do:
Let me know if you need more tips! |
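(For reference, a minimal sketch of a path with at least two directory levels; the layout below is hypothetical, not the original suggestion:)

```python
from cloudvolume import CloudVolume

# e.g. file:///<project>/<dataset>/<layer> -- at least two directories deep
cv = CloudVolume("precomputed://file:///data/allen_ccf/annotation_50um", mip=0)
print(cv.info)  # sanity check that the layer's info file was found
```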
Hi Will, Hmm... I think I have at least two directories but still seem to get the same errors. Any thoughts?
Thanks, |
Hi Will, when I print manifest_paths from the code, CloudVolume seems to be looking for these paths: ['mesh/32767:0', 'mesh/65534:0']
Although, I don't think I have a mesh folder generated from Igneous. What I see is mesh_mip_0_err_40. Do I need to rename this folder to mesh? |
Yes, you can either rename the "mesh" property in the info file or rename the directory.
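(A sketch of the info-file route, assuming a local file:// layout; the path below is hypothetical:)

```python
from cloudvolume import CloudVolume

# Point the "mesh" key of the info file at the existing mesh directory
# instead of renaming the directory itself.
cv = CloudVolume("file:///data/allen_ccf/annotation_50um")
cv.info["mesh"] = "mesh_mip_0_err_40"
cv.commit_info()
```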
|
Great, that seems to be working! Thanks Will, -m |
Glad that helped! |
Hi Will, When I generate precomputed mesh files, the top-level mesh (root, 997.obj) of the Allen CCF annotation appears differently on Neuroglancer. Is this expected? I used an Igneous mesh task to process TIFF files split from the original NRRD. |
This is not expected lol. Can you show me your processing steps in more detail? I'm not familiar with the Allen model format. |
Sure. It's been a while since I processed the files, and this was before I started using the Igneous CLI. (Sorry, not sure which version of Igneous I used at the time.) I believe the steps I took were pretty straightforward. The other segmentation meshes look good to me on Neuroglancer; I just happened to realize that only the root (997) comes out differently. Steps:
|
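(For context, a generic sketch of that kind of NRRD-to-precomputed ingest; this is not necessarily the exact pipeline used above, and the paths, resolution, and chunk size are illustrative:)

```python
import nrrd  # pynrrd
import numpy as np
from cloudvolume import CloudVolume

labels, header = nrrd.read("annotation_50.nrrd")
labels = np.asarray(labels, dtype=np.uint32)  # Allen CCF IDs exceed uint16

info = CloudVolume.create_new_info(
    num_channels=1,
    layer_type="segmentation",
    data_type="uint32",                # must match the source dtype
    encoding="raw",
    resolution=[50000, 50000, 50000],  # 50 um expressed in nm
    voxel_offset=[0, 0, 0],
    chunk_size=[64, 64, 64],
    volume_size=labels.shape,
)
cv = CloudVolume("file:///data/allen_ccf/annotation_50um", info=info)
cv.commit_info()
cv[:, :, :] = labels  # upload, then run the Igneous meshing step
```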
NRRD to OBJ (PLY, STL): is there any Python code for the same? |
@kpbhat25 I haven't tried NRRD to OBJ conversion. The OBJ/PLY files are available here, as far as I know: https://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/annotation/ccf_2017/structure_meshes/ |
Hi guys, sorry I just don't have the bandwidth to help with this right now. My sincere apologies. |
Hi @william-silversmith any clue so far? Perhaps missing parameter during a mesh task? |
Hi m, I took a quick look and noticed two things about the dataset. One, during ingest, the TIFFs are uint32, but the ingest code treats them as uint16, which causes warnings about integer overflow. I fixed that and looked at the data itself. The resulting mesh is pretty ugly, but this gap does appear to be a legitimate representation of the underlying mesh for ID 997. Are you absolutely sure that pretty mesh you showed above was really derived from annotation_50.nrrd? |
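(As an aside on why that mismatch matters, a small illustration; the 70000 label value is made up:)

```python
import numpy as np

labels = np.array([997, 70000], dtype=np.uint32)
# Casting to uint16 silently wraps values above 65535 (70000 % 65536 == 4464),
# which corrupts segment IDs during ingest.
print(labels.astype(np.uint16))  # [ 997 4464]
```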
@kpbhat25 sorry it took me a while to get back to you too. You can follow m's code as a guide. Once you have a Neuroglancer volume generated with meshes, you can use CloudVolume to convert them to OBJ using
There are probably some shortcut ways to do this too if your data fits in memory, but I don't really know your problem that well. |
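(A rough sketch of that conversion, assuming CloudVolume's mesh interface exposes `get()` and the returned mesh objects support `to_obj()`; check the docs for the exact calls in your version, and note the path and segment ID below are hypothetical:)

```python
from cloudvolume import CloudVolume

cv = CloudVolume("file:///data/my_dataset/segmentation")

segid = 997
meshes = cv.mesh.get(segid)  # newer CloudVolume versions return a dict keyed by segid
mesh = meshes[segid] if isinstance(meshes, dict) else meshes

with open(f"{segid}.obj", "wb") as f:
    f.write(mesh.to_obj())   # to_obj() serializes the mesh as OBJ bytes
```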
Thanks for catching the data type incompatibility, I missed that part. To be honest with you, I don't know if the underlying mesh for 997 is derived from the 50 um NRRD file. There are 10, 25, and 100 um resolution NRRD files in the same folder (https://download.alleninstitute.org/informatics-archive/current-release/mouse_ccf/annotation/ccf_2017/) which I haven't checked yet. It is kind of strange to think that only this mesh appears differently though. Perhaps the 997 mesh was intentionally added separately? |
The thing is, the 997 mesh is an enclosing mesh, so if there was a downsampling operation, it could have thinned out and erased crucial information about the envelope. Why don't you give some of the other volumes a try and let me know how it goes? I'll be on vacation for the next 3 weeks though.
|
Interesting point. Let me give it a try with the other volumes too, just in case. One thing to verify on the "data type": is this simply a matter of editing the "info" file, or should I be running CloudVolume/Igneous again for the downstream chunking? And I suppose I should always configure the info file with the TIFF's data type before running? e.g.) 8-bit grayscale ---> uint8. Have a great vacation @william-silversmith !! |
Hi m, I think you need to re-run CV/Igneous as the buffer read that translated the tiff files to numpy arrays was potentially corrupted. You also have to change the info file. Yes, you have to make sure the data types match every time. Thanks for the well wishes and good luck! |
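(A small sketch of one way to check that the types match before re-running ingest; the file names are hypothetical:)

```python
import tifffile
from cloudvolume import CloudVolume

img = tifffile.imread("slices/annotation_0000.tif")
cv = CloudVolume("file:///data/allen_ccf/annotation_50um")

# The TIFF dtype and the info file's "data_type" should agree before ingest.
assert str(img.dtype) == cv.info["data_type"], (img.dtype, cv.info["data_type"])
```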
I was wondering if I could consolidate all the structure mesh files (.obj) and then write them to a precomputed format, and then run Igneous commands ( Now, I remember CloudVolume handles each individual
or maybe pass numpy to CloudVolume and start from there?
Thanks, |
Hi m, I think you're on the right track.

```python
from cloudvolume import CloudVolume, Mesh  # zmesh's Mesh offers the same from_obj interface

with open("997.obj", "rt") as f:
    obj = f.read()

with open("998.obj", "rt") as f:
    obj2 = f.read()

mesh = Mesh.from_obj(obj)
mesh2 = Mesh.from_obj(obj2)

m = Mesh.concatenate(mesh, mesh2)
m.segid = 1

cv = CloudVolume(...)
cv.mesh.put(m)
```
|
Hi Will (@william-silversmith), It seems like these .obj files contain "e" (scientific) notation in the vertices, and mesh.py throws an error when converting the mesh. How should I handle these "e" values?
|
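(For reference, the problem is vertex lines written in scientific notation, e.g. `v 1.2345e-02 -3.4e+00 7.8e-01`. A possible pre-processing workaround for parser versions that don't accept "e" notation, offered as a sketch rather than part of the original exchange, is to rewrite those coordinates as plain decimals before calling `from_obj`:)

```python
import re

def expand_sci_notation(obj_text: str) -> str:
    # Rewrite numbers like 1.2345e-02 as plain decimals so OBJ parsers that
    # don't understand "e" notation can read the file.
    return re.sub(
        r"-?\d+(?:\.\d+)?[eE][+-]?\d+",
        lambda m: format(float(m.group(0)), ".9f"),
        obj_text,
    )
```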
I guess I should update the OBJ parser to handle that. Will do that if I get a chance today. |
Thank you @william-silversmith !!! |
Check out version 1.6.2 |
Thank you Will for the latest updates on the zmesh library. Will there be an updated cloud-volume (from_obj, mesh.py) release with this change as well? |
I'll try to also make that change there as well. |
Hi @william-silversmith , I think Neuroglancer does not like the way I create an
|
Hi m, I think you have to set the Will |
Hi Will (@william-silversmith), I think Neuroglancer is expecting chunk files instead of a single file in the mesh folder, given the way I configured it. Here is my (info file)
(Generated files)
|
What is "PRs"?? Okay! I've been using Google Colab to data wrangle this part of the file creation at the moment. It seems like I need to explicitly downgrade the numpy version to match with Python 3.8.x but managed to use CloudVolume with Colab. e.g.) "!pip install -U numpy==1.22.4 cloud-volume" |
Hi m, Neuroglancer is just looking for image files that aren't there. You can ignore those errors. If the mesh isn't appearing, make sure you're using A PR is a GitHub Pull Request. It's a contribution to CloudVolume that gets reviewed by me and then merged into the codebase. Will |
Hi Will, It does seem to be loading, looking at the Neuroglancer chunk statistics. (Ignoring the last row because there are no image files.) One missing component was an (mesh/info)
|
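(For unsharded "legacy" meshes, the mesh/info file Neuroglancer expects is typically a small JSON tag like the one written below; treat this as a sketch and confirm against the Neuroglancer precomputed spec, and note the directory name is just an example:)

```python
import json

mesh_info = {"@type": "neuroglancer_legacy_mesh"}

# Write the tag into the mesh directory referenced by the top-level info file.
with open("mesh_mip_0_err_40/info", "wt") as f:
    json.dump(mesh_info, f)
```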
Hi Will (@william-silversmith), After some trials with the transformation matrix in the viewer, I managed to load up the concatenated mesh! So, going back to the original question, the precomputed mesh from the .obj file seems to display the whole brain as expected (997.obj). I suppose I should use the .obj files instead of slicing the NRRD into a TIFF stack and then running CV/Igneous to generate the mesh here. On a side note, do CV/Igneous support multi-resolution meshes from .obj files? Thank you for your help! |
Hi m, Congrats on getting it working! Currently, creating multires from an obj is not supported as a workflow, but there's no reason it couldn't in principle be done. I don't currently have the bandwidth to modify the code myself, but if you would like, you can look at how unsharded multires files are created and adapt that code. https://github.com/seung-lab/igneous/blob/master/igneous/tasks/mesh/multires.py#L45-L81 |
Hi Will,
More related to the workflow for generating meshes in the precomputed file format (Questions on understanding the workflow for generating mesh "precomputed" data #406): does zmesh support converting a collection of OBJ files to one or more precomputed files? Basically, I'm looking to do the other way around of the following code. Also, is the NRRD file format supported for generating meshes?
Thank you,
-m