Precomputed Volumes able to decode Draco encoded meshes #131

Closed
wants to merge 1 commit

Conversation

manuel-castro
Contributor

At SeungLab we are moving to Draco encoding for our meshes to reduce storage and user download sizes. This PR enables precomputed volumes to decode Draco-encoded meshes as well as the legacy gzip vertices-and-faces format.

* Added Draco WebAssembly Decoder to Precomputed

Precomputed volumes are now able to decode Draco meshes, or the old
gzip vertices and faces format.
@jbms
Collaborator

jbms commented Apr 4, 2019

This is great, thanks! We experimented internally with using draco compression but didn't get around to adding it to the public client, partly due to concerns about increasing the bundle size. However, Larry already proved the viability of code splitting with regards to tensorflow.js in the computed datasource, so I think a similar approach could also work.

I think it would be very helpful to rely on metadata or a source parameter to indicate the mesh format rather than trying to detect it after the fragment is received. In particular, the current approach of always first trying to decode the mesh as draco is problematic if code splitting is used since then the draco module would always have to be downloaded.
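A metadata-driven dispatch along those lines might look like the following sketch (all names here are hypothetical, not Neuroglancer's actual API):

```typescript
// Hypothetical sketch: pick the mesh decoder from explicit metadata
// (e.g. a field in the precomputed "info" file) instead of sniffing
// the fragment bytes after download.
type MeshEncoding = 'draco' | 'gzip_legacy';

interface MeshSourceInfo {
  meshEncoding?: MeshEncoding;
}

function selectDecoder(info: MeshSourceInfo): MeshEncoding {
  // Defaulting to the legacy format means the draco module can be
  // loaded lazily (e.g. via dynamic import) only when the metadata
  // explicitly requests it, which keeps the main bundle small.
  return info.meshEncoding ?? 'gzip_legacy';
}
```

With this arrangement the draco code path is never touched for legacy sources, so code splitting pays off on every load.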

Another thing to consider is that often GPU memory is even more of a limit than network bandwidth. In this branch (#129) I have started to implement a few improvements in that regard:

  1. encoding normal vectors using 2 8-bit values rather than 3 32-bit values (this is very cheap, and orthogonal to draco encoding since normals are computed client-side anyway)
  2. using 16-bit indices rather than 32-bit indices when the number of vertices does not exceed 65534,
  3. and converting from independent triangles to triangle strips (in the best case reduces number of indices by a factor of 3). I implemented client-side code to do this, but it is rather expensive so I disabled it by default.
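As a rough illustration of point 2, the index width can be chosen per mesh (illustrative helper, assuming a plain index array):

```typescript
// Sketch of point 2 above: use 16-bit indices whenever every index fits.
// The 65534 cutoff leaves 0xffff free, e.g. as a primitive-restart value
// for triangle strips.
function packIndices(indices: number[]): Uint16Array | Uint32Array {
  const maxIndex = indices.length === 0 ? 0 : Math.max(...indices);
  return maxIndex <= 65534
    ? Uint16Array.from(indices)
    : Uint32Array.from(indices);
}
```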

An additional way to reduce GPU memory is to quantize vertices, e.g. using 8 or 10 bits per component rather than 32 bits per component.
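A minimal sketch of that kind of quantization (hypothetical helpers, not code from the branch): each coordinate is normalized to a bounding box and rounded to a fixed-point value.

```typescript
// Quantize one coordinate to `bits` bits within [lo, hi]; a dequantize
// step (or the vertex shader) maps it back. Illustrative only.
function quantize(x: number, lo: number, hi: number, bits: number): number {
  const steps = (1 << bits) - 1; // e.g. 1023 for 10 bits
  const t = Math.min(Math.max((x - lo) / (hi - lo), 0), 1);
  return Math.round(t * steps);
}

function dequantize(q: number, lo: number, hi: number, bits: number): number {
  const steps = (1 << bits) - 1;
  return lo + (q / steps) * (hi - lo);
}
```

The worst-case position error is half a quantization step, (hi - lo) / (2 * ((1 << bits) - 1)), so 10 bits per component is usually ample when the bounding box is a single mesh fragment.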

Draco already does convert to triangle strips and quantizes vertices, but by default the decoder converts back to floating point vertices and independent non-strip triangles. It would be much better to skip that step of the decoding process and use the triangle strips and quantized vertices directly.

@jbms
Collaborator

jbms commented May 18, 2019

Thanks for your work on getting this started. There is now support for a new multi-resolution precomputed mesh format in the master branch, which also supports (in fact requires) draco encoding of the mesh data.

There is also support for "sharded" storage which you may be able to use, or adapt, to greatly reduce the number of separate files that must be stored, if you compute meshes on a per-block basis and your segment ids are also per-block.
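Roughly, sharded storage locates an object like this (a sketch in the spirit of the sharded format's parameters — preshift_bits, minishard_bits, shard_bits — shown with the identity hash; not the actual implementation):

```typescript
// Sketch: a (pre-shifted, hashed) segment id selects a minishard and a
// shard, so millions of small mesh files collapse into a few large ones.
function shardLocation(
  id: bigint,
  preshiftBits: bigint,
  minishardBits: bigint,
  shardBits: bigint,
): {shard: bigint; minishard: bigint} {
  const hashed = id >> preshiftBits; // identity hash; a real hash also works
  const minishard = hashed & ((1n << minishardBits) - 1n);
  const shard = (hashed >> minishardBits) & ((1n << shardBits) - 1n);
  return {shard, minishard};
}
```

Reads then use an index within the shard to find the object's byte range, so per-object HTTP requests become range requests into a small, fixed set of files.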

@jbms jbms closed this Oct 1, 2020