EXT_meshopt_compression extension #1830
Conversation
Incorporate details about filters and address the review feedback regarding resampling.
In terms of compression ratio, here are the results of compressing data using this extension compared to using Draco (full results as a Google Spreadsheet: https://docs.google.com/spreadsheets/d/1V0jls9QSb7DRE3-JseCHRcnAzIAp-uSBssnEvx2dQxI/edit#gid=0).
Using gltfpack as a pre-processor in addition to Draco is important to establish a level playing field, as gltfpack does a lot of high-level scene processing as well.
In terms of decompression time, this tends to vary a bit between the modes and filters used. Using two models as a test, Buggy.gltf from glTF-Sample-Models (300K triangles, 245K vertices) and Thai Buddha (https://sketchfab.com/3d-models/thai-buddha-cba029e262bd4f22a7ee4fcf064e22ee, 6M triangles, 3M vertices), timings were measured on Chrome Canary on an i7-8700K. Based on the size and performance metrics, it looks like Draco is still a viable alternative in cases when download size is critical and the content consists purely of elements that Draco supports well, while this extension may provide a better balance and supports compression for all non-texture data that can be encoded in glTF.
@donmccurdy @lexaknyazev JFYI I've resubmitted this with the new extension name and the enum changes we discussed on the previous PR. Also, all numbers in this PR are now current (the old PR had some numbers from an earlier implementation that was less efficient). This PR should now be the central one, please let me know if you have further feedback! (also please let me know if you'd like me to add you to the contributor list, I don't know how that works)
The specification seems clear to me, and (not that this is required) I'm very happy with the performance and compression upsides here. I don't think I'm able to usefully review the bitstream, but I'm happy to merge this if others agree.
Aside, I don't think I'd realized the meshopt WASM decoder was only 6kb gzipped. That's easily the smallest useful WASM library I've ever seen, so thank you and great work on that.
Seems like a good plan. 👍
@lexaknyazev Thanks a lot for the feedback, I believe I've addressed it (mostly tried to use one commit per comment for easier diff review)
@lexaknyazev No worries, thanks a lot for the help!
Rendered version (Updated 10/2/2020)
This is an extension designed to reduce the transmission size of glTF files. The structure of this extension is very different from KHR_draco_mesh_compression: it works on a per-bufferView basis instead of a per-mesh basis, with all of the remaining glTF schema left intact.
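To make the per-bufferView structure concrete, here is an illustrative JSON fragment (field names follow the extension schema as I understand it; exact enum values were still being discussed in this PR, so treat this as a sketch rather than normative):

```json
{
  "bufferViews": [
    {
      "buffer": 0,
      "byteOffset": 0,
      "byteLength": 1024,
      "byteStride": 16,
      "extensions": {
        "EXT_meshopt_compression": {
          "buffer": 1,
          "byteOffset": 0,
          "byteLength": 400,
          "byteStride": 16,
          "count": 64,
          "mode": "ATTRIBUTES",
          "filter": "OCTAHEDRAL"
        }
      }
    }
  ]
}
```

The extension object points at the compressed payload in a separate buffer; the enclosing bufferView keeps its usual uncompressed dimensions, so the rest of the glTF schema (accessors, meshes, etc.) is unaffected.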
During loading of a compressed glTF file, loaders are expected to decompress the compressed bufferView data and then proceed with loading as usual; this is compatible, for example, with decoding buffer views directly into GPU-visible buffers.
Any type of data can be compressed; however, this extension isn't a general-purpose compressor. Instead, it provides several algorithms tailored to the different kinds of data that may be stored inside a buffer view. All of the algorithms are designed for extremely fast decoding: on modern desktop CPUs, decoding runs at 2+ GB/s for native code and around 1 GB/s in Wasm, using Wasm SIMD to accelerate parts of the processing. The decoders are implemented in meshoptimizer (https://github.com/zeux/meshoptimizer), and files compressed with this extension can be produced by gltfpack (https://github.com/zeux/meshoptimizer/tree/master/gltf) - or, of course, any other tool that complies with this specification.
For each bufferView, an appropriate compression mode must be picked to maximize the compression ratio of the data. The extension provides a mode for attribute data (suitable for mesh attributes, animation keys or values, and instance transform components), a mode for triangle indices (suitable for mesh index data of triangle-list primitives), and a general index mode (suitable for mesh index data of other primitives as well as sparse indices for general accessor storage). It's the encoder's job to split the compressible data into bufferViews as necessary. Additionally, for all compression modes, preparing the data is important for high compression ratios: this includes finding an optimal order for the data elements, quantizing them with KHR_mesh_quantization, and possibly other kinds of preprocessing such as animation data resampling.
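A hypothetical sketch of why this preparation matters (this is NOT the actual meshopt bitstream, just the family of transforms such codecs build on): once values are quantized and well ordered, neighbouring elements are close, so delta + zigzag encoding produces mostly small byte values that compress well downstream.

```javascript
// Zigzag maps signed deltas to small unsigned values:
// 0 -> 0, -1 -> 1, 1 -> 2, -2 -> 3, ...
function zigzagEncode(v) { return (v << 1) ^ (v >> 31); }
function zigzagDecode(v) { return (v >>> 1) ^ -(v & 1); }

// Store each value as the zigzag-encoded difference from its predecessor.
function deltaEncode(values) {
  let prev = 0;
  return values.map(v => { const d = zigzagEncode(v - prev); prev = v; return d; });
}

// Invert: accumulate decoded deltas back into absolute values.
function deltaDecode(encoded) {
  let prev = 0;
  return encoded.map(e => { prev += zigzagDecode(e); return prev; });
}
```

For a well-ordered quantized stream like `[100, 101, 99, 102]`, the encoded form consists of small integers, which is exactly the shape of data that both the byte-oriented codec and a trailing gzip pass handle well.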
Additionally, for attribute storage (mesh, animation, etc.), the encoder may decide to use compression filters. These provide extra savings on top of the attribute compression at the cost of extra precision loss; for example, instead of storing quantized quaternion values, a filter can store only three components of each quaternion and reconstruct the quaternion from those, which can result in a small precision loss. All filters are designed to be variable bit rate: the encoder can pick the optimal number of bits used for the filtered data, and using fewer bits results in the attribute compression needing fewer bytes to represent the data stream.
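The quaternion example above can be sketched as follows. This is an assumption-laden illustration of the underlying math only: a unit quaternion satisfies |q| = 1, so one component can be rebuilt from the other three. The real filter additionally quantizes the stored components and tracks which component was dropped, which is omitted here.

```javascript
// Drop the w component of a unit quaternion [x, y, z, w].
// Assumes w >= 0 (a real encoder would drop the largest component
// and record which one it was).
function dropW(q) {
  const [x, y, z] = q;
  return [x, y, z];
}

// Reconstruct w from the unit-length constraint x^2 + y^2 + z^2 + w^2 = 1.
function reconstructW(xyz) {
  const [x, y, z] = xyz;
  const t = 1 - (x * x + y * y + z * z);
  const w = Math.sqrt(Math.max(0, t)); // clamp guards against rounding noise
  return [x, y, z, w];
}
```

Storing three components instead of four is where the savings come from; the precision loss mentioned above comes from quantizing those three components, not from the reconstruction itself.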
Unlike Draco or Basis ETC1, which have capable entropy coders embedded into the format, this extension doesn't employ Huffman, rANS, or similar entropy coders; instead, all algorithms are designed to reduce the data size as much as possible while still representing the data as a byte sequence. The expectation is that for maximum compression ratio, a general-purpose compressor such as gzip (which is commonplace for asset delivery on the web) can be applied on top. Notably, this extension acts as a pre-processor for gzip et al.: by itself, gzip can't get anywhere close to the level of compression this extension provides, so this extension compresses the data, gzip can compress the result further, and gzip (or an equivalent compressor) remains optional.
For mesh data, this extension is usually slightly less efficient than Draco in terms of transmission size. However, it's completely general (Draco glTF can't compress point clouds, non-triangle geometry, or morph targets), and it supports mesh, animation, instance data, and general index/attribute compression with a simple specification.