
legacythree2gltf: Add legacy JSON to glTF converter #15552

Closed · wants to merge 3 commits

Conversation

@donmccurdy (Collaborator) commented Jan 9, 2019

Will probably need some tweaks and bug fixes but this seems like a good start. Fixes #15547.

Related PRs:

@donmccurdy (Collaborator, Author)

If we want to have the converter, I think it's OK to review and merge this without waiting on #15011 (which shouldn't require any converter changes).

@titansoftime (Contributor)

Wanted to let you know: I used this converter on my models and it was mostly a success. SkinnedMeshes work as well. Very nice.

The only issue I see is that, for some reason, some files get quite a bit larger. For example, I have a castle model and the exact same model decimated by about 50% in Blender for the mobile version. Oddly enough, the converted decimated file is significantly larger than the converted non-decimated file. Very strange. Perhaps it's due to unnecessary exported attributes, as some have reported? I can provide my files if wanted.

I used the glTF-Pipeline tool to convert the glTF to GLB afterwards. It works great, but the odd file bloat remains.

So yeah, this mostly does the job, which is good because my animated JSON models do not work at all with LegacyJSONLoader, since it handles bones completely differently (no initBones, etc.), and I don't like being more than a couple of versions behind the latest three.js build.

@donmccurdy (Collaborator, Author)

Glad it's (almost) working!

There is an --optimize flag; are you still seeing the size increase when using it? It handles the only source of bloat I'm aware of. If you can share a JSON example that doesn't convert well, I'll take a look.

Assuming the export is 1:1 (which isn't guaranteed, with the Geometry->BufferGeometry step involved) I'd expect GLB to be strictly smaller than JSON, with some exceptions for very tiny files and maybe morph targets.
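To make the size intuition above concrete, here is a small Node sketch (synthetic data, not from the actual files discussed) comparing the same vertex data stored as a binary Float32Array, as glTF does, versus as JSON text truncated to 4 decimal digits, as the legacy format allows:

```javascript
// Rough size comparison: 10,000 vertex positions stored as binary
// Float32Array (glTF .bin style) vs. as JSON text (legacy .json style).
const COUNT = 10000 * 3; // 10k vertices, xyz each

const values = new Float32Array(COUNT);
for (let i = 0; i < COUNT; i++) values[i] = Math.random() * 100 - 50;

// Binary size is fixed: 4 bytes per float, regardless of precision.
const binaryBytes = values.byteLength; // 120,000 bytes

// JSON text, even truncated to 4 decimal digits, is larger: each
// number serializes to roughly 7-9 characters plus a comma.
const jsonText = JSON.stringify(
  Array.from(values, (v) => Number(v.toFixed(4)))
);
const jsonBytes = Buffer.byteLength(jsonText, 'utf8');

console.log(`binary: ${binaryBytes} bytes, JSON: ${jsonBytes} bytes`);
```

On typical coordinate ranges the JSON text comes out roughly twice the size of the binary buffer, which is why the GLB would normally be expected to be strictly smaller.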

@titansoftime (Contributor)

Indeed.

Yes, I am using the --optimize flag. I will upload the JSON files when I get back home =]

@titansoftime (Contributor) commented Mar 19, 2019

So I may have an idea why the glTF/GLB file size is not reduced as much as expected (and is sometimes even larger) compared to the JSON file.

The Blender JSON exporter allows for adjustable float precision (I usually use 4–6 myself). However, since BufferGeometry uses Float32Arrays for its attributes, the precision isn't really alterable and is locked at maximum (I found this out while attempting to modify three2gltf.js and thought I was losing my mind).

So anyway, of course the extra text length from full-precision floats is going to cause a size increase... right?

I considered just hacking away and reducing the precision in the stringified JSON prior to writing the file. Not sure if that would work; I haven't tried yet.
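For what it's worth, the precision hack described above can be sketched with a JSON.stringify replacer; the sample structure here is hypothetical, not the actual exporter output:

```javascript
// Sketch: round every numeric leaf to a fixed precision while
// serializing, via a JSON.stringify replacer. The "vertices" key
// below is a made-up example, not the real three.js JSON schema.
function stringifyWithPrecision(data, digits) {
  return JSON.stringify(data, (key, value) =>
    typeof value === 'number' && Number.isFinite(value)
      ? Number(value.toFixed(digits))
      : value
  );
}

const sample = { vertices: [0.123456789, -1.987654321, 42] };
const out = stringifyWithPrecision(sample, 4);
console.log(out); // {"vertices":[0.1235,-1.9877,42]}
```

Note this only shrinks the JSON text; it would not affect the size of binary Float32Array data once converted to glTF.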

Attached is an example json/glb/gltf zip.
58.zip

@donmccurdy (Collaborator, Author) commented Mar 19, 2019

It's not the vertex data that's larger: the Float32Array binary data is still going to be smaller than the JSON representation, even when the JSON only goes to 4 digits of precision. If you split the file, it's easier to see what's going on:

gltf-pipeline -i 58.gltf -o split/58.gltf -s
ls -lh
-rw-r--r--  1 donmccurdy  eng   678K Mar 19 12:52 58.bin
-rw-r--r--  1 donmccurdy  eng   1.1M Mar 19 12:52 58.gltf

In the file above, all of the vertex data is in the .bin file, and it's nice and compact at ~50% of the size of the original .json file. The remaining 1.1MB of JSON in the .gltf file is metadata identifying where each bone's animation comes from in the binary payload. While it's larger than you'd expect, with 50 bones and animations on each, the good news is that this metadata is very repetitive and GZIP-friendly: the whole 1.1MB blob zips down to just 60KB.

Comparing final results:

| file | original size | gzipped |
| --- | --- | --- |
| 58.json | 1.3MB | 0.4MB |
| 58.gltf | 1.4MB | 0.45MB |
| 58.glb + Draco | 0.73MB | 0.13MB ⚡️ |

To create the compressed .glb I used gltf-pipeline; there are various compression settings if you need them. For some reason the .glb in the folder you shared seems to be invalid, and that may be an exporter bug we should look into.

gltf-pipeline -i 58.gltf -o 58_compressed.glb -d

@titansoftime (Contributor)

Ah, OK, I understand.

I use the gltf-pipeline as well.

Hmm, I will have to look into why that GLB is invalid; I've tested several using this method and they were OK.

Draco is fantastic at compression, but I have found that the decompression time outweighs the advantage, especially for files that are already cached in the browser. Super impressive tech nonetheless.

@donmccurdy (Collaborator, Author) commented Mar 19, 2019

> Hmm, will have to look into why that glb is invalid, I've tested several using this method and they are ok.

I have a custom script I'm using to inspect file size, so it could be that I have a bug in my script too, rather than the file being bad. It does open OK for me.

> Draco is fantastic at compression but I have found that the decompress time outweighs the advantage, especially for files that are already cached in the browser

Can't remember if I mentioned this before, but just in case: if you're loading multiple models in parallel, the version of DRACOLoader in #15249 should get you a big decoding speed improvement using workers. But yeah, I agree it depends on the use case whether that compression is worthwhile.

EDIT: Well, I should benchmark this. In theory it should be faster, but loading the decoder is not optimized yet; so far I have only checked that it doesn't block rendering while decoding.

@Mugen87 (Collaborator) commented Nov 27, 2019

Closing due to #17976. Still, thanks for this code!

@Mugen87 Mugen87 closed this Nov 27, 2019
@donmccurdy donmccurdy deleted the feat-legacythree-converter branch November 27, 2019 19:00
@donmccurdy (Collaborator, Author)

Sounds good! I've copied this over to a Gist that uses the three npm package rather than the local file structure: https://gist.github.com/donmccurdy/c090dc53c7bfb704ef9de654ecc07632

Successfully merging this pull request may close these issues.

Consider providing a converter for old JSON models