legacythree2gltf: Add legacy JSON to glTF converter #15552
Wanted to let you know: I used this converter on my models and mostly had success. SkinnedMeshes work as well. Very nice.
Only issue I see is that for some reason, some files get quite a bit larger. For example, I have a Castle model and the exact same model decimated (in Blender) by about 50% for a mobile version. Oddly enough, the converted decimated file is significantly larger than the converted non-decimated file. Very strange. Perhaps due to unnecessary exported attributes, as some have reported? I can provide my files if wanted.
I used the glTF-Pipeline tool to convert the gltf to glb afterwards (roughly as in the sketch below). Works great, but the weird file bloat remains.
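For reference, that gltf -> glb step looks roughly like this through gltf-pipeline's Node API (the CLI does the same thing; file names here are placeholders):

```js
// Minimal sketch: convert a .gltf to a binary .glb with gltf-pipeline.
const gltfPipeline = require('gltf-pipeline');
const fsExtra = require('fs-extra');

const gltf = fsExtra.readJsonSync('castle.gltf'); // placeholder file name
gltfPipeline.gltfToGlb(gltf).then((results) => {
  // results.glb is a Buffer containing the binary glTF container.
  fsExtra.writeFileSync('castle.glb', results.glb);
});
```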
So yeah, this mostly does the job, which is good because my animated JSON models do not work at all with LegacyJSONLoader, due to it handling bones completely differently (no initBones, etc.), and I do not like being more than a couple of versions behind the latest three.js build.
Glad it's (almost) working!
There is an
Assuming the export is 1:1 (which isn't guaranteed, with the Geometry->BufferGeometry step involved) I'd expect GLB to be strictly smaller than JSON, with some exceptions for very tiny files and maybe morph targets.
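For context, that step looks roughly like this in the legacy API (from when THREE.Geometry still existed), and it is not structure-preserving: fromGeometry() emits non-indexed triangles, so vertices shared between faces get duplicated, which is one way converted output can grow:

```js
// Sketch of the Geometry -> BufferGeometry step (legacy three.js API).
const THREE = require('three');

const geometry = legacyMesh.geometry; // a THREE.Geometry; 'legacyMesh' is hypothetical
const bufferGeometry = new THREE.BufferGeometry().fromGeometry(geometry);

console.log(bufferGeometry.index);                     // null: no index buffer
console.log(bufferGeometry.attributes.position.count); // 3 * geometry.faces.length
```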
So I may have an idea why the glTF/GLB file size is not reduced as much as expected (or is sometimes larger) compared to the JSON file.
The Blender JSON exporter allows for adjustable float precision (I usually use 4-6 myself). However, since BufferGeometry uses Float32Arrays for its attributes, the precision isn't really alterable and is locked at maximum (found this out while attempting to modify three2gltf.js; thought I was losing my mind).
So anyway, of course 4x the text length from extended floats is going to cause a size increase... right?
I considered just hacking away and truncating the precision in the stringified JSON prior to the file write, something like the sketch below. Not sure if that would work; haven't tried yet.
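A minimal sketch of that hack (the helper name and default precision are just my guess at how it'd look):

```js
// Hypothetical precision hack: round floats while stringifying the JSON,
// before writing it to disk. Note this only shrinks the JSON text; it does
// nothing for the Float32Array data inside a .bin/.glb.
function stringifyWithPrecision(json, precision = 6) {
  return JSON.stringify(json, (key, value) =>
    typeof value === 'number' && !Number.isInteger(value)
      ? Number(value.toFixed(precision))
      : value
  );
}

// Usage: fs.writeFileSync('model.json', stringifyWithPrecision(modelJson, 4));
```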
Attached is an example json/glb/gltf zip.
It's not the vertex data that's larger: the Float32Array binary data is still going to be smaller than the JSON representation, even when the JSON only keeps 4 digits of precision. If you split the file, it's easier to see what's going on:
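Roughly like this with gltf-pipeline (assuming its Node API mirrors the CLI's `--separate` flag; file names are placeholders):

```js
// Split a .glb into .gltf + external resources to see where the bytes go.
const gltfPipeline = require('gltf-pipeline');
const fsExtra = require('fs-extra');

const glb = fsExtra.readFileSync('castle.glb');
gltfPipeline.glbToGltf(glb, { separate: true }).then((results) => {
  fsExtra.writeJsonSync('castle.gltf', results.gltf);
  // results.separateResources maps relative paths (e.g. 'castle.bin') to Buffers.
  for (const uri in results.separateResources) {
    fsExtra.writeFileSync(uri, results.separateResources[uri]);
  }
});
```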
In the file above, all of the vertex data is in the `.bin` file.
Comparing final results:
To create the compressed version, I used gltf-pipeline with Draco compression enabled.
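Roughly like so through the Node API (the compression level shown is the example value from gltf-pipeline's README; `-d` is the CLI equivalent):

```js
// Draco-compress a glTF with gltf-pipeline's Node API.
const gltfPipeline = require('gltf-pipeline');
const fsExtra = require('fs-extra');

const gltf = fsExtra.readJsonSync('castle.gltf'); // placeholder file name
const options = { dracoOptions: { compressionLevel: 10 } };
gltfPipeline.processGltf(gltf, options).then((results) => {
  fsExtra.writeJsonSync('castle.draco.gltf', results.gltf);
});
```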
Ah ok I understand.
I use the gltf-pipeline as well.
Hmm, will have to look into why that glb is invalid; I've tested several using this method and they were OK. Will look into it.
Draco is fantastic at compression but I have found that the decompress time outweighs the advantage, especially for files that are already cached in the browser. Super impressive tech nonetheless.
I have a custom script I'm using to inspect file size, so it could be that I have a bug in my script too, rather than the file being bad. It does open OK for me.
Can't remember if I mentioned this before, but just in case: if you're loading multiple models in parallel, the version of DRACOLoader in #15249 should get you a big decoding speed improvement by using workers. But yeah, I agree it depends on the use case whether that compression is worthwhile.
EDIT: Well, I should benchmark this. In theory it should be faster, but loading the decoder is not optimized yet. So far I've only checked that it doesn't block rendering while decoding.
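For anyone following along, wiring DRACOLoader into GLTFLoader looks roughly like this with today's module-based three.js API (the decoder path and model name are placeholders; point the path at wherever you host the Draco decoder files):

```js
// Sketch: GLTFLoader + DRACOLoader so Draco-compressed glTF can be loaded.
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';
import { DRACOLoader } from 'three/examples/jsm/loaders/DRACOLoader.js';

const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath('/libs/draco/'); // assumed hosting path

const loader = new GLTFLoader();
loader.setDRACOLoader(dracoLoader);
loader.load('castle.glb', (gltf) => scene.add(gltf.scene)); // 'scene' assumed
```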