Optimized 2CylinderEngine and added all variants #4
Conversation
I agree, will make this change.
This was an offline discussion that we had in September I think. The justification was readability and usability. The separate sample models are much more readable for exploring and understanding the structure of glTF, while the embedded ones are more usable since they are contained in a single file and have no external dependencies.
Absolutely.
Given that the base glTF format is already the readable one, isn't there diminishing value in having both separate and embedded versions of each variation? I think it is fine for the base format and the binary format because it helps engines cover all their cases, e.g., external references in binary glTF (there was an issue with this in three.js before we had sample models). I'm just trying to understand whether having two versions each of glTF-KHR_materials_common and glTF-WEB3D_quantized_attributes is worth the burden. It feels like no.
Although I might overlook or underestimate some justification for this, I agree that creating "all variants" can be taken arbitrarily far. A devil's advocate could consider this a first step on the road toward a combinatorial explosion: one can embed Buffers, Shaders, and/or Images in 8 combinations, each of them combined with binary/commonMaterials/quantization, leading to 64 variants... But seriously: there certainly should be test cases for "mixed type assets."
But I think that these are rather conceptual tests that could all be covered with a single model (e.g. the textured box).
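To make the combinatorics above concrete, here is a quick enumeration sketch (the axis names are illustrative labels, not the repo's actual variant names):

```python
from itertools import product

# Each resource type can be embedded or external: 2^3 = 8 combinations.
embedding_axes = ["buffers", "shaders", "images"]
# Each feature can independently be on or off: another 2^3 = 8.
feature_axes = ["binary", "commonMaterials", "quantization"]

embeddings = list(product([False, True], repeat=len(embedding_axes)))
features = list(product([False, True], repeat=len(feature_axes)))

print(len(embeddings))                  # 8 embedding combinations
print(len(embeddings) * len(features))  # 64 total variants
```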
@lasalvavida what did you think of #4 (comment)? @javagl seems to agree, #4 (comment)
I understand the concerns of a slippery slope of having too many variants of models, so let me completely explain the justification for what I have here and we'll work from there. I know it's a bit long; thank you for reading through it. At the bare minimum, I think the sample models should have a single variant for each supported extension, which enables this repo to be used easily for testing extension support. I hope we can agree on that.

The separate/embedded question is a bit more complex. The embedded models are more convenient (drag-and-drop a single file). However, most text editors struggle to read them because of the enormous single-line base64-encoded URIs. For example, the embedded BrainStem model is almost unreadable in Atom. For someone wanting to learn about glTF and explore how the models are implemented, a separate variant seems like a must.

So then, for the examples of implemented extensions, what do we do? We could keep doing what we have been. As a result, I arrive at the conclusion that the best arrangement of the sample models is to include a variant for each extension, and for each of those, a separate and an embedded variant. The embedded variant can be easily dragged-and-dropped and is more convenient to use. The separate variant is more human-readable for exploring the spec. Does that make sense?
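For context, the difference between the two variants comes down to how a `buffers` entry references its data: a separate asset points at an external `.bin` file, while an embedded one inlines the same bytes as a base64 data URI on a single (often very long) line. A minimal sketch of producing the embedded form (the file name and buffer name are illustrative):

```python
import base64
import json

# Raw binary payload that a separate asset would keep in, e.g., "Box.bin".
buffer_bytes = bytes(range(256)) * 4  # 1024 bytes of stand-in geometry data

# Separate variant: the glTF JSON references the external file by name.
separate = {"buffers": {"buffer0": {"uri": "Box.bin", "byteLength": len(buffer_bytes)}}}

# Embedded variant: the same bytes become one long base64 data URI.
data_uri = ("data:application/octet-stream;base64,"
            + base64.b64encode(buffer_bytes).decode("ascii"))
embedded = {"buffers": {"buffer0": {"uri": data_uri, "byteLength": len(buffer_bytes)}}}

# The embedded JSON is self-contained but carries the base64 size overhead.
print(len(json.dumps(separate)), len(json.dumps(embedded)))
```

Round-tripping is lossless: decoding the base64 tail of the data URI recovers the original bytes exactly, which is why an embedded variant can always be regenerated from a separate one (and vice versa).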
It sounds reasonable, and limiting the dimensions of the variants to "Extension X Embedding" will limit the slipperiness of the slope.
Of course, having many valid sample models may help to increase test coverage, broaden loader support, and help implementors quickly test their tools with all these variants and quickly check whether a particular model works well with a specific extension. In any case: for those who contribute models, it will be crucial to have an easy way to create all variants of one model (as Patrick already pointed out). So I do not want to disagree with you, and do not want to propose anything in particular here, but am once more playing devil's advocate: if such an easy way to generate all variants exists, then the importance of actually having all variants of all models in the repo may be lower. Everybody could generate all desired variants quickly, easily, and locally. The following may sound like arguments for or against something, but they are really just points that I thought about in this context:
We do. I've been using the gltf-pipeline project for generating all of these variants.
I'm actually not opposed to this as a solution, but it probably needs to be discussed more. We need to decide what this repository is supposed to be. It would certainly make maintenance and git compatibility better to only include the separate glTF asset and include instructions for generating the other variants with gltf-pipeline.
This repo is sample models for the glTF community. It is not sample models meant to be directly referenced, e.g., git-submoduled, into an engine for tests. An engine may use all or a selection of these models, plus some of their own, for testing. However, we want enough variants here that engines that can load them all have a very good chance of being fully conformant. @lasalvavida perhaps the right approach is to start with the script/instructions for using gltf-pipeline to generate all the variants, and then decide what models to include here? Your readable-vs.-convenience points, #4 (comment), are good. If models only had one variation, we would have to decide what that would be; if binary glTF is widely supported, which we would like, then that would likely have the best trade-offs.
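As a concrete illustration of that workflow, variant generation could be scripted around the gltf-pipeline CLI. Exact flag names vary between gltf-pipeline releases, so the invocations below are only an illustrative sketch and should be checked against `gltf-pipeline --help`:

```sh
# Illustrative only -- flag names differ across gltf-pipeline versions.
npm install -g gltf-pipeline

# Keep the separate (human-readable) asset as the checked-in source of truth,
# and generate the other variants locally from it:
gltf-pipeline -i 2CylinderEngine.gltf -o 2CylinderEngine-embedded.gltf  # embedded variant
gltf-pipeline -i 2CylinderEngine.gltf -o 2CylinderEngine.glb            # binary glTF
```

This keeps the repository small and diff-friendly while still letting anyone reproduce every variant on demand.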
Just found that VS Code handles such files w/o any issues (with coloring, folding, etc). |
There are several editors that can work with such long lines (e.g. https://www.textpad.com/ opens it smoothly as well), but the general statement is true: many editors will bail out. I'm not sure how important this is, because the files are not "supposed to be edited". They should only be a way to transfer the whole data at once, to be directly loaded by a renderer. Apart from that, for such large files the overhead of the base64 encoding becomes an issue: for BrainStem, it's 15.4 MB vs. 11.4 for the
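The size figures quoted there are consistent with base64's 4/3 expansion: every 3 bytes of binary become 4 ASCII characters (plus padding), so an ~11.4 MB payload encodes to roughly 15.2 MB before any JSON framing. A quick check:

```python
import base64

def base64_length(n_bytes):
    # base64 emits 4 output characters for every 3 input bytes, padded up.
    return 4 * ((n_bytes + 2) // 3)

# Sanity check against the real encoder on an arbitrary payload.
assert len(base64.b64encode(b"x" * 1000)) == base64_length(1000)

raw_mb = 11.4
encoded_mb = base64_length(int(raw_mb * 1024 * 1024)) / (1024 * 1024)
print(round(encoded_mb, 1))  # 15.2, in line with the 15.4 MB figure above
```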
I'm going to close this set of pull requests for now in favor of the current organization. |