Proposed converter and optimization pipeline #55
Also, the diagram doesn't show all the third-party libraries, e.g., image conversion, texture compression, mesh compression, etc.
Yes, COLLADA Refinery is very useful there, at least the source code. I agree that the transforms should be COLLADA-in / COLLADA-out.
I know that code well; it's extremely dependent on the DOM and makes a lot of allocations/reallocations. glTF is not just about converting COLLADA, so having these functions separated (as they are in glTF) makes them reusable for converting FBX / OBJ... But yes, it's a plus if the COLLADA model comes well-formatted already.
@RemiArnaud this assumes that people will be willing to convert assets to COLLADA, which is not always the case, or they might just be missing an exporter...
Ideally, a glTF lib separate from OpenCOLLADA would allow us to centralise all these transforms (and maybe live in a C++ meshtool lib).
That's my point: FBX/OBJ/DAE -> COLLADA -> preprocess -> COLLADA -> glTF. That's the minimum amount of work.
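The COLLADA-in / COLLADA-out convention proposed above means preprocessing stages compose freely before the single final glTF conversion. A minimal illustrative sketch, where all function names and dictionary keys are invented for this example and do not reflect the actual converter code:

```python
# Hypothetical sketch of the proposed pipeline: every preprocessing
# stage is COLLADA-in / COLLADA-out, so stages can be chained in any
# order before the one back end that emits glTF.

def import_to_collada(path):
    """Front end: FBX/OBJ/DAE all normalize to a COLLADA document."""
    return {"format": "collada", "source": path}

def triangulate(dae):
    """One example of a COLLADA-in / COLLADA-out transform."""
    dae = dict(dae)
    dae["triangulated"] = True
    return dae

def collada_to_gltf(dae):
    """Back end: the only stage that emits glTF."""
    return {"format": "gltf", "from": dae["source"]}

def convert(path, transforms=(triangulate,)):
    dae = import_to_collada(path)
    for transform in transforms:  # any subset of transforms, any order
        dae = transform(dae)
    return collada_to_gltf(dae)

print(convert("model.fbx"))  # {'format': 'gltf', 'from': 'model.fbx'}
```

Because each transform has the same signature, adding a new preprocessing step never changes the front or back end of the pipeline.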
If I am following correctly:
Is (3) the only thing we need to flesh out?
I'm OK with this not being a priority now (although I just see it getting harder as we write more code), but it would be good to converge on a direction, especially if it helps direct outside contributions. It sounds like you are suggesting that the JSON objects would be the DOM. I don't see how this is possible because we want to optimize much more than what is defined by the JSON objects, e.g., triangulation.
Well, reading this again... We are just brainstorming here, but I don't see why working directly with the JSON objects would be a problem. It would give a consistent objects/API whether you work on an object you just imported from, say, OpenCOLLADA or FBX, and even on a glTF asset you already have, if you want to optimize it further.
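One way to read "working directly with the JSON objects": every optimization pass is JSON-in / JSON-out over a glTF-style dictionary, so the same API applies regardless of where the asset came from. A hypothetical sketch; the keys and the pass itself are illustrative, not the actual glTF schema:

```python
# Hypothetical sketch: an optimization pass as a JSON-in / JSON-out
# function over a glTF-style dict. The dictionary layout here is
# invented for illustration and is not the real glTF schema.

def remove_unused_materials(asset):
    """Drop materials no primitive references."""
    used = {p["material"]
            for mesh in asset["meshes"]
            for p in mesh["primitives"]}
    asset = dict(asset)
    asset["materials"] = {name: mat
                          for name, mat in asset["materials"].items()
                          if name in used}
    return asset

asset = {
    "materials": {"red": {}, "blue": {}},
    "meshes": [{"primitives": [{"material": "red"}]}],
}
optimized = remove_unused_materials(asset)
print(sorted(optimized["materials"]))  # ['red']
```

Since a pass both consumes and produces the same JSON shape, passes can run on freshly imported assets or on existing glTF files alike.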
Agreed, but the line is blurry. I see almost everything as an optimization or "conditioning." For example, triangulation is really an optimization because it adds very little overhead (just indices) and makes the model much easier to load at runtime. I would make the distinction that anything that can fit in the optimization pipeline should be part of the optimization pipeline, to make it useful for the widest possible audience and still useful to glTF, of course. I'm still working through my notes, but I'll have a better idea of what goes where soon; e.g., I'm on the fence about shaders right now, but they are probably glTF-specific because of the metadata. Also, some optimizations might need to happen very late in the pipeline; for example, vertex cache optimization requires deindexing, so the interop between the optimization pipeline and the glTF converter needs to be good.
For triangulation, our JSON representation doesn't define a [...]. Give me a day or so to go over my notes on the potential stages, including what you have on the wiki, which should help this discussion.
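The deindexing step mentioned above (as a prerequisite for vertex cache optimization) can be sketched in a few lines: expand an indexed triangle list so every triangle owns its own copies of the vertices. This is a minimal illustration with invented data, not the converter's actual implementation:

```python
# Hypothetical sketch of deindexing: turn an indexed triangle list
# into a flat vertex list with no sharing, so later passes (e.g.,
# vertex cache optimization followed by re-indexing) can reorder freely.

def deindex(positions, indices):
    """positions: list of (x, y, z); indices: flat triangle index list."""
    return [positions[i] for i in indices]

positions = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
indices = [0, 1, 2, 2, 1, 3]          # two triangles sharing an edge
flat = deindex(positions, indices)
print(len(flat))  # 6 vertices: shared vertices are duplicated
```

The cost is duplicated vertex data, which is why such a pass belongs late in the pipeline, with re-indexing afterwards.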
We've done some mockup diagrams and had some discussions, but here is a proposed fleshed-out architecture for the converter and optimization pipeline. Once we reach agreement, I suggest making changes sooner, when they will be easier, rather than later.
The guiding principles are: