Accessing mesh data as a user #7
Comments
I fully agree. My C++ project was mostly to fiddle around with Nvidia's OptiX, the Vulkan RTX extensions, and the like, so the architecture is currently rather limited. I feel like this project could be much more, but I am uncertain about how to extend it. I am good with rendering pipelines but don't have much experience with writing full game engines, haha.

I have been writing a gfx backend in a private repo because I ran into some limitations: the GPU path tracer is extremely slow because wgpu's buffer mapping is extremely slow (it only works well if everything can be asynchronous); gfx-rs does not have this problem. It also lets us optimize much more for specific usages. I think it would be best if we could set up some kind of modular rendering pipeline and thus also have a more easily extensible scene setup - a bit like how Unity does it, where you can create your own compute pipelines, shaders, etc.

I really like your ideas; I don't know how to approach them though, so I am open to any suggestions. The reason the RenderSystem exists is to have an easy way of syncing the scene and the renderer. Perhaps we should divide it up into 3 separate systems: a resource manager, a synchronize thingy (for lack of a better word), and the renderer itself?
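As a discussion aid, a minimal sketch of what that three-way split might look like - every name here (ResourceManager, SceneSynchronizer, Renderer) is hypothetical, nothing below exists in the codebase:

```rust
// Hypothetical decomposition of RenderSystem into three parts.
// Placeholder asset types, just enough to make the shape concrete.
pub struct Mesh;
pub struct Material;

/// Owns and caches immutable asset data (meshes, materials, textures).
pub struct ResourceManager {
    meshes: Vec<Mesh>,
    materials: Vec<Material>,
}

/// The "synchronize thingy": tracks which scene objects changed since
/// the last frame and pushes only those changes to the renderer.
pub struct SceneSynchronizer {
    dirty_meshes: Vec<usize>,
    dirty_instances: Vec<usize>,
}

/// Backend-specific renderer (wgpu, gfx, ...); knows nothing about
/// scene bookkeeping or asset caching.
pub trait Renderer {
    fn upload_mesh(&mut self, id: usize, mesh: &Mesh);
    fn render(&mut self);
}
```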
Yeah, I think this project has a lot of potential! What I really want in Rust is a modern version of Irrlicht, if you remember that library: just a batteries-included renderer but nothing more - no physics engine or editor or scripting or any of that; integrations for those things can live elsewhere.

The gfx thing sounds neat. I think it'd be sad to give up the possibility of eventually running this on the web using wgpu, but faster ray tracing without HW support does sound very cool. The slowness of mapping might be something to ask about in the wgpu IRC; they are quite active there.

I can take a stab at "loading glTFs + objs into a standalone object that is later added to the scene" and we can see how you like it. I don't mind throwing work out if it's no good.
I have heard about it but never actually used it; I'll have a look at its source code and see if I can get some inspiration from it. The thing about gfx is that it is supposed to support WebGPU as well - I believe wgpu-rs supports WebGPU by using gfx as its backend. I wanted to see what it was about, and I would like to be able to use ray tracing APIs at some point as well; a lower-level API has a higher chance of getting that access (I might even give it a shot myself, just like dawn has a ray tracing fork). It's currently not the case, but I want to share shaders between the wgpu-rs and gfx renderers. The wgpu-rs backend isn't going away anytime soon - it's much easier to prototype things in wgpu-rs than in gfx, so even just for that it's great to have that backend.

Sounds good, I really appreciate the time you put into this project. Whatever you come up with would be great; then at least we have something to start with 😄
Oh cool, I didn't know gfx had a wgpu backend, nice!

Just want to pop in and say I'm still working on this, still getting acquainted with all the internals. I'm wondering if there's a way to decouple animations from the node graph. Here's what I'm thinking so far:

```rust
pub struct NodeDescriptor {
    pub name: String,
    pub child_nodes: Vec<NodeDescriptor>,
    pub translation: Vec3A,
    pub rotation: Quat,
    pub scale: Vec3A,
    pub meshes: Vec<NodeMeshDescriptor>,
    pub skin: Option<SkinDescriptor>,
    pub weights: Vec<f32>,
}

#[derive(Debug, Clone)]
pub enum NodeMeshDescriptor {
    Static(Mesh),
    Animated(AnimatedMesh),
}

#[derive(Debug, Clone)]
pub struct SkinDescriptor {
    pub name: String,
    pub inverse_bind_matrices: Vec<Mat4>,
}

#[derive(Debug, Clone)]
pub struct AnimationDescriptor {
    pub name: String,
    // (joint index, animation channel)
    pub channels: Vec<(u32, Channel)>,
}
```

When instantiating a NodeDescriptor, joint indices would be mapped to node IDs at that time. Sound ok?
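To make "joint indices mapped to node IDs at instantiation" concrete, a rough sketch of how that walk could go; the `Scene` stand-in below is invented for illustration (transforms omitted for brevity):

```rust
// Minimal stand-in scene, just enough to show the idea.
struct Scene {
    names: Vec<String>,
    parents: Vec<Option<u32>>,
}

impl Scene {
    fn allocate_node(&mut self, name: &str) -> u32 {
        let id = self.names.len() as u32;
        self.names.push(name.to_string());
        self.parents.push(None);
        id
    }
    fn set_parent(&mut self, child: u32, parent: u32) {
        self.parents[child as usize] = Some(parent);
    }
}

// Walk the descriptor tree, allocate real node IDs, and record the
// descriptor-order joint index -> node ID mapping, so that animation
// channels stored as (joint index, Channel) can be retargeted later.
fn instantiate(scene: &mut Scene, desc: &NodeDescriptor, joint_map: &mut Vec<u32>) -> u32 {
    let node_id = scene.allocate_node(&desc.name);
    joint_map.push(node_id);
    for child in &desc.child_nodes {
        let child_id = instantiate(scene, child, joint_map);
        scene.set_parent(child_id, node_id);
    }
    node_id
}
```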
To answer your questions:
I think what you have here sounds good. I have done some work to defer loading/initializing scenes from glTF files, which I think is sort of what you're looking for. Basically, files now result in either a single mesh or a scene. If the loaded file results in a scene, it is a NodeGraph that you add to the system by calling "add_scene". The scene now has a SceneGraph struct (should probably rename that, still a WIP) that contains a list of NodeGraphs.
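From the caller's side, that "mesh or scene" result might look something like the sketch below; the enum name and stub types are guesses, only add_scene is taken from the comment above:

```rust
// Stub types so the sketch stands on its own.
pub struct Mesh;
pub struct NodeGraph;

// Hypothetical result of loading a file: a lone mesh (e.g. an .obj)
// or a whole scene as a NodeGraph (e.g. a glTF).
pub enum LoadResult {
    Mesh(Mesh),
    Scene(NodeGraph),
}

// The SceneGraph keeps a list of NodeGraphs; pushing into that list
// plays the role of "add_scene" here.
fn handle(result: LoadResult, scene_graphs: &mut Vec<NodeGraph>) {
    match result {
        LoadResult::Scene(graph) => scene_graphs.push(graph),
        LoadResult::Mesh(_mesh) => { /* register as a single standalone mesh */ }
    }
}
```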
Regarding decoupling animations from the node graph - I do like having the joints as nodes (how it is now), since then it is straightforward to e.g. attach a sword to the player's hand joint using the node graph, which the user will already be familiar with.

Nice, just saw the multiple-node-graphs stuff. That is a nice idea, and I think it does give me what I was originally looking for. The approach I was going to go for was to produce a standalone node graph from the loader and be able to load that into the single scene.

I understand it's a WIP, so maybe you have plans for these things, but I do have some initial thoughts.
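Since "attach a sword to the hand joint" is the motivating example for joints-as-nodes, a sketch of what that could look like; the SceneApi trait and every method on it are invented for illustration:

```rust
type NodeId = u32;
type MeshId = u32;

// Invented API surface: the point is only that joints being ordinary
// nodes makes attaching gear plain node-graph manipulation.
trait SceneApi {
    fn find_node_by_name(&self, root: NodeId, name: &str) -> Option<NodeId>;
    fn add_child(&mut self, parent: NodeId) -> NodeId;
    fn attach_mesh(&mut self, node: NodeId, mesh: MeshId);
}

fn attach_sword(scene: &mut dyn SceneApi, player_root: NodeId, sword: MeshId) {
    // The hand joint is just another node in the player's subtree.
    let hand = scene
        .find_node_by_name(player_root, "hand_R")
        .expect("skeleton has a right-hand joint");
    // A child node under the joint inherits the joint's animated transform.
    let holder = scene.add_child(hand);
    scene.attach_mesh(holder, sword);
}
```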
I get what you mean. I had been thinking of removing instances altogether and essentially just having nodes with the possibility of having instances (and thus node ~= instance; it would make administration of which instances exist much harder, though). Yes, multiple animations should be supported; I think it's already a possibility according to the glTF spec (not sure about that, I couldn't find an explicit explanation).

I think moving/copying nodes is relatively easy to do. The hard part is updating IDs correctly; we could also start storing nodes inside nodes so that we do not have to keep track of IDs anymore. I think both options are doable and I don't have a preference for either design. The only problem I see is that cloning NodeGraphs can become expensive because of the potentially huge amount of data in a node structure. Also, duplicating a node should allocate a new instance, so a simple clone does not suffice (this is easy to solve though).

Overall, I think it would be best to not have a separate NodeGraph struct anymore and instead just have nodes. It would remove a layer of abstraction and make most of what we're trying to achieve easier, I think. We'd just end up with a SceneGraph and a whole bunch of nodes that do not have references to each other; glTF files would result in a few root nodes that you just add to the SceneGraph to render them. I'm open to suggestions though. I'll see if I can make a branch this week where nodes have a list of nodes instead of IDs and see if it makes any sense, haha
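To make the trade-off concrete, the two layouts being compared, side by side (names invented):

```rust
// (a) Current style: flat storage, children referenced by ID. Any node
// can be addressed cheaply by index, but IDs must be fixed up whenever
// a subtree is copied into another graph.
struct IdNode {
    children: Vec<u32>, // indices into IdScene::nodes
}
struct IdScene {
    nodes: Vec<IdNode>,
}

// (b) Proposed style: nodes own their children directly. Copying a
// subtree is a plain clone, but there is no stable handle to an inner
// node - it can only be reached by walking down from a root.
#[derive(Clone)]
struct OwnedNode {
    children: Vec<OwnedNode>,
}
```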
When you say remove the NodeGraph struct: I still like having the ID approach. I think it will be hard to refer to non-root nodes if nodes store their children directly.
Instances are just individual meshes to be rendered, right? And a node can have multiple instances associated with it? I kind of like the current separation of nodes and instances.
Yes, I was thinking of having child nodes stored in a container in the nodes themselves, but you're right about the references; I hadn't thought about that... True, it's easier to have instances as a separate thing. I sometimes wonder whether it would be easier to just implement everything according to the glTF specification and basically let users set up scenes in e.g. Blender, but I don't know whether that's a good idea.
I think what you have right now is quite close to glTF - list of meshes, list of skins, list of nodes, and things refer to each other by ID. TBH I quite like the current design - some small things could be moved around, but the overall structure is good, I think. I just want to be able to load scenes into a standalone object, then load (perhaps only pieces of) that scene into the active scene. I also want to be able to re-arrange the node hierarchy freely, and to manually create nodes and manually attach meshes to them. A sketch of that workflow follows below.

I'll try to come up with a PR for the above.
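The requested workflow, sketched in hypothetical API terms (every name below is invented; NodeDescriptor is the type proposed earlier in this thread):

```rust
// Invented scene operations, just enough to express the workflow.
trait SceneOps {
    fn spawn(&mut self, desc: &NodeDescriptor) -> u32;
    fn set_parent(&mut self, child: u32, parent: u32);
}

// Load a file into a standalone NodeDescriptor first (nothing enters
// the scene), then instantiate only the piece you want and re-parent
// it freely.
fn place_lamp(scene: &mut dyn SceneOps, level: &NodeDescriptor, table_node: u32) {
    if let Some(lamp) = level.child_nodes.iter().find(|n| n.name == "lamp") {
        let lamp_id = scene.spawn(lamp);
        scene.set_parent(lamp_id, table_node);
    }
}
```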
I agree with all your points. The only reason I had to wrap it all in Mutexes is that I wanted to make it possible to load in assets asynchronously. I have been trying to come up with better solutions than what we currently have, as it is not very ergonomic; I agree that a queue system might indeed be a good option.
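One way such a queue could replace the Mutexes, as a sketch (the types are stand-ins, not existing code): loader threads send finished assets over a channel, and the main thread drains them at one well-defined point per frame:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

// Stand-in for whatever the loader actually produces.
pub struct Mesh;

pub struct AssetQueue {
    tx: Sender<Mesh>,
    rx: Receiver<Mesh>,
}

impl AssetQueue {
    pub fn new() -> Self {
        let (tx, rx) = channel();
        AssetQueue { tx, rx }
    }

    /// Cloneable handle for background loader threads.
    pub fn sender(&self) -> Sender<Mesh> {
        self.tx.clone()
    }

    /// Called once per frame on the main thread, e.g. during synchronize.
    pub fn drain(&self, scene_meshes: &mut Vec<Mesh>) {
        for mesh in self.rx.try_iter() {
            scene_meshes.push(mesh);
        }
    }
}

fn example() {
    let queue = AssetQueue::new();
    let tx = queue.sender();
    thread::spawn(move || {
        // ... parse a glTF file off the main thread ...
        tx.send(Mesh).ok();
    });
    let mut meshes = Vec::new();
    queue.drain(&mut meshes); // main thread; no Mutex needed
}
```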
Hey there, sorry it has been a while - I've been busy doing some DIY house stuff and moving into our new house. I do still plan to work on this. We finish moving this weekend, so hopefully I will have time next weekend. The plan right now is to write an example of an animated character walking around a tri-mesh level using the new rapier physics engine, maybe with a few convex-hull meshes tumbling around too. Hopefully that will give me some ideas for how to polish the rfw API a bit more. I have started on it but have nothing to show yet.
No problem, I've had some personal things as well, and I need to start finishing my thesis... so that's why development has stalled a bit from my side too.
Right now, unless I'm missing something, there doesn't seem to be a way to get at vertex / index data from a `RenderSystem` - its `TriangleScene` field would provide this, but the field is private. Also there doesn't seem to be a way to selectively load nodes from a glTF file - they all get loaded directly into the scene.

I was thinking it could be nice to load scenes / meshes into a stand-alone object that stores all the data and could later be added to the scene. Would probably still want it to be tied to a resource manager so that things could be cached, since most material / mesh data is immutable. But that resource manager can be decoupled from the rest of the `RenderSystem`.

The reason for this is I want to write some `physx` integration. I guess in most real-world use-cases you'd probably load a separate, lower-poly mesh for e.g. the terrain's triangle mesh, in which case maybe I should just write a separate glTF loader that just loads the mesh data for physics.

Curious to hear your thoughts!
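The kind of accessor being asked for, sketched (none of these names exist today); raw vertex/index slices are exactly what a physics engine needs to cook a triangle-mesh collider:

```rust
// Invented read-only view over a loaded mesh's geometry.
pub struct MeshData<'a> {
    pub vertices: &'a [[f32; 3]],
    pub indices: &'a [u32],
}

// Invented accessor trait a RenderSystem-like type could expose.
trait MeshAccess {
    /// Borrow the geometry of a loaded mesh without copying it.
    fn mesh_data(&self, mesh_id: usize) -> Option<MeshData<'_>>;
}

fn build_physics_trimesh(system: &impl MeshAccess, mesh_id: usize) {
    if let Some(data) = system.mesh_data(mesh_id) {
        // Hand data.vertices / data.indices to PhysX or rapier to cook
        // a static triangle-mesh collider for e.g. terrain.
        let _ = (data.vertices.len(), data.indices.len());
    }
}
```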