
Accessing mesh data as a user #7

Open
tedsta opened this issue Aug 17, 2020 · 14 comments
Comments

@tedsta
Contributor

tedsta commented Aug 17, 2020

Right now, unless I'm missing something, there doesn't seem to be a way to get at vertex/index data from a RenderSystem; its TriangleScene field would provide this, but the field is private.

Also there doesn't seem to be a way to selectively load nodes from a glTF file - they all get loaded directly into the scene.

I was thinking it could be nice to load scenes/meshes into a standalone object that stores all the data and could later be added to the scene. We'd probably still want it tied to a resource manager so that things can be cached, since most material/mesh data is immutable. But that resource manager could be decoupled from the rest of the RenderSystem.

The reason for this is that I want to write some PhysX integration. I guess in most real-world use cases you'd probably load a separate, lower-poly mesh for e.g. the terrain's triangle mesh, in which case maybe I should just write a separate glTF loader that loads only the mesh data for physics.

Curious to hear your thoughts!

@meirbon
Owner

meirbon commented Aug 17, 2020

I fully agree. My C++ project was mostly to fiddle around with Nvidia's OptiX, the Vulkan RTX extensions, and the like. Therefore the architecture is currently fairly limited. I feel like this project could be much more, but I am uncertain about how to extend it. I am good with rendering pipelines but don't have much experience with writing full game engines haha.

I have been writing a gfx backend in a private repo because I ran into some limitations: e.g. the GPU path tracer is extremely slow because wgpu's buffer-mapping pipeline is extremely slow (it only works well if everything can be done asynchronously); gfx-rs does not have this problem. It also lets us optimize much more for specific usages. I think it would be best if we could set up some kind of modular rendering pipeline and thus also have a more easily extensible scene setup. This would be a bit like how Unity does it, where you can create your own compute pipelines, shaders, etc.

I really like your ideas, I don't know how to approach them though so I am open to any suggestions. The reason the RenderSystem exists is to have an easy way of syncing the scene and the renderer. Perhaps we should divide this up into 3 separate systems? So a resource manager, a synchronize thingy (for lack of a better word), and the renderer itself?

@tedsta
Contributor Author

tedsta commented Aug 18, 2020

Yeah I think this project has a lot of potential! What I really want in Rust is a modern version of Irrlicht, if you remember that library. Just a batteries-included renderer but nothing more - no physics engine or editor or scripting or any of that - integrations for those things can live elsewhere.

The gfx thing sounds neat. I think it'd be sad to give up the possibility of eventually running this on the web using wgpu. But faster raytracing without HW support does sound very cool. The slowness of mapping might be something to ask about in the wgpu IRC, they are quite active there.

I can take a stab at "loading glTFs + objs into a standalone object that is later added to the scene" and we can see how you like it. I don't mind throwing work out if it's no good.

@meirbon
Owner

meirbon commented Aug 18, 2020

I have heard about it but never actually used it. I'll have a look at its source code and see if I can get some inspiration from it.

The thing about gfx is that it is supposed to support WebGPU as well; I believe wgpu-rs supports WebGPU by using gfx as its backend. I wanted to see what it was about, and I would like to be able to use ray tracing APIs at some point too; a lower-level API has a higher chance of getting that access (I might even give it a shot myself, just like dawn has a ray tracing fork). Shaders aren't shared between the wgpu-rs and gfx renderers yet, but I want them to be. The wgpu-rs backend isn't going away anytime soon; it's much easier to prototype things in wgpu-rs than in gfx, so even just for that it's great to have that backend.

Sounds good, I really appreciate the time you put into this project. Whatever you come up with would be great, then at least we have something to start with 😄

@tedsta
Contributor Author

tedsta commented Aug 24, 2020

Oh cool I didn't know gfx had a wgpu backend, nice!

Just want to pop in and say I'm still working on this, still getting acquainted with all the internals. I'm wondering if there's a way to decouple animations from the NodeGraph system (like add it as a layer on top rather than having it be integrated inside). But for now I plan to leave it as is.

Some notes:

  • The Channel::sampler_ids field seems unused?
  • The scene::graph::animation::Sampler struct seems unused?
  • The Node::weights field also looks unused? It is animated, but I don't see the end result being used anywhere.

Here's what I'm thinking so far:

pub struct NodeDescriptor {
    pub name: String,
    pub child_nodes: Vec<NodeDescriptor>,

    pub translation: Vec3A,
    pub rotation: Quat,
    pub scale: Vec3A,

    pub meshes: Vec<NodeMeshDescriptor>,
    pub skin: Option<SkinDescriptor>,
    pub weights: Vec<f32>,
}

#[derive(Debug, Clone)]
pub enum NodeMeshDescriptor {
    Static(Mesh),
    Animated(AnimatedMesh),
}

#[derive(Debug, Clone)]
pub struct SkinDescriptor {
    pub name: String,
    pub inverse_bind_matrices: Vec<Mat4>,
}

#[derive(Debug, Clone)]
pub struct AnimationDescriptor {
    pub name: String,
    // (joint index, animation channel)
    pub channels: Vec<(u32, Channel)>,
}

When instantiating a NodeDescriptor, joint indices would be mapped to node IDs at that time. I would remove the rfw_scene::graph::animation::Channel::node_id field and instead store channels as a Vec<(u32, Channel)>, where the u32 is the node_id (maybe make a NodeId struct for clarity? And AnimationId, SkinId, etc.?). This way Channel can be reused as-is in the *Descriptor objects.
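A minimal sketch of that remapping step, assuming a NodeId newtype as suggested; Channel here is just a unit stand-in for the real rfw_scene::graph::animation::Channel:

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct NodeId(pub u32);

#[derive(Debug, Clone)]
pub struct Channel; // unit stand-in for rfw_scene::graph::animation::Channel

/// At instantiation time, map descriptor-local joint indices (u32) to the
/// scene's NodeIds; this is where the removed Channel::node_id would be
/// filled in instead.
pub fn remap_channels(
    joint_to_node: &[NodeId],
    channels: Vec<(u32, Channel)>,
) -> Vec<(NodeId, Channel)> {
    channels
        .into_iter()
        .map(|(joint, channel)| (joint_to_node[joint as usize], channel))
        .collect()
}
```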

Sound ok?

@meirbon
Owner

meirbon commented Aug 24, 2020

To answer your questions:

  • Animations: I would like to be able to decouple animations from the node structure, but I have no idea how to achieve that. Some nodes are actual scene nodes while others are really bones, and I don't know how to tell them apart. It would be nice if we could detect skins/skeletons but, again, I don't know how we would approach something like that.
  • Sampler: I might have forgotten about that. If it's not used, it can be removed.
  • Weights: These have to stay. They will be used for morph targets which have been implemented in my C++ project but not here yet. My C++ project applies animations on the CPU which is less performant but it makes things like morph targets easier to implement. I'll try and see if I can come up with a GPU-compatible solution for morph targets next weekend.

I think what you have here sounds good. I have done some work to defer loading/initializing scenes from glTF files, which I think is sort of what you're looking for. Files now result in either a single mesh or a scene. If the loaded file results in a scene, it is a NodeGraph that you add to the system by calling "add_scene". The scene now has a SceneGraph struct (should probably rename that, still a WIP) that contains a list of NodeGraphs.
This also means that indices in a glTF scene can easily be preserved, which should make your descriptor approach easier as well, I think?

@tedsta
Contributor Author

tedsta commented Aug 25, 2020

Regarding decoupling animations from the node graph: I do like having the joints as nodes (how it is now), since it is then straightforward to e.g. attach a sword to the player's hand joint using the node graph, which the user will already be familiar with. But it would be nice, I think, if Node::skin, Node::weights, and maybe even Node::meshes lived somewhere else, so that Nodes are just hierarchical objects with a transform and nothing else. I'll give it some thought to see if there is a nice way to do it, but meh, it doesn't matter too much.

Nice just saw the multiple-node-graphs stuff. That is a nice idea. I think that does give me what I was originally looking for. Maybe we don't need the *Descriptors stuff if we can get all the features we need out of this?

The approach I was going to go for was to produce a SceneDescriptor that might look something like this:

struct SceneDescriptor {
    root_nodes: Vec<NodeDescriptor>,
    animations: Vec<AnimationDescriptor>,
}

And be able to load that into the single NodeGraph, or even selectively load just parts of it into the NodeGraph.
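To make the "selectively load just parts of it" idea concrete, a rough sketch; all types here are simplified stand-ins, not the real rfw structs:

```rust
#[derive(Debug, Clone)]
pub struct NodeDescriptor {
    pub name: String,
    pub child_nodes: Vec<NodeDescriptor>,
}

#[derive(Debug, Clone)]
pub struct SceneDescriptor {
    pub root_nodes: Vec<NodeDescriptor>,
}

impl SceneDescriptor {
    /// Pick out only the root nodes whose names match, so the caller can
    /// instantiate just those subtrees into the active NodeGraph.
    pub fn select<'a>(&'a self, names: &[&str]) -> Vec<&'a NodeDescriptor> {
        self.root_nodes
            .iter()
            .filter(|node| names.contains(&node.name.as_str()))
            .collect()
    }
}
```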

I understand it's WIP so maybe you have plans for these things but these are my initial thoughts:

  • Probably want to support having multiple active animations in a single NodeGraph? I could see having a glTF file with separate animations for e.g. windmills or other mechanical structures in the background.
  • Can you move nodes from one NodeGraph to another? E.g. say I loaded a skinned object and wanted to attach it to a joint on the character's body, which is in a separate NodeGraph. It could get expensive to translate node IDs between NodeGraphs if that happens a lot; there is a nice simplicity to having only a single "address space" for node IDs. But I might be overestimating the cost. Maybe we can just say "nodes can't move between node graphs", and if you want to move nodes around the hierarchy you have to put them in the same graph. Then maybe we have both "multiple node graphs" and "descriptors" as orthogonal features?

@meirbon
Owner

meirbon commented Aug 25, 2020

I get what you mean. I had been thinking of removing instances altogether and essentially just having nodes with the possibility of having instances (so that a node ~= an instance; it would make administration of which instances exist much harder, though).

Yes, multiple animations should be supported; I think it's already a possibility according to the glTF spec (not sure about that, I couldn't find an explicit explanation).

I think moving/copying nodes is relatively easy to do; the hard part is updating IDs correctly. We could also start storing nodes inside nodes so that we don't have to keep track of IDs anymore. I think both options are doable, and I don't have a preference for either design. The only problem I see is that cloning NodeGraphs can become expensive because of the potentially huge amount of data in a node structure. Also, duplicating a node should allocate a new instance, so a simple clone does not suffice (this is easy to solve, though).

Overall, I think it would be best to not have a separate NodeGraph struct anymore and instead just have nodes. It would remove a layer of abstraction and make most of what we're trying to achieve easier, I think. We'd just end up with a SceneGraph and a whole bunch of nodes that do not have references to each other. glTF files would result in a few root nodes that you just add to the SceneGraph to render them. I'm open to suggestions though.

I'll see if I can make a branch this week where nodes have a list of nodes instead of IDs and see if it makes any sense haha

@tedsta
Contributor Author

tedsta commented Aug 25, 2020

When you say remove the NodeGraph struct, do you mean just have TrackedStorage<Node> and still having nodes refer to their children by ID, or do you mean actually storing the data for children within the Node itself?

I still like the ID approach; I think it will be hard to refer to non-root nodes if nodes store their children directly, e.g. child_nodes: Vec<Node>. In that case you'd probably need something like a struct NodePath(Vec<NodeChildIndex>);, which would add a lot of indirection when referring to deeply nested nodes (and as a user I would want to refer to joint nodes).
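To illustrate the indirection that nested ownership would force, a hypothetical sketch (these are not the real rfw types); every lookup has to walk the hierarchy from a root:

```rust
#[derive(Debug)]
pub struct Node {
    pub name: String,
    pub child_nodes: Vec<Node>,
}

/// Indices into successive `child_nodes` lists, root to leaf.
#[derive(Debug, Clone)]
pub struct NodePath(pub Vec<usize>);

/// Walk the path one level at a time; returns None if any index is out of
/// bounds. With ID-based storage this would be a single array lookup instead.
pub fn resolve<'a>(root: &'a Node, path: &NodePath) -> Option<&'a Node> {
    let mut current = root;
    for &index in &path.0 {
        current = current.child_nodes.get(index)?;
    }
    Some(current)
}
```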

@tedsta
Contributor Author

tedsta commented Aug 25, 2020

Instances are just individual meshes to be rendered, right? And a node can have multiple instances associated with it? I kind of like the current separation of nodes and instances.

@meirbon
Owner

meirbon commented Aug 25, 2020

Yes, I was thinking of having child nodes stored in a container in the nodes themselves, but you're right about the references. I hadn't thought about that...

True, it's easier to have instances as a separate thing. I sometimes wonder whether it would be easier to just implement everything according to the glTF specification and basically let users set up scenes in e.g. Blender, but I don't know whether that's a good idea.

@tedsta
Contributor Author

tedsta commented Aug 26, 2020

I think what you have right now is quite close to glTF: a list of meshes, a list of skins, a list of nodes, with things referring to each other by ID. TBH I quite like the current design; some small things could be moved around, but the overall structure is good, I think. I just want to be able to load scenes into a standalone object, then load (perhaps only pieces of) that scene into the active scene. I also want to be able to rearrange the node hierarchy freely, and to manually create nodes and manually attach meshes to them.

Some more opinions + ideas:

  • I like that instances are separate from nodes - then renderer backends don't have to worry about the scene graph, they just handle rendering instances.
  • I think RenderSystem::scene could be public to allow easy access to materials, meshes, nodes, etc.
    • Maybe hide Instance concept from users? Users could just add and remove a MeshId from a NodeId, and instances would be manipulated automatically.
    • Probably make all the Scene fields pub(crate) instead of pub, only allow user access through methods.
  • Instead of wrapping every TrackedStorage<T> in a Mutex, I wonder if it could be more performant to queue up mutations to the scene in a set of lock-free command queues (a separate queue for each command type, e.g. AddNodeCommand), then apply them all during synchronize. That way Scene would only be mutated from the thread doing the synchronizing. I doubt contention on the locks will ever be very high, though, so in the 99.9% case the atomic operation to acquire the lock is probably faster than the atomic operation to enqueue the command plus copying the command data into the queue's heap buffer. But it would remove all of the locking in the rfw_scene crate, which could be worth it. Would be fun to give this a shot and benchmark it further down the line.
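The queued-mutation idea could look roughly like this; all names are hypothetical, and std::sync::mpsc stands in for whatever lock-free queue would actually be used:

```rust
use std::sync::mpsc::{channel, Receiver, Sender};

// Hypothetical command set; in practice there could be one queue per
// command type as suggested above.
pub enum Command {
    AddNode { name: String },
    RemoveNode { node_id: u32 },
}

#[derive(Default)]
pub struct Scene {
    pub node_names: Vec<String>,
}

pub struct CommandQueue {
    tx: Sender<Command>,
    rx: Receiver<Command>,
}

impl CommandQueue {
    pub fn new() -> Self {
        let (tx, rx) = channel();
        Self { tx, rx }
    }

    /// Any thread (e.g. an async asset loader) can hold a sender.
    pub fn sender(&self) -> Sender<Command> {
        self.tx.clone()
    }

    /// Called during synchronize, on the one thread that owns the Scene;
    /// no Mutex around the scene data is needed.
    pub fn apply(&self, scene: &mut Scene) {
        for command in self.rx.try_iter() {
            match command {
                Command::AddNode { name } => scene.node_names.push(name),
                Command::RemoveNode { node_id } => {
                    scene.node_names.remove(node_id as usize);
                }
            }
        }
    }
}
```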

I'll try to come up with a PR for the *Descriptor stuff on top of the multi-NodeGraph stuff you did by the end of this weekend. If it's no good no pressure we can toss it out - I just want to see how it'll turn out.

@meirbon
Owner

meirbon commented Aug 29, 2020

I agree with all your points. The only reason I had to wrap it all in Mutexes is that I wanted to make it possible to load assets asynchronously. I have been trying to come up with better solutions than what we currently have, as it is not very ergonomic; I agree that a queue system might indeed be a good option.

@tedsta
Contributor Author

tedsta commented Sep 23, 2020

Hey there, sorry it has been a while. I've been busy doing some DIY house stuff and moving into our new house. I do still plan to work on this. We finish moving this weekend, so hopefully I will have time next weekend.

The plan right now is to write an example of an animated character walking around a tri-mesh level using the new rapier physics engine. Maybe have a few convex hull meshes tumbling around too. Hopefully that will give me some ideas for how to polish the rfw API a bit more. I have started it but nothing to show yet.

@meirbon
Owner

meirbon commented Sep 24, 2020

No problem, I've had some personal things as well and I need to start finishing my thesis... So that's why development has stalled a bit from my side as well.
