Finalize renderer design #19
Comments
Something not accounted for in the diagram (draft 1.5) is the overhead of IR generation. It is assumed that for each layer, the equivalent IR is generated by looping through each element and building a render list (a high-level description of how to process an object/light/uniform). Depending on how many elements there are per layer, this might be unnecessarily slow. Perhaps for each layer, there should be a work-stealing thread pool building these render lists in parallel? The number of threads in this pool would be determined on startup based on the hardware resources available.
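The per-layer parallel build could be sketched roughly as follows. This is a minimal illustration, not the proposed implementation: the `Layer`, `RenderCommand`, and `build_render_list` names are hypothetical, and it uses one scoped thread per layer rather than a true work-stealing pool (which a crate like rayon would provide).

```rust
use std::thread;

// Hypothetical high-level description of how to process one element.
#[derive(Debug, PartialEq)]
enum RenderCommand {
    DrawObject(u32),
    SetLight(u32),
}

// A layer holds raw scene elements (plain IDs here, for illustration).
struct Layer {
    objects: Vec<u32>,
    lights: Vec<u32>,
}

// Build the render list for one layer by looping over its elements.
fn build_render_list(layer: &Layer) -> Vec<RenderCommand> {
    let mut list = Vec::with_capacity(layer.objects.len() + layer.lights.len());
    list.extend(layer.objects.iter().map(|&id| RenderCommand::DrawObject(id)));
    list.extend(layer.lights.iter().map(|&id| RenderCommand::SetLight(id)));
    list
}

// Build all layers' render lists in parallel, one scoped thread per layer.
fn build_all(layers: &[Layer]) -> Vec<Vec<RenderCommand>> {
    thread::scope(|s| {
        let handles: Vec<_> = layers
            .iter()
            .map(|layer| s.spawn(move || build_render_list(layer)))
            .collect();
        handles.into_iter().map(|h| h.join().unwrap()).collect()
    })
}
```

A real pool would cap thread count at the hardware parallelism detected on startup and steal work between queues when layers are unevenly sized.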
That can be evaluated later on by just replacing
I've been researching a ton of stuff about GFX, replay systems, and the Rust language itself (I'm still learning, after all). I'm thinking about some more changes to the renderer design, possibly eliminating the need for the IR. The reason I proposed an API-agnostic IR in the renderer was to allow quick and easy transmission of frames over a network (similar to RDP or X11), so we could support remote tool slaving. However, I've come to the conclusion that this is impractical for the following reasons:
Instead, I propose to eliminate the IR and for GFX command buffers to be generated directly by the frontend (I see you smiling @kvark!). Tool slaving will be handled by the engine and not by the renderer directly, like so:
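A rough sketch of that split (the original diagram is not reproduced here); all types below are hypothetical stand-ins, not the gfx-rs API:

```rust
// Hypothetical stand-in for a gfx command buffer; real gfx types differ.
#[derive(Default)]
struct CommandBuffer {
    commands: Vec<String>,
}

impl CommandBuffer {
    fn draw(&mut self, mesh: &str) {
        self.commands.push(format!("draw {mesh}"));
    }
}

// The frontend walks the scene and encodes commands directly,
// with no API-agnostic IR in between.
struct Frontend;

impl Frontend {
    fn encode(&self, scene: &[&str]) -> CommandBuffer {
        let mut buf = CommandBuffer::default();
        for mesh in scene {
            buf.draw(mesh);
        }
        buf
    }
}

// Tool slaving lives a level up: the engine replicates engine state
// (inputs, asset updates) to slaves, which run their own frontends.
#[allow(dead_code)]
enum SlaveMessage {
    Input(String),
    AssetUpdate { path: String, bytes: Vec<u8> },
}
```

The design point is that only `SlaveMessage`-style engine state crosses the network; each machine encodes its own command buffers locally.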
Demo recording/replaying (similar to Quake demos or …). This method isn't perfect, and some jittering and stuttering is to be expected, especially if there's no lag compensation, etc. But it's better than what we've got! 😄
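To make the jitter concrete, here is a minimal replay sketch under assumed types (`Frame`, `Demo`, and `frame_at` are all hypothetical): playback just picks the latest recorded frame at or before the current time, so without interpolation or lag compensation the output stutters wherever frames are sparse.

```rust
// Hypothetical recorded frame: a timestamp plus an opaque state snapshot.
#[derive(Clone, Debug, PartialEq)]
struct Frame {
    time_ms: u64,
    state: String,
}

// A recorded session, with frames sorted by timestamp.
struct Demo {
    frames: Vec<Frame>,
}

impl Demo {
    // Replay by taking the latest frame at or before `now_ms`.
    // Holding the same frame across several render ticks is exactly
    // the jittering/stuttering described above.
    fn frame_at(&self, now_ms: u64) -> Option<&Frame> {
        self.frames.iter().take_while(|f| f.time_ms <= now_ms).last()
    }
}
```

Interpolating between the two neighboring frames instead of snapping to the previous one would be the obvious next refinement.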
@ebkalderon this looks nice, but what's the use case of this master-slave system?
@White-Oak It allows people to preview and playtest their games directly on their target platforms (ideally mobile devices or consoles) without needing to deploy them by hand. You can modify your scripts or step through them in a debugger on your development machine and watch the output on your external devices. Any updates to your game's resource files will also propagate over the network to all the slave devices with little user interaction. At any point, you can drop the master-slave connection and hand over the slave devices, running the current version of the game, to the playtesters. Two AAA engines that I know for sure have this functionality (there may be more):
Recording and replaying entire game sessions from disk has numerous applications as well:
One of the project's goals is having a solid toolset and fostering rapid iteration times. Having such functionality available to the public in a freely available game engine would be kick-ass!
I would like to say something about the current renderer design. On February 9, 2016 4:48 PM, I reasoned that the backend and frontend should both be exposed to make implementing networked tool slaving easier. However, with my recent comment a few days ago about tool slaving being an engine-wide issue, I realize that my original proposal is no longer necessary. It's now possible for the frontend and backend to assume their correct levels of abstraction. These changes should be landing soon on the renderer branch.
The renderer rewrite is complete, and this issue has gone quite stale, so I'm closing it.
Though the main priority right now is to stabilize the entity-component-system API (issue #10), we can also finish designing the parallel renderer, along with the internal restructuring of the engine into modular crates (issue #13).
As described in the relevant design document on the wiki, our aims are high throughput, data-driven design, optimization for next-gen APIs, demo recording and real-time playback, and network transparency (i.e. for tool slaving).
Please take a look at the drafted renderer design for reference. Feedback is welcome!
Progress will be tracked in the renderer branch.