Implement basic rendering flow #1322
## Rendering
Closes #1271
Processing has a rendering lifecycle for each `PGraphics` instance:

- `beginDraw`: called at the beginning of a frame, or manually by the user, to initialize the drawing state for a given surface. For us, this doesn't do much at the moment.
- `flush`: this lifecycle hook was added for OpenGL in order to flush accumulated state to the GPU. We'll use it similarly to render the currently accumulated draw state. TBD whether the appropriate flushes are added everywhere we need them relative to what OpenGL does.
- `endDraw`: called at the end of a frame, or manually by the user, to write to the surface/render target.

## Bevy's rendering loop
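As a minimal, self-contained sketch of that lifecycle, the three hooks can be modeled as methods on a surface that owns a command buffer. All names here (`Surface`, `DrawCommand`, the `Rect` variant) are illustrative stand-ins, not this PR's actual types:

```rust
// Illustrative model of the Processing lifecycle hooks; `Surface` and
// `DrawCommand` are hypothetical stand-ins, not the PR's actual types.
#[derive(Debug, Clone, PartialEq)]
pub enum DrawCommand {
    Rect { x: f32, y: f32, w: f32, h: f32 },
}

#[derive(Default)]
pub struct Surface {
    /// Commands recorded since the last flush.
    pub pending: Vec<DrawCommand>,
    /// Commands that have been rendered (stands in for GPU-side state).
    pub rendered: Vec<DrawCommand>,
}

impl Surface {
    /// beginDraw: initialize per-frame drawing state (currently a no-op for us).
    pub fn begin_draw(&mut self) {}

    /// flush: render the currently accumulated draw state.
    pub fn flush(&mut self) {
        self.rendered.append(&mut self.pending);
    }

    /// endDraw: final flush, then a write to the surface/render target.
    pub fn end_draw(&mut self) {
        self.flush();
        // ...here the intermediate texture would be written to the render target...
    }
}
```

The key property this models: `flush` may run any number of times mid-frame and drains the buffer each time, while `endDraw` is just a final flush plus the presentation step.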
Bevy uses pipelined rendering. Specifically, there is an ECS "main" world and a "render" world that run on separate timelines. We've currently disabled ECS multi-threading to ensure comparability, which means that the render world schedule runs serially after the main world, but conceptually the two should be understood as independent. This isn't relevant for reviewing this PR, but it is an important implementation detail to know in general.
A `Camera` in Bevy represents a coarse-grained unit of rendering work tied to an output render target. In other words, `Camera` == `PGraphics` and `RenderTarget` == `PSurface` for our concerns. Bevy really wants to batch, both cameras and render state internal to cameras (i.e. opaque render items). On the contrary, Processing really wants to force immediate mode.

In Bevy, every camera renders to a `ViewTarget`, an intermediate texture that accumulates rendering state while walking the render graph. At the end of the render graph, that texture is blitted to the `RenderTarget`, which is presented at the end of the frame.

This is important for understanding how clear state and `ClearColorConfig` work. The `clear_color` field on `Camera` controls the clear for the internal rendering texture. Right now we are setting this to the background color, but it should basically always be load in order to preserve the sketch across flushes. The `output_mode` field contains a `CameraOutputMode`, which controls whether that internal texture is written to the render target at the end of processing the camera. For us, we only want to set `CameraOutputMode::Write` on `endDraw`, so we set `Skip` by default to preserve immediate-mode semantics.

## Bridging immediate mode and batching
This PR faithfully implements the Processing lifecycle, which requires a bit of juggling to ensure that we are only ever processing one `Camera` at a time. This will be important for preserving backwards compatibility with #1320, although we may be able to do additional optimization in the future.

Every time the user calls a Processing API that needs to update the draw state, we record a `DrawCommand` that is stored in a buffer on the surface (typically the `Window` entity; this PR does not yet support off-screen rendering).

At the start of every `App` update, we ensure that all cameras are disabled by setting `active = false`. This ensures that, regardless of the configured `output_mode`, the camera will not be processed for rendering. In Bevy terms, this guarantees the camera is not extracted to the render world.

When `flush` is called, we insert a marker component `Flushing` on the surface that we intend to flush. First, we clear any previous meshes associated with this surface. Then, we ensure the camera is set to active, so we process only that camera.

We then drain the command buffer for that camera and render into a new set of meshes. We are using the `lyon` library to handle tessellation; see that library's docs to understand how it works.

We currently implement some very simple batching logic meant to preserve the painter's algorithm used by Processing. Basically, we continue rendering into what I affectionately call a "mega mesh" that contains all the primitives (i.e. vertex data) for items that share the same material state. In this way, vertex order preserves the imperative draw order. We also apply a small z offset to ensure that when breaking up batches, we still draw in the correct order.
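A rough sketch of that batching policy, with all names hypothetical (the real code tessellates via `lyon` and writes Bevy meshes): consecutive commands that share material state append into one mega mesh so vertex order preserves draw order, and each new batch gets a slightly larger z offset:

```rust
// Hypothetical model of the "mega mesh" batching described above.
#[derive(Debug, Clone, Copy, PartialEq)]
pub struct MaterialKey {
    pub alpha_blend: bool, // stand-in for whatever material state we compare
}

#[derive(Debug)]
pub struct MegaMesh {
    pub material: MaterialKey,
    pub z_offset: f32,
    pub vertices: Vec<[f32; 2]>,
}

pub fn batch(commands: &[(MaterialKey, Vec<[f32; 2]>)]) -> Vec<MegaMesh> {
    // Small z step between batches so later batches draw on top (assumed value).
    const Z_STEP: f32 = 0.001;
    let mut meshes: Vec<MegaMesh> = Vec::new();
    for (material, verts) in commands {
        let same_material = meshes.last().map_or(false, |m| m.material == *material);
        if same_material {
            // Same material state: keep appending into the current mega mesh,
            // so vertex order preserves the imperative draw order.
            meshes.last_mut().unwrap().vertices.extend_from_slice(verts);
        } else {
            // Material change: start a new batch at a slightly higher z.
            let z_offset = meshes.len() as f32 * Z_STEP;
            meshes.push(MegaMesh { material: *material, z_offset, vertices: verts.clone() });
        }
    }
    meshes
}
```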
Right now, we don't really support much material configuration besides setting the alpha mode. This may change in the future when we enable users to write their own materials.
## Entity hierarchy
We're currently using the following entity hierarchy to help keep track of state per surface:
- `(Window, CommandBuffer)`: our "root" entity, and the entity id that the Java side stores.
- `Camera`: the camera configured for this surface, pointing at the surface as its render target. This is what "sees" everything we render for a given `PGraphics`, as controlled by the `RenderLayers` we insert on both camera and mesh.
- `TransientMesh`: these (potentially multiple) children are the meshes we render into for this `PGraphics`. Right now we clear them at the beginning of each draw cycle.

## Testing
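A toy, self-contained model of that per-surface state (the real code uses Bevy entities, components, and `RenderLayers`; every name below is illustrative):

```rust
// Toy model of the per-surface hierarchy: a root owning a camera flag,
// a render layer id, and the transient mesh children we re-render into.
pub struct TransientMesh {
    pub vertices: Vec<[f32; 2]>,
}

pub struct SurfaceRoot {
    /// Stands in for Bevy's `RenderLayers` shared by camera and meshes.
    pub render_layer: u8,
    /// Only true while this surface is the one being flushed.
    pub camera_active: bool,
    /// Child meshes rendered for this surface, cleared each draw cycle.
    pub transient_meshes: Vec<TransientMesh>,
}

impl SurfaceRoot {
    pub fn new(render_layer: u8) -> Self {
        Self { render_layer, camera_active: false, transient_meshes: Vec::new() }
    }

    /// Clear the transient children at the beginning of a draw cycle,
    /// mirroring how the PR discards the previous cycle's meshes.
    pub fn begin_cycle(&mut self) {
        self.transient_meshes.clear();
    }
}
```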
You can run the new `WebGPU.java` example to easily test.

## Minor changes
- Set the `imports_granularity` and `group_imports` rustfmt options to make the imports clean.