Bind framebuffer color attachment in render pipeline #1828
Conversation
I am generally fine with this, although with slight misgivings, but the change is small and not too hard to revert in the future.
- I am not quite sure this is a good move long term for WebGPU / multi-target framebuffers etc., where the choice of `colorAttachment[0]` may not always be correct.
- It seems not too onerous to ask applications to select the texture themselves: `binding: framebuffer.colorAttachment[0]`
- Also, in 9.0 I have generally been avoiding the temptation to sneak too many overloads into the low-level API, to keep it easy to reason about and maintain, especially as we now have two implementations (so this would also need to be added to the WebGPU path).
I think there is value, at least in the short term, in providing a back-compatible API. I've added a warning prompting the user to pass the color attachment directly. What do you think about adding a new binding type?
Well sure, I won't veto it if you feel it would be really helpful, but I admittedly remain a bit hesitant.

While a Framebuffer is an API object in WebGL, which the app can create and manipulate, Framebuffer isn't even a thing in WebGPU. In WebGPU everything is specified and frozen during RenderPass creation. After some thought I decided to still expose a "mock" luma.gl core Framebuffer API, but mainly intended it as a "parameter package helper" for render pass creation, since new render passes need to be created each frame. So I haven't really been thinking about doubling down on Framebuffers as first-class citizens of the API.

Adding to previous complications, a framebuffer can not only contain a bunch of bindings (up to 16 color render targets, a depth and a stencil buffer), but in addition, each of these textures can have multiple depth and mip levels etc. that one may need to specify during draw operations (which in WebGPU is handled by creating Texture Views). So the proposed framebuffer support would only handle one (admittedly common) case. With all these things going on, building Framebuffer support into the API for one default case, just to avoid having apps select the attachment themselves, gives me pause.

And what will a new luma.gl developer make of this? Will they understand that binding a framebuffer is just a convenience for `framebuffer.colorAttachment[0]`, and that depth and stencil buffers and other render targets are not being used? Or will this add more confusion about what Framebuffers and Textures actually are?

Updating a couple of extensions in deck.gl is obviously not that hard, so I think we should decide on the basis of what is right for the API long term.
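To make the "frozen at RenderPass creation" point concrete: in WebGPU the closest thing to a framebuffer is just a descriptor consumed when the render pass is created. A minimal sketch, using a mock `createView` helper standing in for WebGPU's real `GPUTexture.createView()` (the labels and structure here are illustrative, not luma.gl code):

```typescript
// Mock stand-in for GPUTexture.createView(): in WebGPU, a texture view
// pins down which mip level (and array layer) is used as an attachment.
function createView(textureLabel: string, baseMipLevel: number) {
  return {texture: textureLabel, baseMipLevel};
}

// Attachments are fully specified when the render pass is created;
// there is no mutable framebuffer object to bind afterwards.
const renderPassDescriptor = {
  colorAttachments: [
    {
      view: createView('scene-color', 0),
      loadOp: 'clear',
      storeOp: 'store'
    }
  ]
};
```

A new descriptor like this would be assembled for every frame's render pass, which is why a luma.gl `Framebuffer` can serve as a reusable "parameter package" for it.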
I am going to merge this. I do agree backwards compatibility is valuable, and as long as it doesn't add much complexity I am open to it, including your proposal above.
Background

The `WebGLRenderPipeline` has no support for framebuffers. This PR adds minimal support for handling the basic use case of binding the first color attachment of a framebuffer to a texture slot (in effect treating it as if it were a texture). This change fixes the `CollisionFilterExtension` in deck.gl.

Change List

- Support binding a `WEBGLFramebuffer` in `WebGLRenderPipeline`
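The behavior described above can be sketched as a small normalization step at binding time. The types below are hypothetical stand-ins for the real luma.gl classes, not the actual implementation:

```typescript
// Hypothetical minimal stand-ins for the luma.gl types involved.
type MockTexture = {id: string};
type MockFramebuffer = {colorAttachments: MockTexture[]};
type MockBinding = MockTexture | MockFramebuffer;

// If a framebuffer is bound where a texture is expected, substitute its
// first color attachment (the convenience this PR adds). Note the
// limitation discussed in the conversation: depth/stencil buffers and
// any additional render targets are ignored.
function resolveTextureBinding(binding: MockBinding): MockTexture {
  return 'colorAttachments' in binding ? binding.colorAttachments[0] : binding;
}

const colorTexture: MockTexture = {id: 'offscreen-color-0'};
const framebuffer: MockFramebuffer = {colorAttachments: [colorTexture]};

// Both forms resolve to the same underlying texture.
const fromFramebuffer = resolveTextureBinding(framebuffer);
const fromTexture = resolveTextureBinding(colorTexture);
```

Passing `framebuffer.colorAttachments[0]` explicitly remains equivalent, which is why the warning added in this PR can steer users toward that form.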