Layerize Canvas #5679
Comments
@pcwalton About this: could a shared GL texture between the compositor and the paint task be a good way to achieve this? Both contexts would share resources. I think it's simpler than passing around and copying to an OS surface (binding the texture as the drawing framebuffer's color attachment would suffice), but I don't know whether it would affect sandboxing, though probably in the same way that sharing surfaces does.
Never ever use shared GL contexts. They cause the driver to serialize all commands, resulting in terrible performance. Native shared surfaces are the only viable option.
Well, just to keep a record of this:
So we have two options (I guess the first one is not feasible, but I want your opinion):
If I'm right, I should bind a surface to a texture, and later paint to that texture (as I described previously, binding it as a color attachment to the framebuffer). That seems much more doable (and much more elegant) than what I initially thought :P
This is mostly a shot in the dark that still doesn't work (it just compiles): dmarcos@2d08c77. I'm sending the native surface to the
Well... you still send all the pixels from the WebGLPaintTask. It could be a good start, but I think that locking every layer surface with a mutex and an atomic reference count won't be good for performance. I was thinking about doing it the other way around (encapsulating the NativeSurface inside the drawing buffer) and sending it (well, an id) instead of sending the pixel vector. I think it will be simpler: the compositor code won't be so bloated, and the creation/resizing logic will remain in the same place. I mean, we shouldn't have to ask the compositor for a surface in order to properly create the context and its bindings.
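A minimal sketch of the id-passing idea discussed above (the names `SurfaceId` and `CanvasMsg` are hypothetical stand-ins, not Servo's actual types): the paint task keeps the surface data on its side and sends only a small id over the channel, instead of cloning the whole pixel vector per frame.

```rust
use std::collections::HashMap;
use std::sync::mpsc::channel;

// Hypothetical id newtype standing in for a native surface handle.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct SurfaceId(u32);

// Message the paint task would send to the compositor: just an id,
// not the full Vec<u8> of pixels.
enum CanvasMsg {
    SurfaceReady(SurfaceId),
}

fn main() {
    let (tx, rx) = channel();

    // Paint task side: register the surface locally, send only its id.
    let mut surfaces: HashMap<SurfaceId, Vec<u8>> = HashMap::new();
    let id = SurfaceId(1);
    surfaces.insert(id, vec![0u8; 4 * 256 * 256]); // 256x256 RGBA buffer

    tx.send(CanvasMsg::SurfaceReady(id)).unwrap();

    // Compositor side: receives a tiny id instead of a 256 KiB copy.
    match rx.recv().unwrap() {
        CanvasMsg::SurfaceReady(got) => assert_eq!(got, id),
    }
}
```

The point of the sketch is only the message shape: the channel carries a `Copy` id, so no pixel buffer crosses task boundaries and no per-layer mutex is needed on the hot path.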
I would give each canvas some sort of ID and have the layout data structures reference the canvas by ID. You will need to modify the
@pcwalton I've got a few questions about this. I don't know how making the canvas create its own stacking context will affect the rendering, or how that should change once we have it working. Could you elaborate a bit on this? (I probably missed something, since I don't know much about the infrastructure and only dug into how it works today.) By the way, right now we should be able to attach a native surface to the WebGL rendering context with the changes I made a few days ago (this is the most relevant file), but I don't know how to test whether the native surface part works (I can check that the texture attachment works). Pixmap sharing should work fine, since we use the default X display, am I right? Thanks in advance :)
Creating a stacking context for the canvas will break the spec, but Blink does the same thing. It likely won't make a difference on the Web. I'm not sure what you mean about testing to make sure that it works. You can use the native OS surface readback functions. Pixmap sharing should work, and we use that for GPU rendering already.
|
@pcwalton A few questions. I need a little bit more clarity to keep working on this:
@dmarcos (1) is an annoying case. Perhaps we can have layout keep the mapping from IDs to paint tasks (providing a solution for (2)). When a new paint task is created (because the 2D or 3D context was created), we could dirty the node and kick off a reflow, which will cause layout to update the ID for that paint task.
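The scheme above can be sketched roughly as follows (a toy model, not Servo's actual layout code; `CanvasId`, `PaintTaskHandle`, and the `needs_reflow` flag are invented names standing in for the real machinery): layout owns the id-to-paint-task map, and registering a new context both records the task and requests a reflow so layout picks up the new id.

```rust
use std::collections::HashMap;

// Hypothetical ids and handles; Servo's real types differ.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CanvasId(u32);

struct PaintTaskHandle {
    name: &'static str,
}

struct Layout {
    // The mapping suggested above: layout owns id -> paint task.
    canvas_tasks: HashMap<CanvasId, PaintTaskHandle>,
    next_id: u32,
    needs_reflow: bool,
}

impl Layout {
    fn new() -> Layout {
        Layout {
            canvas_tasks: HashMap::new(),
            next_id: 0,
            needs_reflow: false,
        }
    }

    // Called when a 2D or 3D context is created: register the new paint
    // task under a fresh id and request a reflow (standing in for
    // dirtying the node) so layout updates the id for that task.
    fn on_context_created(&mut self, task: PaintTaskHandle) -> CanvasId {
        let id = CanvasId(self.next_id);
        self.next_id += 1;
        self.canvas_tasks.insert(id, task);
        self.needs_reflow = true;
        id
    }
}

fn main() {
    let mut layout = Layout::new();
    let id = layout.on_context_created(PaintTaskHandle { name: "webgl" });
    assert!(layout.needs_reflow);
    assert_eq!(layout.canvas_tasks[&id].name, "webgl");
}
```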
Reviving this since I think I'll have time these weeks to work on it. Now each canvas resides in its own layer, and we have access to the canvas renderer from paint (#6083). I thought about cloning the code at servo/components/canvas/webgl_paint_task.rs (line 371 in a720886). @mrobinson kindly offered his help some time ago; maybe he can help me a little bit :)
@emilio Is this bug still valid now that we have a WebRender-only Servo? |
I don't think so. We still need to make 2D canvas fast, but that should be another bug, totally unrelated to this one.
@ecoal95
To render canvas we now use a readback strategy, where the canvas_paint_task and webgl_paint_task draw pixels into a buffer and the compositor requests them as needed. Both tasks should be able to write directly to a NativeSurface.
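To make the contrast concrete, here is a toy sketch of the two strategies (the `NativeSurface` struct here is just a shared pixel buffer, not Servo's real surface type): readback copies the paint task's buffer out per request, while the direct approach has the paint task write into memory the compositor can already see.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical stand-in for a NativeSurface: a shared pixel buffer.
struct NativeSurface {
    pixels: Mutex<Vec<u8>>,
}

// Readback strategy: the paint task draws into its own buffer and the
// compositor copies the pixels out on request (one full copy per frame).
fn readback(task_buffer: &[u8]) -> Vec<u8> {
    task_buffer.to_vec()
}

// Direct strategy: the paint task writes straight into the shared
// surface; the compositor reads the same memory, no extra copy.
fn draw_direct(surface: &Arc<NativeSurface>, data: &[u8]) {
    surface.pixels.lock().unwrap().copy_from_slice(data);
}

fn main() {
    let frame = vec![255u8; 16]; // tiny 2x2 RGBA "frame"

    // Readback: the compositor's copy is distinct memory.
    let copied = readback(&frame);
    assert_eq!(copied, frame);

    // Direct: the compositor sees the paint task's write in place.
    let surface = Arc::new(NativeSurface {
        pixels: Mutex::new(vec![0u8; 16]),
    });
    draw_direct(&surface, &frame);
    assert_eq!(*surface.pixels.lock().unwrap(), frame);
}
```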