
Layerize Canvas #5679

Closed
dmarcos opened this issue Apr 14, 2015 · 13 comments

Comments

@dmarcos (Contributor) commented Apr 14, 2015

@ecoal95
To render canvas we currently use a readback strategy, where the canvas_paint_task and webgl_paint_task draw pixels into a buffer and the compositor requests them as needed. Both tasks should instead be able to write directly to a NativeSurface.
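The readback strategy described above can be sketched, very loosely, like this (the `PaintMsg` protocol, `spawn_paint_task`, and the RGBA buffer layout are invented for illustration; Servo's real canvas_paint_task and compositor types differ):

```rust
use std::sync::mpsc::{channel, Sender};
use std::thread;

// Illustrative stand-in for the compositor -> paint-task request protocol;
// Servo's real message types differ.
enum PaintMsg {
    // The compositor asks the paint task to send back its current pixels.
    SendPixels(Sender<Vec<u8>>),
}

// A toy "paint task" that owns an RGBA pixel buffer and serves readback
// requests by copying the whole buffer over a channel.
fn spawn_paint_task(width: usize, height: usize) -> Sender<PaintMsg> {
    let (tx, rx) = channel();
    thread::spawn(move || {
        // Buffer the task draws into (here just cleared to opaque red).
        let mut buffer = vec![0u8; width * height * 4];
        for px in buffer.chunks_mut(4) {
            px[0] = 255; // R
            px[3] = 255; // A
        }
        // Every readback copies all the pixels to the requester; this copy
        // is exactly what writing to a NativeSurface would avoid.
        while let Ok(PaintMsg::SendPixels(reply)) = rx.recv() {
            let _ = reply.send(buffer.clone());
        }
    });
    tx
}

fn main() {
    let paint = spawn_paint_task(4, 4);
    let (reply_tx, reply_rx) = channel();
    paint.send(PaintMsg::SendPixels(reply_tx)).unwrap();
    let pixels = reply_rx.recv().unwrap();
    println!("read back {} bytes", pixels.len());
}
```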

@emilio (Member) commented May 5, 2015

@pcwalton About this: could a shared GL texture between the compositor and the paint task be a good way to achieve this?

Both contexts would share resources. I think it's simpler than passing around and copying to an OS surface (binding the texture as the drawing framebuffer's color attachment would suffice), but I don't know whether it would affect sandboxing, though probably in the same way that sharing surfaces does.

@pcwalton (Contributor) commented May 5, 2015

Never ever use shared GL contexts. They cause the driver to serialize all commands, resulting in terrible performance. Native shared surfaces are the only viable option.

@jdm jdm added the A-gfx/rendering label May 6, 2015
@emilio (Member) commented May 6, 2015

Well, just to keep a record of this:

  • According to this Gecko wiki page, Gecko does use texture sharing.
  • They share the whole context pointer with the layer though (HTMLCanvasElement::GetCanvasLayer, WebGLContext::GetCanvasLayer, CopiableCanvasLayer::Initialize); they just lock the surfaces when appropriate.

So we have two options (I guess the first one is not feasible, but I want your opinion):

  • Storing the context on the heap in a Box and sending it to the compositor (safely; that's the part I think is unfeasible), then binding the GLContext, painting the texture, and unbinding.
  • Shared surfaces: I was confused about this because I started rendering to a GLXPixmap via glXCreateContext, but that's not what I should do, right? (Resizing that is practically impossible.)

If I'm right, I should bind a surface to a texture and later paint to that texture (just as I described previously, binding it as a color attachment to the framebuffer), just like rust-layers does, right?

That seems much more doable (and much more elegant) than what I initially thought :P

@dmarcos (Contributor, Author) commented May 6, 2015

This is mostly a shot in the dark that still doesn't work (it just compiles): dmarcos@2d08c77

I'm sending the native surface to the webgl_paint_task. This patch requires making the native surface of the LayerBuffer an Arc<Mutex<NativeSurface>>. What do you think?
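As a rough illustration of what sharing the LayerBuffer's surface behind an Arc<Mutex<...>> means, here is a minimal sketch; the `NativeSurface` struct below is a dummy stand-in for illustration, not rust-layers' real type (which wraps an OS-level handle):

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Dummy stand-in for rust-layers' NativeSurface.
struct NativeSurface {
    pixels: Vec<u8>,
}

fn main() {
    // The layer buffer and the paint task would both hold this handle.
    let surface = Arc::new(Mutex::new(NativeSurface { pixels: vec![0; 16] }));

    // "webgl_paint_task": writes into the shared surface under the lock.
    let writer = Arc::clone(&surface);
    let painter = thread::spawn(move || {
        let mut s = writer.lock().unwrap();
        for b in s.pixels.iter_mut() {
            *b = 255;
        }
    });
    painter.join().unwrap();

    // "compositor": reads the same surface afterwards, with no pixel copy
    // over a channel, at the cost of taking the lock on every access.
    let s = surface.lock().unwrap();
    assert!(s.pixels.iter().all(|&b| b == 255));
    println!("compositor sees {} painted bytes", s.pixels.len());
}
```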

@dmarcos dmarcos closed this May 6, 2015
@dmarcos dmarcos reopened this May 6, 2015
@emilio (Member) commented May 6, 2015

Well... you still send all the pixels from the WebGLPaintTask. It could be a good start though, but I think that locking every layer surface with a mutex and an atomic reference count won't be good for performance.

I was thinking about doing it the other way around (encapsulating the NativeSurface inside the drawing buffer) and sending it (well, an id) instead of sending the pixels vector. I think it will be simpler: the compositor code won't be so bloated, and the creation/resizing logic will stay in the same place; I mean, we shouldn't have to ask the compositor for a surface in order to properly create the context and its bindings.

@pcwalton (Contributor) commented May 6, 2015

I would give each canvas some sort of ID and have the layout data structures reference the canvas by ID. You will need to modify the layer field in StackingContext to allow layout to inform the paint task about this, I think. The paint task, I guess, should be able to use the ID to fetch the layer buffer from the WebGL task and then send it along to the compositor task when the compositor task sends a request for it. You'll also need to modify the compositor to have the concept of a non-tiled layer, since we don't want the compositor to try to split up WebGL layers into tiles.
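The id-based scheme described above could look roughly like this; `CanvasId`, `LayerBuffer`, and `WebGLTask` are hypothetical names for illustration, not Servo's actual types:

```rust
use std::collections::HashMap;

// Hypothetical id newtype; layout would reference a canvas by this id
// instead of holding pixel data.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
struct CanvasId(u32);

// Stand-in for the layer buffer the compositor ultimately receives.
struct LayerBuffer {
    pixels: Vec<u8>,
}

// Stand-in for the WebGL task: it owns the buffers, and the paint task
// fetches one by id when the compositor requests that layer.
struct WebGLTask {
    buffers: HashMap<CanvasId, LayerBuffer>,
}

impl WebGLTask {
    fn new() -> WebGLTask {
        WebGLTask { buffers: HashMap::new() }
    }

    fn register(&mut self, id: CanvasId, buffer: LayerBuffer) {
        self.buffers.insert(id, buffer);
    }

    // The paint task resolves the id it got from layout; None means no
    // canvas with that id has produced a buffer yet.
    fn fetch(&self, id: CanvasId) -> Option<&LayerBuffer> {
        self.buffers.get(&id)
    }
}

fn main() {
    let mut task = WebGLTask::new();
    task.register(CanvasId(1), LayerBuffer { pixels: vec![0; 16] });

    // Layout hands the paint task an id; the paint task resolves it.
    assert!(task.fetch(CanvasId(1)).is_some());
    assert!(task.fetch(CanvasId(2)).is_none());
    println!(
        "buffer for canvas 1: {} bytes",
        task.fetch(CanvasId(1)).unwrap().pixels.len()
    );
}
```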

@emilio (Member) commented May 10, 2015

@pcwalton I've got a few questions about this. Right now, to paint a canvas, a SpecificFragmentInfo::Canvas is generated by layout, which is "transformed" into an ImageDisplayItem to build the display list, getting the pixels via SendPixelContents.

I don't know how making the canvas create its own stacking context will affect rendering, or how that should change once we have it working. Could you elaborate a bit on this? (I probably missed something, since I don't know the infrastructure that well and only dug into how it works today.)

By the way, right now we should be able to attach a native surface to the WebGL rendering context with the changes I made a few days ago (this is the most relevant file), but I don't know how to test whether the native surface part works. (I can check that the texture attachment works via clear_color + read_pixels, for example, but how can I test that the NativeSurface binding is producing the desired results?)

Pixmap sharing should work fine, since we use the default X display, am I right?

Thanks in advance :)

@pcwalton (Contributor) commented May 11, 2015

Creating a stacking context for the canvas will technically break the spec, but Blink does the same thing, so it likely won't make a difference on the Web.

I'm not sure what you mean about testing to make sure that it works. You can use the native OS surface readback functions (for example, IOSurfaceLock on Mac) if you want to check to make sure that it works.

Pixmap sharing should work, and we use that for GPU rendering already.

@dmarcos (Contributor, Author) commented May 11, 2015

@pcwalton A few questions. I need a little more clarity to keep working on this:

  1. Canvas elements don't have a paint task by default; the rendering context (2d or webgl) is created when getContext is invoked by content. How do we deal with the case where there's no paint task to retrieve the LayerBuffer from?
  2. You mentioned that we should create an id for each canvas so we can retrieve the appropriate paint task. Since the paint task is not guaranteed to exist, I guess we need to store a pointer to the canvas node somewhere so we can call get_renderer and get the paint_task. Where should the mapping between ids and canvas nodes live: in layout or in gfx?
@pcwalton (Contributor) commented May 11, 2015

@dmarcos (1) is an annoying case. Perhaps we can have layout keep the mapping from IDs to paint tasks (providing a solution for (2)). When a new paint task is created (because the 2D or 3D context was created), we could dirty the node and kick off a reflow, which will cause layout to update the ID for that paint task.
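The lazy mapping suggested here (layout keeps the id → paint-task table, populated only once getContext has actually created a context) could be sketched like this; `CanvasRegistry`, `PaintTaskHandle`, and the method names are invented for illustration:

```rust
use std::collections::HashMap;

#[derive(Clone, Copy, PartialEq, Eq, Hash)]
struct CanvasId(u32);

// Stand-in for a handle to a 2D/WebGL paint task.
struct PaintTaskHandle;

// Layout-side registry: a canvas gets an entry only once getContext()
// has actually created a rendering context for it.
#[derive(Default)]
struct CanvasRegistry {
    tasks: HashMap<CanvasId, PaintTaskHandle>,
}

impl CanvasRegistry {
    // Called when script runs getContext(); in the scheme above this
    // would also dirty the node and kick off a reflow so layout picks
    // up the new id -> paint-task association.
    fn on_context_created(&mut self, id: CanvasId) {
        self.tasks.insert(id, PaintTaskHandle);
    }

    // Paint asks for the task; None means the canvas has no context
    // yet and should just be painted as a blank box.
    fn paint_task(&self, id: CanvasId) -> Option<&PaintTaskHandle> {
        self.tasks.get(&id)
    }
}

fn main() {
    let mut registry = CanvasRegistry::default();
    let id = CanvasId(1);

    // Before getContext(): no paint task exists for this canvas.
    assert!(registry.paint_task(id).is_none());

    // Script calls getContext(); the registry is updated lazily.
    registry.on_context_created(id);
    assert!(registry.paint_task(id).is_some());
    println!("canvas now has a paint task");
}
```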

@emilio (Member) commented Sep 29, 2015

Waking this up, since I think I'll have the time these weeks to work on it.

Now each canvas resides in its own layer, and we have access to the canvas renderer from paint (#6083).

I thought that cloning the NativeSurface from the WebGL task would be enough (servo/rust-layers#160, `fn send_native_surface(&self, _: Sender<NativeSurface>)`), but I'm not totally sure.

@mrobinson kindly offered his help some time ago, maybe he can help me a little bit :)

@mrobinson (Member) commented Mar 22, 2017

@emilio Is this bug still valid now that we have a WebRender-only Servo?

@emilio (Member) commented Mar 22, 2017

I don't think so. We still need to make 2d canvas fast, but that should be another bug, and totally unrelated to this one.

@emilio emilio closed this Mar 22, 2017
5 participants