
Depth/stencil EGL image #133

Open
bl4ckb0ne opened this issue Aug 18, 2021 · 27 comments
@bl4ckb0ne
Contributor

I have a use case where I need to create an image from a dmabuf using
eglCreateImageKHR, then import it as a GL_DEPTH_COMPONENT/GL_DEPTH_STENCIL
texture using glEGLImageTargetTexture2DOES. It seems that the second step
assumes the texture format is always GL_RGBA, which forces me to use a shader
to blit the depth of my framebuffer onto the second image.

Here's the simplified code of my process:

EGLint color_attribs[] = {
	// DMAbuf config
	EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_ARGB8888,
	EGL_NONE,
};

EGLImageKHR color_image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
		EGL_LINUX_DMA_BUF_EXT, NULL, color_attribs);

EGLint depth_attribs[] = {
	// DMAbuf config
	EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_R16,
	EGL_NONE,
};

EGLImageKHR depth_image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
		EGL_LINUX_DMA_BUF_EXT, NULL, depth_attribs);

// Import the color dmabuf as a texture
glBindTexture(GL_TEXTURE_2D, color_tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, color_image);

// Allocate a real depth texture to render into
glBindTexture(GL_TEXTURE_2D, depth);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, width, height, 0,
		GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);

// Import the depth dmabuf, which ends up as a color texture
glBindTexture(GL_TEXTURE_2D, depth_tex);
glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, depth_image);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
		GL_TEXTURE_2D, color_tex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
		GL_TEXTURE_2D, depth_tex, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
		GL_TEXTURE_2D, depth, 0);
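
The shader blit this forces boils down to a fullscreen pass whose fragment shader copies the sampled depth into the R16 color attachment. A sketch, with made-up names (u_depth, v_uv are illustrative, not from my code):

```c
/* Fragment shader source for the fullscreen depth->color blit pass.
 * u_depth samples the GL_DEPTH_COMPONENT16 texture; the output lands
 * in the R16 color attachment backing the depth dmabuf. */
static const char *depth_blit_frag =
	"#version 300 es\n"
	"precision highp float;\n"
	"uniform sampler2D u_depth;\n"
	"in vec2 v_uv;\n"
	"out float out_depth;\n"
	"void main() {\n"
	"	out_depth = texture(u_depth, v_uv).r;\n"
	"}\n";
```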

This feels a bit heavy. Would there be a way to use the EGLImage as a depth
attachment directly? Maybe add a fourcc format to express depth/stencil and have
the driver translate it into a GL format, or add a parameter to one of the
functions so the user can specify the desired GL format regardless of the dmabuf
format?

I'd be happy to help and contribute to a simpler solution.

@stonesthrow
Contributor

I don't think there is an extension that targets a depth or stencil attachment from an EGLImage.
Interesting idea.
Let me raise it with the OpenGL ES and EGL WGs to get some feedback.

@stonesthrow
Contributor

There doesn't seem to be a drm_fourcc.h enum for a DEPTH format, so that is another hurdle to jump.

@bl4ckb0ne
Contributor Author

From what I understand, it's because nobody needed it before. I don't know if it would be accepted; maybe, if we show a strong enough use case.

Otherwise we could use the fourcc bpp to pick the GL internal format. For example, DRM_FORMAT_RGB888 is 24 bpp and would map to GL_DEPTH_COMPONENT24.

@stonesthrow
Contributor

Do you have a use case for using a pre-created depth buffer? This would justify the need.

@bl4ckb0ne
Contributor Author

The use case is wxrc, an XR Wayland compositor. Clients render 3D geometry in their own given space, and send their color and depth buffers to the compositor to be rendered appropriately on the HMD.

Here's a screenshot of a prototype; the two ducks are each rendered into their own framebuffer.

[Screenshot: 2021-08-06_14:16:14]

The code of the prototype is available here. The logic is the same, just without EGLImage/dmabuf.

@stonesthrow
Contributor

Seems like something unique to Wayland. Question: why are the color and depth sent to the compositor? Usually the 3D image is rendered by the app, and that image is sent to the compositor. Why would a compositor apply depth to a frame? Usually a compositor uses depth for the z-order of all visible frames. What about compositors that are not 3D engines?

@bl4ckb0ne
Contributor Author

This isn't unique to Wayland; it can be useful in any situation where two processes want to share depth buffers. wxrc uses Wayland, but another VR compositor may use something else.
Depth needs to be shared only for VR clients, where 3D geometry is used and the client needs to be blended with the other VR clients in the VR space. We can also place regular 2D surfaces (like xdg-shell) into the VR space, but they don't send depth buffers, only color buffers. Only VR Wayland clients will send depth buffers.

@cubanismo
Contributor

Note we can't simply alias RGBA8888 with D24S8. They're very different formats at the HW level. If you want to share depth buffers, don't use dma-buf/EGLImage, use Vulkan + Vulkan/GL interop, where appropriate barriers are defined & required to ensure depth buffer data can be handed off between two users correctly.

More generally, I've been told several times that more design work is needed before dma-buf + DRM format modifiers can be used to share non-color buffers in a cross-vendor fashion (And I agree), so I wouldn't want EGL to wade into that before the upstream ecosystem has weighed in with a consensus solution at that level.

@bl4ckb0ne
Contributor Author

What makes depth buffers so different from color buffers? Sharing depth through a color buffer works, so why not simplify the process?
Could you elaborate on the design work and link the relevant discussions, please?

@ddevault

(I'm working on this with @bl4ckb0ne) To clarify: we're probably willing/able to do any necessary prerequisite work if a broader approach is required.

@emersion

@cubanismo, do you suggest using Vulkan because Vulkan has the external_memory + external_memory_fd extensions which allow processes to share resources as long as the driver is the same?

@cubanismo
Contributor

Yeah, it's going to have to be the same hardware, or at least the same type of HW, unless you're using a non-tiled depth buffer, which NV hardware hasn't supported in years; but we could blit it like the sample here does. The driver may matter less. However, I suggest it more because of the robust barriers defined for sharing resources across queues, processes, and devices in Vulkan, which were introduced into GL with the VK/GL interop extensions but aren't directly applicable to EGLImages without further extensions.

For OSS stuff, I don't know what other vendors need, but our existing modifiers don't contain enough state to define a depth layout from what I recall, because my understanding was they didn't have to yet. If they now do, that's fine, but we'll have to work through it or fall back on the less-flexible implicit kernel-side data path. I suspect we aren't the only vendor that does more special stuff with depth buffers than color buffers.

However, it would save a lot of work to just use the existing Vulkan stuff, with Vulkan serving as the allocator and GL as the actual renderer and/or consumer since that's all already defined & widely implemented.
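
That path can be sketched roughly as follows, assuming the driver exposes VK_KHR_external_memory_fd on the Vulkan side and GL_EXT_memory_object_fd on the GL side; `depth_memory`, `alloc_size`, `width` and `height` are placeholders for values from the Vulkan allocation:

```c
/* Vulkan side: the depth image was allocated with exportable memory;
 * export a POSIX fd for it (VK_KHR_external_memory_fd). */
VkMemoryGetFdInfoKHR get_fd = {
	.sType = VK_STRUCTURE_TYPE_MEMORY_GET_FD_INFO_KHR,
	.memory = depth_memory, /* VkDeviceMemory backing the depth image */
	.handleType = VK_EXTERNAL_MEMORY_HANDLE_TYPE_OPAQUE_FD_BIT,
};
int fd;
vkGetMemoryFdKHR(device, &get_fd, &fd);

/* GL side: import the fd and bind it to a real depth texture
 * (GL_EXT_memory_object + GL_EXT_memory_object_fd). */
GLuint mem_obj;
glCreateMemoryObjectsEXT(1, &mem_obj);
glImportMemoryFdEXT(mem_obj, alloc_size, GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);

GLuint depth_tex;
glGenTextures(1, &depth_tex);
glBindTexture(GL_TEXTURE_2D, depth_tex);
glTexStorageMem2DEXT(GL_TEXTURE_2D, 1, GL_DEPTH_COMPONENT24,
		width, height, mem_obj, 0 /* offset */);

/* The imported texture can be attached as a genuine depth attachment. */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
		GL_TEXTURE_2D, depth_tex, 0);
```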

@ddevault

I would quite like to avoid involving Vulkan in this, since it would directly translate to more work for the users of the API, doubly so when considering that this kind of resource sharing would be without precedent in Wayland. It seems odd that depth buffers would be treated as a special class among buffers, and I would prefer to find a way to mitigate that problem at its source rather than to look to Vulkan for a solution.

@stonesthrow
Contributor

Note, the GLES WG meets again Sept 1, so there is a bit of a wait before the WG has an open discussion. Please continue here.

@stonesthrow
Contributor

My apologies: because of a technical issue (no internet, no conference call), I was not able to introduce this issue to the GLES working group. We will schedule it for the next meeting. For now, I am asking for feedback that may provide a more optimized solution, and for the WG to consider a new extension to import EGLImages of depth type into a depth attachment. I hope there will be feedback here on pros, cons, and suggestions.

@ddevault

ddevault commented Sep 1, 2021

No worries. Thanks for the update!

@bl4ckb0ne
Contributor Author

No trouble. The next meeting will be on the 15th, right?

@stonesthrow
Contributor

Probably. Depends on workload. Could be next week; generally it's every 2 weeks.

@stonesthrow
Contributor

Barring details that inhibit it, I would expect you would need:

  1. EGL to recognize a dmabuf as depth type, based on format. It needs to be identified as depth/stencil, not a 2D color image, for eglCreateImage.
  2. A new extension with something like glEGLImageTargetTextureDepth/Stencil() instead of glEGLImageTargetTexture2DOES(), to import depth into a depth texture.
  3. glFramebufferTexture to attach the texture as depth.

Does that seem right? I'm sure I'm missing details, but I'd like feedback to be sure this is the way to go and that no detail is missing.
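
As client code, those three steps would look something like this. Note that both DRM_FORMAT_DEPTH16 and glEGLImageTargetTextureDepthOES are hypothetical names for the proposal, not existing API:

```c
/* 1. EGL recognizes a depth fourcc (DRM_FORMAT_DEPTH16 does not exist yet) */
EGLint attribs[] = {
	EGL_LINUX_DRM_FOURCC_EXT, DRM_FORMAT_DEPTH16, /* hypothetical */
	EGL_NONE,
};
EGLImageKHR image = eglCreateImageKHR(display, EGL_NO_CONTEXT,
		EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

/* 2. Proposed entry point importing the image as a depth texture */
glBindTexture(GL_TEXTURE_2D, depth_tex);
glEGLImageTargetTextureDepthOES(GL_TEXTURE_2D, image); /* hypothetical */

/* 3. Attach the imported texture as a depth attachment */
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
		GL_TEXTURE_2D, depth_tex, 0);
```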

@bl4ckb0ne
Contributor Author

bl4ckb0ne commented Sep 1, 2021

  1. Absolutely, that would require adding depth/stencil formats to fourcc. I am going to take care of that.
  2. Either what you said, or the ability to specify the format and internalFormat to glEGLImageTargetTexture2DOES the way glTexImage2D does, or extracting the data from the dmabuf under the hood. That might require a dependency on the depth texture extension.
  3. Yes, the texture should be attachable as GL_DEPTH_ATTACHMENT or GL_STENCIL_ATTACHMENT.

@stonesthrow
Contributor

This issue was discussed in the OpenGL ES WG conference call today.
Question: how is the depth buffer created in GLES, and how is it extracted?

  1. There are some concerns that a DEPTH_ATTACHMENT is a GPU resource, and simply importing from images may have issues. That may be per-driver, so this could be "tricky" for drivers to implement.
  2. It was thought that https://www.khronos.org/registry/OpenGL/extensions/EXT/EXT_EGL_image_storage.txt was meant to be more flexible than https://www.khronos.org/registry/OpenGL/extensions/OES/OES_EGL_image.txt and could handle more types. However, a quick read suggests this extension is also limited to textures.

A proposal could be made for an extension importing an EGLImage into a DEPTH_ATTACHMENT; however, the mechanisms to do that need to be investigated for technical barriers in driver/GPU design, and that information wasn't readily available for discussion.
Follow-up would probably need a GLES issue requesting an extension that builds on one of the two extensions mentioned above for this scenario.
As for EGL, https://www.khronos.org/registry/EGL/extensions/EXT/EGL_EXT_image_dma_buf_import.txt could be built on to recognize a "depth" format.
Not as definitive an answer as we were hoping for.

@ddevault

Well, it is a start. I think a reasonable next step would be to sketch up some of the necessary extensions and prove their design in principle by writing up an implementation for Mesa. We'll start with EGL and then move on to GLES, and return here with a summary of what changes are required for Mesa - giving some tangible code to look at for other driver implementors.

For what it's worth, if we can get this working in Mesa then we don't really care whether proprietary driver vendors can get it working in their own drivers. We have no interest in proprietary drivers whatsoever. If this means we'll end up namespacing the extensions into MESA rather than as a general-purpose extension, then so be it.

@stonesthrow
Contributor

If you can get it into the drivers you need as a MESA extension, good. If you need to reach the main desktop drivers, we'll need to get them to buy in and try for an EXT.

@ddevault

Mesa is the main desktop drivers 😉

@stonesthrow
Contributor

@bl4ckb0ne, Simon, we never got traction on this. Maybe the solution from James worked; you have probably moved on. One idea just popped into my head, though maybe you already resolved it by now.
What if, instead of a depth buffer passed from application to compositor, you pass the color buffer, a point (xyz) for the top-left/bottom-left corner of the window in 3D space, a vector that represents the x-axis, and a vector that represents the y-axis, in 3D space? From that the compositor can construct a 3D space for all your windows. This supposes that your windows are just two flat triangles. Just an alternative.
We can close this issue if you have found a solution.

@emersion

This issue is specifically about windows which aren't simple 2D surfaces. Your solution would be useful if one wanted to communicate the position and orientation of a 2D surface inside a 3D space, but that's not what this issue is about.

@stonesthrow
Contributor

I suspected so. OK, close this if you are done. I have not had any further discussion from the OpenGL ES WG since my last report.
