feat: GPU shared texture offscreen rendering #42001
base: main
Conversation
💖 Thanks for opening this pull request! 💖 We use semantic commit messages to streamline the release process. Before your pull request can be merged, you should update your pull request title to start with a semantic prefix. Examples of commit messages with semantic prefixes:
Things that will help get your PR across the finish line:
We get a lot of pull requests on this repo, so please be patient and we will get back to you as soon as we can.
Force-pushed from 3e3b554 to 433bfc6
Force-pushed from b6ba13e to 9b15446
Ready for review. Any suggestions? Here are some answers in advance:
Tested to be working on Windows, example repo 1.webm
Speaking as an interested outsider: how possible would it be to make it so that the texture could be used in the renderer process via WebGPU, with a workflow something like:

Main process:

```js
const appWin = new BrowserWindow({ width: 800, height: 800 })
appWin.loadFile('index.html')

const win = new BrowserWindow({ webPreferences: { offscreen: true, offscreenUseSharedTexture: true } })
win.webContents.on('paint', (event, dirty, image, texture) => {
  appWin.webContents.send('texture', texture)
})
win.loadURL('https://github.com')
```

Renderer process:

```js
ipcRenderer.on('texture', (_event, source) => {
  // Or similar function for chromium shared textures
  const externalTexture = gpuDevice.importExternalTexture({ source })
  // Use the texture in a webGPU pipeline and eventually
  // render the result to a canvas element
})
```

I'm biased, as this is something I've been dreaming of for a long time, but if it is relatively simple to achieve, and performant, I think it could make this feature a lot more useful and accessible to the majority of the userbase.
Definitely possible. Actually, I already did some research while walking through dawn's source. At least on Windows you can import a DXGI handle into the dawn world (via webgpu native), then try to convert it to a v8 WebGPU texture to use anywhere. There's no existing JS API for this; you would need to write one.
> // PendingRemote instances may be freely moved to another thread/sequence, ...

I think `ipcMain`/`ipcRenderer` doesn't support passing a `pending_remote` mojo handle, so the release callback is not able to be passed to another process. The only remaining reason would be supporting async on the `paint` event, to let you release the texture after some awaits.
@Hate-Usernames Good news: I managed to let users release the texture when they want, which means you can pass the `textureInfo` to whatever process you want, but releasers need to be maintained in the callback process and cannot cross ipc. I think it can already do what we want; maintaining that would be easy with a few more ipc calls, like posting the texture info to the other process, having it reply with a release message, then the main process calling the releaser. About WebGPU, I think we can make that possible in another PR. I think CEF would also be interested in such a capability. It would be awesome to directly use the texture in WebGPU pipelines.
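The release handshake described above can be sketched as follows. The registry is plain bookkeeping code; the Electron wiring is shown in comments because it needs a running app. `takeReleaser()` and `textureInfo` follow this PR's proposed API, while the channel names and the frame-id scheme are made up for illustration.

```javascript
// The process that receives `paint` keeps the releasers; only the plain
// `textureInfo` crosses the process boundary.
class ReleaserRegistry {
  constructor () {
    this.pending = new Map()
    this.nextId = 0
  }

  // Store a releaser; send the returned id along with the textureInfo.
  add (releaser) {
    const id = this.nextId++
    this.pending.set(id, releaser)
    return id
  }

  // Call when the consumer process replies that it is done with the frame.
  release (id) {
    const releaser = this.pending.get(id)
    if (!releaser) return false
    releaser.release()
    this.pending.delete(id)
    return true
  }
}

// Main-process wiring (sketch, channel names hypothetical):
//
//   const registry = new ReleaserRegistry()
//   osrWin.webContents.on('paint', (event, dirty, image, texture) => {
//     if (!texture) return
//     const id = registry.add(texture.takeReleaser())
//     appWin.webContents.send('osr-frame', { id, info: texture.textureInfo })
//   })
//   ipcMain.on('osr-frame-done', (_event, id) => registry.release(id))
```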
I think the CI machines are dead for this PR? 🙂 |
This is interesting! I'm surprised that the code changes are so small. The API as written is a bit clunky; I've left some notes.
Also, I feel like it ought to be possible to pass the texture handle across a mojo pipe to be released in another process, so if you're running into trouble with that I'm curious what the roadblock was!
Finally, OSR is quite a neglected feature of Electron, and a huge part of that is that the test coverage is quite poor. This PR adds a largish new API surface and will definitely need test coverage before it can be considered for merging.
# OffscreenSharedTextureReleaser Object

* `release` Function - Release the resources.
Why is this an object to hold a single function? Can this just be a function on its own?
I find it difficult to declare the return type of `takeReleaser` in the docs: `Function<Function>`? `Function<() => void>`? So I moved it to a new structure to have better typing. Maybe I'd better move `OffscreenSharedTexture` from structures to classes, as classes have a returns section?
* `takeReleaser` Function\<[OffscreenSharedTextureReleaser](offscreen-shared-texture-releaser.md)> - Takes ownership of the texture, preventing it from being automatically released when the callback ends. The releaser cannot be passed to another process; users need to maintain releasers in the current process, but it is safe to pass the `textureInfo` to another process.
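A minimal sketch of the ownership transfer this doc entry describes: take the releaser inside the `paint` callback, do async work, release afterwards. `takeReleaser()` and `textureInfo` follow this PR's proposed API; `processFrame` is a hypothetical consumer.

```javascript
// Taking the releaser prevents the texture from being released when the
// callback returns, so async work can finish first.
async function holdTextureUntilDone (texture, processFrame) {
  const releaser = texture.takeReleaser()
  try {
    // Only the plain `textureInfo` should cross process boundaries;
    // the releaser must stay (and be called) in this process.
    await processFrame(texture.textureInfo)
  } finally {
    releaser.release()
  }
}

// Usage inside the paint handler (sketch):
//
//   win.webContents.on('paint', (event, dirty, image, texture) => {
//     if (texture) holdTextureUntilDone(texture, importIntoMyRenderer)
//   })
```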
Would it perhaps be better to have the user always release the texture manually? It feels a little odd to have two different paths for async vs sync access, and while it's convenient to not have to call release for some use cases, I'm not sure the convenience is worth the inconsistency.
Yeah, I agree with that. It's just one line of code, and users will not get more textures (pool max size) if they don't release. Maybe we provide some documentation.
```js
const win = new BrowserWindow({ webPreferences: { offscreen: true, offscreenUseSharedTexture: true } })
win.webContents.on('paint', (event, dirty, image, texture) => {
  if (texture) {
    // importTextureHandle(dirty, texture.textureInfo)
  }
})
```
It would be great to have a little more info on how an app can actually use this. Perhaps in a separate guide doc?
It is currently very user dependent; personally I use a spout plugin here to share the texture with external processes. So I think it may be difficult to provide a pure Electron example?
But the good news is, it's possible to make WebGPU support importing such a texture (with future work). However, we probably need to find out how to pass a `PendingRemote` to arbitrary processes, as it needs to be imported at the destination process. (Or not: you can always just pass the `textureInfo` and let the original process know to call the release.)
```js
fs.writeFileSync('ex.png', image.toPNG())
win.webContents.on('paint', (event, dirty, image, texture) => {
  if (texture) {
    // Import the shared texture handle to your own rendering world.
  }
})
```
If you're changing the example to use shared textures by default, it should actually use them :)
Good suggestion! But as I described above, there's no valid use purely inside Electron, unless we manage to make it work with WebGPU.
Is there a simple native module we could use (or write) that would do something straightforward like read all the pixels out of the texture?
Of course. If you want it for testing, I could write a testing node native module to copy the texture to a staging texture and read out the pixel data.
However, this is Windows only; other systems may have similar approaches. Linux can map the buffer, and I'm unsure about getting pixels from an IOSurface on macOS. I probably only have time to write a Windows one.
I'd need to know more about where you want to put such a module, and whether it is only for testing or also meant as an example.
Testing would be the more important path, and I think it's OK to just test one platform at first.
For documentation, I think it would be better to leave this example to use the non-shared-texture path by default, perhaps with a comment about shared textures.
OK, I will take a look at how to add a test. Considering we need to compile it (or just use a precompiled binary?), it may be more complicated than other tests? It would be best if you could give some suggestions.
We have a couple of native modules in the repo already for testing this sort of thing; see `spec/fixtures/native-addon/echo`.
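For concreteness, here is a rough shape such a spec could take, assuming a hypothetical native addon (say `spec/fixtures/native-addon/osr-readback`, modelled on the existing echo addon) that copies the shared texture to a staging texture and returns the pixels as a Buffer. The helper below is real code the spec could assert with; the addon name and its `readPixels` function are placeholders.

```javascript
// Check whether a readback buffer is one solid RGBA colour, which is
// what a spec rendering a plain-coloured page would expect.
function isSolidColor (pixels, [r, g, b, a]) {
  for (let i = 0; i + 3 < pixels.length; i += 4) {
    if (pixels[i] !== r || pixels[i + 1] !== g ||
        pixels[i + 2] !== b || pixels[i + 3] !== a) {
      return false
    }
  }
  return true
}

// Spec sketch (Windows-only at first, per the discussion):
//
//   const readback = require('./fixtures/native-addon/osr-readback')
//   win.webContents.once('paint', (event, dirty, image, texture) => {
//     const pixels = readback.readPixels(texture.textureInfo)
//     expect(isSolidColor(pixels, [255, 0, 0, 255])).to.equal(true)
//   })
```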
Maintain this patch in https://crrev.com/c/5465148

Change-Id: Id90acd587f17acd228ae7cb5ef93005eb8388b80
Is this patch required for this feature? Why is this mutex there in the first place? Why can't this CL be merged upstream?
It is not mandatory, but it provides an extra performance gain. Also, as commented by the OBS team, texture mutexes have performance spikes when acquired, and may cause driver failures under extreme conditions or on old systems. And given the OSR use case, the texture will not be written to again, so a mutex is not required. It is a tiny patch and I think it can be maintained easily; in general I think it is worth patching to eliminate the performance spikes. CEF already carries this additional patch.
Sadly, it is the only patch that is not possible to merge upstream, as OSR is not the only use case in chromium; the texture was used as a GMB (GpuMemoryBuffer), so a mutex is needed in some other code paths.
* `event` Event
* `dirtyRect` [Rectangle](structures/rectangle.md)
* `image` [NativeImage](native-image.md) - The image data of the whole frame.
* `texture` [OffscreenSharedTexture](structures/offscreen-shared-texture.md) - The GPU shared texture of the frame. Will be `null` if `webPreferences.offscreenUseSharedTexture` is `false`.
Instead of extending the argument list, let's put stuff on the event. See https://github.com/electron/governance/blob/main/wg-api/best-practices.md#how-will-this-api-be-extended-in-the-future and https://github.com/electron/electron/pull/37085/files#diff-261075c0abf5bfd33be70313bcd3699da9b1222b6f27df0e8294fc63789f160bR1795-R1807 for an example. The C++ side of this is still a little rough but it beats long inscrutable argument lists.
SGTM! I will implement these soon.
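The event-based shape the review suggests could look roughly like this. The `event.texture` property name simply mirrors this PR's positional argument, and `release()` follows the manual-release discussion above; treat the exact shape as illustrative.

```javascript
// With the event-based API, extra data hangs off `event` rather than
// extending the positional argument list.
function onPaint (event, dirtyRect, image) {
  const texture = event.texture // null when offscreenUseSharedTexture is off
  if (!texture) {
    return 'software' // fall back to the NativeImage path
  }
  try {
    // Hand texture.textureInfo to the app's own rendering pipeline here.
    return 'gpu'
  } finally {
    texture.release()
  }
}

// Wiring: win.webContents.on('paint', onPaint)
```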
Hi! Thanks for the review.
According to the comments in chromium, it is possible to pass a `PendingRemote` to another thread/sequence. In this OSR scenario, I passed most of the shared texture info to v8 by conversion, but I took a glance at electron's ipc and I think the serialization/deserialization process is not able to pass a `PendingRemote`. So currently I wrapped the releaser so it can only be called in the process that received the `paint` event. In conclusion, it is possible to pass the `textureInfo` between processes, but not the releaser. Hope you can help me out.
Description of Change

I managed to add `ARGB + kGpuMemoryBuffer` support for `FrameSinkVideoCapturer` in recent upstream changes. Thus, we can finally use zero-copy (actually one copy: there's a CopyRequest of the frame texture) GPU shared texture OSR in chromium apps. This will be the fastest of the three OSR modes, and it supports enabling hardware acceleration.

Details:

* Added `webPreferences.offscreenUseSharedTexture` to enable this feature.
* Added `texture` parameter to the `paint` event of `webContents`.

Fix: #41972
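Minimal end-to-end usage of what this PR adds. The options object is plain code; the Electron calls are shown in comments so the sketch stays self-contained, and the handler body is a placeholder.

```javascript
// Build the BrowserWindow options that opt in to shared-texture OSR.
function sharedTextureOsrOptions () {
  return {
    webPreferences: {
      offscreen: true,
      offscreenUseSharedTexture: true // the new flag this PR adds
    }
  }
}

// In an Electron main process:
//
//   const { BrowserWindow } = require('electron')
//   const win = new BrowserWindow(sharedTextureOsrOptions())
//   win.webContents.on('paint', (event, dirty, image, texture) => {
//     if (texture) {
//       // Import texture.textureInfo into your rendering pipeline,
//       // then release it when done.
//     }
//   })
//   win.loadURL('https://github.com')
```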
Checklist

* `npm test` passes

Release Notes

Notes: feat: GPU shared texture offscreen rendering.