Off-screen rendering with webview2 #547

Open

ajaymonga opened this issue Oct 20, 2020 · 109 comments

Labels: feature request, tracked (We are tracking this work internally.)

@ajaymonga

ajaymonga commented Oct 20, 2020

Hi,
Is it possible to get WebView2 to render into a shared memory region like CEF does?
https://bitbucket.org/chromiumembedded/cef/wiki/GeneralUsage#markdown-header-off-screen-rendering

In my application I use a two-process architecture where the main process does not have network access and a second process uses CEF to render web content into a shared memory region, from where the main app can read the pixels. I am wondering if I can achieve this using WebView2.
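
For illustration, here is a minimal Win32 sketch of that handoff; the mapping name, dimensions, and BGRA format are assumptions for illustration, not anything WebView2 provides today:

  // Renderer process: create a named shared-memory region and publish
  // each rendered frame into it for the main process to read.
  #include <windows.h>
  #include <cstring>

  constexpr int kWidth = 1280, kHeight = 720;
  constexpr DWORD kFrameBytes = kWidth * kHeight * 4;  // BGRA

  void* CreateSharedFrameBuffer(HANDLE* outMapping) {
      *outMapping = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                       PAGE_READWRITE, 0, kFrameBytes,
                                       L"Local\\MyAppFrameBuffer");  // illustrative name
      if (!*outMapping) return nullptr;
      return MapViewOfFile(*outMapping, FILE_MAP_WRITE, 0, 0, kFrameBytes);
  }

  // Called whenever the embedded browser produces a frame (e.g. from a
  // CEF OnPaint callback): copy the pixels into the shared region.
  void PublishFrame(void* view, const void* pixels) {
      std::memcpy(view, pixels, kFrameBytes);
      // A real implementation needs synchronization (an event or a
      // sequence counter) so the reader never observes a torn frame.
  }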

Thanks

AB#28491736

@champnic (Member)

Currently this is not possible. We have offscreen rendering on our backlog and are tracking it in #20, but I think this ask is clearer, so I'll also add this issue to our item. Thanks!

@Harvey3141

Hi, do you have any updates on this which you're able to share?

@champnic (Member)

Unfortunately, not yet; we have not begun work on this. It's a large amount of work, and while it is very high on our priority list, it gets bumped each quarter as higher-priority asks come in. I really want to do this work as it's currently one of our top asks, but the earliest it could happen is Q1 2022 at this point.

@avail

avail commented Apr 8, 2022

Any news on this?

@champnic (Member)

champnic commented Apr 8, 2022

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

@doxxx

doxxx commented Apr 8, 2022

We are currently using CefSharp to render into an offscreen buffer that is sent to an FPGA to be composited onto a live video feed. The render includes playback of MPEG4-encoded video. We were looking into using WebView because CefSharp's default Chromium build does not include the MPEG4 codec. But without offscreen rendering, that's a moot point. We've since produced our own custom build of Chromium to include the codec.

@rjx-ray

rjx-ray commented Apr 13, 2022

Our use case is to show web content in our immersive VR spaces. See https://www.igloovision.com/software/enterprise-package/igloo-web

We need access to the web rendered textures so that we can warp and blend multiple images to projector outputs to create a clear seamless view on the inside of a cylinder or room.

Currently we use CEF for the web input with a user maintained patch for getting the shared textures. See https://bitbucket.org/chromiumembedded/cef/pull-requests/285 and read down to the end of the comments for full details. We build CEF with proprietary codecs included. The builds of our app and CEF are done with C++.

This is unsatisfactory because it uses the deprecated custom compositing path (which could be removed in the future) instead of Skia, and it is difficult to keep updated.

We would look at moving to WebView2 if access to the rendered textures could be provided and supported, along with the ability to include proprietary codecs.
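
For context, the consuming side of such a shared-texture patch looks roughly like the sketch below: the browser process hands over a shared NT handle, and the app opens it on its own D3D11 device. The function name and handle delivery are assumptions; details vary between patch versions.

  #include <d3d11_1.h>
  #include <wrl/client.h>
  using Microsoft::WRL::ComPtr;

  // Open the cross-process texture the browser rendered into, so the app
  // can warp/blend it into its own projector outputs.
  ComPtr<ID3D11Texture2D> OpenBrowserFrame(ID3D11Device* device,
                                           HANDLE sharedHandle) {
      ComPtr<ID3D11Device1> device1;
      if (FAILED(device->QueryInterface(IID_PPV_ARGS(&device1))))
          return nullptr;
      ComPtr<ID3D11Texture2D> texture;
      if (FAILED(device1->OpenSharedResource1(sharedHandle,
                                              IID_PPV_ARGS(&texture))))
          return nullptr;
      return texture;  // sample this in the warp/blend pass
  }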

@champnic (Member)

Thanks for the info @rjx-ray! If you don't need high-frequency rendering you could consider using the CapturePreview function to get an image, but this isn't great for things like videos or other animations.
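
For reference, a minimal sketch of that path (error handling omitted): CapturePreview encodes the current view into a caller-supplied stream, which is fine for stills but far too slow for video-rate output.

  #include <wrl.h>
  #include "WebView2.h"
  using Microsoft::WRL::Callback;

  // One-shot capture: writes a PNG of the current view into pngStream.
  HRESULT CaptureToPng(ICoreWebView2* webview, IStream* pngStream) {
      return webview->CapturePreview(
          COREWEBVIEW2_CAPTURE_PREVIEW_IMAGE_FORMAT_PNG, pngStream,
          Callback<ICoreWebView2CapturePreviewCompletedHandler>(
              [](HRESULT errorCode) -> HRESULT {
                  // The stream now holds the encoded PNG; hand it off here.
                  return S_OK;
              }).Get());
  }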

@rjx-ray

rjx-ray commented Apr 13, 2022

Hi @champnic, thanks for the response but we do need high-frequency rendering, typically for YouTube and other web video display.

@avail

avail commented Apr 17, 2022

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

Game UI development, specifically within DirectX 9 (Ex) / 10 / 11 contexts.

Ability to render into an offscreen texture and display that as an overlay of the game.

@champnic champnic added the tracked We are tracking this work internally. label May 10, 2022
@jcain82

jcain82 commented May 11, 2022

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

Currently, we use CefSharp.OffScreen to generate PDFs from HTML content and to take screenshots of HTML in a server-side environment.

@zziger

zziger commented Jul 17, 2022

Hi, are there any updates on this feature?

@lencil

lencil commented Jul 21, 2022

We are starting on the design phase of this work. Would you mind sharing your use case so that we can consider it in our plans?

CAD editor software: I want to composite the web UI and OpenGL/D3D rendering together.

@alainza

alainza commented Jul 21, 2022

Any XR application wanting 2D UI inside the 3D environment: that means being able to render into a DirectX texture and to inject input into the off-screen view (pointer + keyboard).

@champnic (Member)

Thanks for the info! Unfortunately our design work is slow going due to high priority bugs, but we're still making progress.

@danielvandenberg95

Is #579 related?

@honzapatCZ

My use case would be Game Overlay.
And actually, dotnet/maui's (and by extension WPF/WinForms') BlazorWebView could benefit from this change too: it wouldn't have to create fake elements (it currently just tracks the WebView window precisely on top), but could use normal, real elements in the respective APIs without any weird quirks and bugs.

@honzapatCZ

I'd say an API like CefSharp's would be quite pleasant, and it would make it easy to port over existing CefSharp code.
https://github.com/cefsharp/CefSharp/blob/d8674fd076c021eddcc0cb579687ca3c51a63767/CefSharp.OffScreen/DefaultRenderHandler.cs
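
For anyone unfamiliar with that handler, the C++ CEF equivalent is sketched below: the browser pushes each composited frame to the handler as a raw BGRA buffer, which is the shape of API being requested here (the view size is illustrative).

  #include "include/cef_render_handler.h"

  class OsrRenderHandler : public CefRenderHandler {
   public:
    void GetViewRect(CefRefPtr<CefBrowser> browser, CefRect& rect) override {
      rect = CefRect(0, 0, 1280, 720);  // logical size of the offscreen view
    }
    void OnPaint(CefRefPtr<CefBrowser> browser, PaintElementType type,
                 const RectList& dirtyRects, const void* buffer,
                 int width, int height) override {
      // 'buffer' holds width*height*4 bytes of BGRA pixels; upload the
      // dirty rectangles to a texture or copy them to shared memory here.
    }
    IMPLEMENT_REFCOUNTING(OsrRenderHandler);
  };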

@Harvey3141

Hi, do you have any updates you're able to share?

@VentuzTammoHinrichs

VentuzTammoHinrichs commented Nov 4, 2022

Adding another use case: Presentations on screens that are not just Windows desktops.

From our experience with CEF we'd need the following (in descending priority order) to switch:

  • Rendering into either a raw CPU buffer or into a fully accelerated DXGI surface (higher level APIs optional but this is what most people use)
  • Full control over the presentation parameters, such as resolution, frame rate, display scaling, output color space (including WCG/HDR), etc. Fully manual VSync is appreciated; the client engine might render ahead and buffer, or batch render to disk, so it's not strictly in real time.
  • Proper graphics resource lifetime management. Client app must be able to exactly specify when/how it is able to receive frames and when it is finished processing them; no spurious callbacks when client isn't ready (anymore), and no mutexes that render either side unresponsive when held a bit too long or not acknowledged (that one probably came out a bit too detailed, but you might be able to feel the pain here).
  • Full manual input injection, such as keyboard, mouse, and multi touch inputs.
  • Audio should also be redirected. Ideally per browser instance but globally would be fine at first. Client must be able to specify the sample format and channel layout, and it should be a pull model in which the client requests a certain # of sample frames and WebView2 outputs that number of frames exactly, in order to avoid timing discontinuities between the browser audio and the actual audio path the client uses.
  • ... and you might recognize a pattern here. There are other subsystems that would be cool to have redirected to an API, for example audio/video input (for web based streaming/conference services) or geolocation. A feature to auto-block all outside interaction that's not intercepted (Midi, Bluetooth, notifications, printing (possible security hole there!), etc) would perhaps be helpful, too.

It might be tempting to keep these APIs as close to the corresponding Windows APIs as possible but it's not strictly necessary - this feature will mostly be used from deep within client code, and very possibly behind a bunch of abstraction layers, so I'd aim for minimal and clean first.

Hope this was not too much or too harsh, but currently a whole industry depends on the CEF pull request mentioned above, which has been in PR limbo for years and will soon stop working altogether; an alternative would be a very appreciated thing ;)
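
To make that wishlist concrete, here is a purely hypothetical sketch of the API shape being asked for; none of these interface or method names exist in WebView2 today:

  #include <windows.h>
  #include <d3d11.h>

  // Hypothetical interface -- wishlist only, not a real WebView2 API.
  struct IHypotheticalWebView2Offscreen {
      // Client controls resolution, scaling, and pacing (no compositor VSync).
      virtual HRESULT SetRenderTargetSize(UINT width, UINT height) = 0;
      virtual HRESULT SetTargetFrameRate(float framesPerSecond) = 0;
      // Pull model: render exactly one frame into a caller-owned DXGI
      // texture, only when the client asks for it.
      virtual HRESULT RenderFrame(ID3D11Texture2D* target) = 0;
      // Manual input injection for the offscreen view.
      virtual HRESULT SendPointerInput(int x, int y, UINT flags) = 0;
      virtual HRESULT SendKeyboardInput(UINT virtualKey, bool keyDown) = 0;
      // Pull audio: the client requests exactly 'frameCount' sample frames.
      virtual HRESULT ReadAudio(float* samples, UINT frameCount) = 0;
  };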

@DanielsCode

@champnic - Is there any news you can share with us?

@robotsinthesun

+1

@aidv

aidv commented Apr 3, 2024

For example, just pushing the texture data to the GPU takes 1.2s, while the equivalent pure C code takes 1.2ms (!!!!!!).

You can try implementing this logic with WebGPU Native, and expose some handle to javascript in frameworks like CEF, just keep everything in native. You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

Really?? That would be perfect.

Please tell me how this could be achieved, it’s critical to our application.

I’m willing to pay for help.

@aidv

aidv commented Apr 3, 2024

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

I have no idea how this would be done with WebView2. Any idea?

@softworkz

I'm looking for the fastest way to push large amounts of data to a WebGL/WebGPU context and render something in the WebGL/WebGPU scene.

The pipeline looks like this: CPU -> IPC -> WebView2 -> GPU

Which kind of IPC are you using? Have you tried ICoreWebView2SharedBuffer?
https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/win32/icorewebview2sharedbuffer
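
For reference, the basic flow looks like the sketch below (error handling abbreviated); 'env' and 'webview' stand in for the app's existing ICoreWebView2Environment12 and ICoreWebView2_17 objects.

  #include <cstring>
  #include <wrl/client.h>
  #include "WebView2.h"
  using Microsoft::WRL::ComPtr;

  // Copies 'size' bytes into a shared buffer and posts it to script; the
  // page receives it in the 'sharedbufferreceived' event as an ArrayBuffer.
  HRESULT SendBlock(ICoreWebView2Environment12* env,
                    ICoreWebView2_17* webview,
                    const void* data, UINT64 size) {
      ComPtr<ICoreWebView2SharedBuffer> buffer;
      HRESULT hr = env->CreateSharedBuffer(size, &buffer);
      if (FAILED(hr)) return hr;
      BYTE* mapped = nullptr;
      hr = buffer->get_Buffer(&mapped);
      if (FAILED(hr)) return hr;
      std::memcpy(mapped, data, static_cast<size_t>(size));
      return webview->PostSharedBufferToScript(
          buffer.Get(), COREWEBVIEW2_SHARED_BUFFER_ACCESS_READ_ONLY, nullptr);
  }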

@aidv

aidv commented Apr 3, 2024

I'm looking for the fastest way to push large amounts of data to a WebGL/WebGPU context and render something in the WebGL/WebGPU scene.
The pipeline looks like this: CPU -> IPC -> WebView2 -> GPU

Which kind of IPC are you using? Have you tried ICoreWebView2SharedBuffer? https://learn.microsoft.com/en-us/microsoft-edge/webview2/reference/win32/icorewebview2sharedbuffer

Yes I'm using ICoreWebView2SharedBuffer, but it's extremely slow. Edit: It's also synchronous, so it blocks the UI.

Sending 62MB of data from the backend to the webview2 takes about 4 seconds.

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

So ICoreWebView2SharedBuffer is very inefficient. Ideally I'd want to access the WebGL/WebGPU scene directly from the native backend, but render the scene in the frontend.

This seems to be quite a big challenge.

@softworkz

Yes I'm using ICoreWebView2SharedBuffer, but it's extremely slow. Edit: It's also synchronous, so it blocks the UI.

Oh, I didn't know it's so bad. Their file access isn't really fast either. Unfortunately they have removed support for Pepper plugins (NaCl). We still have an application that uses it for rendering videos onto surfaces via OpenGL ES (an Electron app using a trick to enable it), but it doesn't work in the Edge WebView.
A while ago, I had talked to Chromium devs about this gap (being unable to render content onto a surface inside the browser), but unfortunately they didn't have a good replacement to suggest.
We're using a layered approach now, i.e. a transparent webview on top. You could also have another one below...

@softworkz

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

Assuming your data is image data, how does this compare?

  let image = document.createElement('img');
  image.src = 'file:///c:/my62mbImage.png';
  await image.decode();
  let bitmap = await createImageBitmap(image);

It's non-blocking at least.

@aidv

aidv commented Apr 3, 2024

It's actually faster to save the data on disk, host a webserver, and use fetch() in the frontend to get the data, about 380ms.

Assuming your data is image data, how does this compare?

  let image = document.createElement('img');
  image.src = 'file:///c:/my62mbImage.png';
  await image.decode();
  let bitmap = await createImageBitmap(image);

It's non-blocking at least.

This gives pretty much the same performance as fetching from a web server. Very fast.

But the problem is that this would require some disk I/O, which isn't optimal.

You mentioned a transparent webview; I thought about that too: simply create a native OpenGL scene and overlap a transparent webview on top.

I guess this approach is valid too.

Any caveats to think of if I use this approach?

@softworkz

This gives pretty much the same performance as fetching from a web server. Very fast.

But the problem is that this would require some disk I/O, which isn't optimal.

Maybe using overlapped (memory-mapped) I/O, but when you change the data (which would be just a memcpy), I'm not sure whether the browser engine would read it again rather than assuming it has already cached it, and I don't know whether it is possible to rename a file which is opened in this mode.
Browser caching itself might also involve copying the data, yet I'm not sure whether this is done for file: URLs.

It's really frustrating when you look through all of those new browser APIs. They have tons of features in many directions, but there's hardly any good and fast way to get local data into the browser...
I've read an article about "transferable objects" in web APIs where it was shown that a 32MB blob could be transferred between a WebWorker and the main thread in 6.6ms instead of 360ms. On Android, it's possible to transfer such transferable objects between the (Chromium) WebView and native Java code (I did that last week for a MessagePort; those don't carry much data, so I can't say much about performance). Yet the Edge WebView2 doesn't have anything like that. It only has that ridiculously complex mechanism where you need to create COM interfaces for the native object you want to share with the webview. I'm not sure how that performs, but it might be another option in case you haven't tried it.

Any caveats to think of if I use this approach?

WebView transparency doesn't work in WinUI3 apps, and you cannot render WinUI3 content on top of a WebView2 (there's a way, though, with other caveats).
Also, CSS filters don't work on the background (because the page doesn't know about it). But everything else is fine. We're using this approach on many platforms.

@aidv

aidv commented Apr 3, 2024

This gives pretty much the same performance as fetching from a web server. Very fast.
But the problem is that this would require some disk I/O, which isn't optimal.

Maybe using overlapped (memory-mapped) I/O, but when you change the data (which would be just a memcpy), I'm not sure whether the browser engine would read it again rather than assuming it has already cached it, and I don't know whether it is possible to rename a file which is opened in this mode. Browser caching itself might also involve copying the data, yet I'm not sure whether this is done for file: URLs.

It's really frustrating when you look through all of those new browser APIs. They have tons of features in many directions, but there's hardly any good and fast way to get local data into the browser... I've read an article about "transferable objects" in web APIs where it was shown that a 32MB blob could be transferred between a WebWorker and the main thread in 6.6ms instead of 360ms. On Android, it's possible to transfer such transferable objects between the (Chromium) WebView and native Java code (I did that last week for a MessagePort; those don't carry much data, so I can't say much about performance). Yet the Edge WebView2 doesn't have anything like that. It only has that ridiculously complex mechanism where you need to create COM interfaces for the native object you want to share with the webview. I'm not sure how that performs, but it might be another option in case you haven't tried it.

Any caveats to think of if I use this approach?

WebView transparency doesn't work in WinUI3 apps, and you cannot render WinUI3 content on top of a WebView2 (there's a way, though, with other caveats). Also, CSS filters don't work on the background (because the page doesn't know about it). But everything else is fine. We're using this approach on many platforms.

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

This way I could have all the algorithmic stuff happen in the C backend, while the results would be rendered in the WebView2 frontend.

That would be an ultimate solution.

@softworkz

softworkz commented Apr 3, 2024

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

Frankly, I think that's just nonsense.

The statement was:

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

WebGPU is a browser API which wraps native platform functionality. How would it be possible to create that "natively"? There is no "native WebGPU context" you can create from the outside. And further, there's no WebGPU API which allows you to connect to an outside native context, in either direction.
An HTML element that can be part of the DOM but be operated from outside would be perfect...

@aidv

aidv commented Apr 3, 2024

Yeah, it just doesn't seem like there's an easy way to do it. I am however interested in what @cnSchwarzer proposed regarding having a shared WebGPU handle.

Frankly, I think that's just nonsense.

The statement was:

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

WebGPU is a browser API which wraps native platform functionality. How would it be possible to create that "natively"? There is no "native WebGPU context" you can create from the outside. And further, there's no WebGPU API which allows you to connect to an outside native context, in either direction. An HTML element that can be part of the DOM but be operated from outside would be perfect...

There’s Dawn, a native WebGPU implementation in C++.

I’m not sure how it would work, but I’m open to any ideas.

@softworkz

Well - that IS the WebGPU implementation used in Chromium...

Another option would be to fork Electron. That gives you access to absolutely everything (i.e. Chromium source-level modifications).

@softworkz

softworkz commented Apr 3, 2024

You can create a WebGPU context natively and just expose them in javascript's WebGPU objects.

There's another reason why this cannot work: there's a process boundary. There's no way to share a GPU context (of whatever kind) between different processes. Whatever you do would need to happen within the same process, i.e. the browser's rendering process.
Which brings me back to Pepper/NaCl plugins. These are loaded into the browser process and can render to OpenGL surfaces directly. Going the Electron way would allow you to keep this enabled until there's a better way.

@reitowo

reitowo commented Apr 3, 2024

I think it is definitely possible to create some modification around Dawn to make it satisfy your needs.
There are plenty of Dawn APIs that are not exposed to JS (not in the WebGPU standard, obviously). I was digging around to see how to import an external texture into the Dawn context, and found out it is absolutely possible. You'll need to figure out the Dawn representations in the V8 engine; then you can do whatever you want once you have them.
I cannot give any guarantee, but I think you will not be disappointed if you look into it.

@reitowo

reitowo commented Apr 3, 2024

I was talking about framework-level (like CEF) modification from the beginning. This is obviously not a job that can be done through the WebView2 API. Anyway, it is off topic and shouldn't continue in this issue.

@aidv

aidv commented Apr 3, 2024

I think it is definitely possible to create some modification around Dawn to make it satisfy your needs. There are plenty of Dawn APIs that are not exposed to JS (not in the WebGPU standard, obviously). I was digging around to see how to import an external texture into the Dawn context, and found out it is absolutely possible. You'll need to figure out the Dawn representations in the V8 engine; then you can do whatever you want once you have them. I cannot give any guarantee, but I think you will not be disappointed if you look into it.

Yeah, this veered off topic, so I'ma just shut up about it.

But before we put the lid on this, what would you recommend I look into in terms of Dawn and Electron?

@softworkz

softworkz commented Apr 3, 2024

I was talking about framework-level (like CEF) modification from the beginning

It would require modifications to the Chromium source, not only the framework, which is how I had understood your comment. Sorry for the misunderstanding.

Yeah, this veered off topic, so I'ma just shut up about it.

Maybe just create a new issue...
(e.g. about rendering content to a panel element in the DOM from the native side)

@mdrejhon

mdrejhon commented Apr 4, 2024

Missed this, so replying to @fredemmott:

Sorry, I wasn’t clear - I have no trouble with VRR 240 in chrome;

To be clear -- I require the ability to do custom, perfectly frame-paced, arbitrary frame rates below max Hz.

e.g. 57fps looks exactly like 57Hz, with a green-colored VALID and a very flat www.testufo.com/animation-time-graph. Or 123.5fps that looks like perfectly stutter-free 123.5Hz. Etc.

I need fully dynamic-framerate-capable (not hardcoded to max Hz) VRR framepacing that correctly refreshes at dynamic, asynchronous refresh cycles, not at Windows' default scheduled MaxHz refreshes per second during DWM+VRR operation. Chromium's code design somehow forces a refresh of the Chromium framebuffer at every DWM-scheduled refresh cycle, spoiling VRR's raison d'être.

Before W3C moved to WHATWG, I tried to post a suggestion for a VSYNC API that would solve this: w3c/html#375 ... For now, I have given up on browsers' complete inability to do proper true VRR.

So the current fallback plan is to use some offscreen CEF-style system, and simply seize control over frame presentation to do it the proper way. Hopefully there's no hardcoded tick-tock built into Chromium (Chrome automatically uses 60fps on displays it cannot sync to the refresh rate of, e.g. older Linux distributions), or it ruins my plans.

Part of the reason is skills silos: browser developers are VERY skilled at designing browsers, but don't understand VRR engineering. I believe browser developers don't quite fully understand VRR, so this "browser bug" (non-true VRR at sub-MaxHz frame rates) has existed for almost a decade.

So if I set a 57fps cap (by any means, like busywaits inside requestAnimationFrame() or a provided sanctioned technique...), I need the monitor sending photons to my eyeballs exactly 1/57sec apart (as VRR is designed to do), not rounded off to the next refresh cycle scheduled by Microsoft Windows DWM (even when windowed G-SYNC is enabled). In true VRR operation, the frame Present() or glXSwapBuffers() call immediately causes the display to refresh. That's how video games do it.

Why?

  1. Display quality can vary at different VRR framerates (= refresh rates).
    Engineering info: Overdrive artifacts (LCD ghosting/coronas) can improve or worsen at different frame rates, because overdrive algorithms are optimized for specific refresh rates rather than the whole VRR continuum.
  2. Over 500 content creators (with ~100 million total viewership) use tests invented by Blur Busters.
    Displays vary greatly in quality, and reviewers would love to test different TRUE VRR frame rates in TestUFO without Chrome & Windows force-refreshing MaxHz times per second.
    For my credentials, see www.blurbusters.com/inventions

Yes, I have tried the framerate cap setting in Chrome, and it doesn't bypass the forced DWM refreshing, which many other apps can.

sorry for being unclear: I have no trouble with 240fps with gsync in chrome; that is not translating to the same with WGC+webview2

Desired solution:

  • Ability to do arbitrary erratic-stutter-free frame rates below MaxHz with perfect framepacing (as software/hardware allows).

I am still deeply disappointed that browsers still don't do proper VRR (at sub-MaxHz frame rates).

This is why I am moving my plans to offscreen rendering, to work around this "browsers can't do VRR" bug.

Developer TIP (for Chromium engineers)

Debugging suggestion for Chromium software developers (if anyone reads this):

  1. Enable VRR in both your monitor menus AND your graphics drivers
  2. Run a browser CANVAS-2D animation in full screen mode (as if you're playing a browser-based game)
  3. Turn on your monitor's frame rate / refresh rate setting (most VRR monitors have an OSD that reports current refresh rate)
  4. Make sure that the monitor's detected VRR refresh rate matches what shows up in the web browser/diagnostics/your own debug framerate counter.
  5. Test arbitrary frame rates inside your display's supported VRR range.
  6. TEST PASS CONDITION: For a 240Hz monitor, if your animation is running at 87.5fps inside the monitor's published VRR range, your monitor should report 87.5 (true VRR) rather than 240 (automatic compositing by DWM or Chromium).
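
For comparison, the sketch below shows how native apps get the behavior described above (the standard DXGI technique, not a WebView2 API): a flip-model swapchain created with the tearing flag and presented with sync interval 0, so a VRR display refreshes the moment Present() is called instead of on DWM's schedule.

  #include <dxgi1_5.h>

  // Present one frame immediately; pace the render loop yourself (e.g.
  // wait until 1/57 s has elapsed) to hit exact sub-MaxHz rates like 57fps.
  void PresentAtArbitraryRate(IDXGISwapChain1* swapchain) {
      // The swapchain must have been created with
      // DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING (check support first via
      // IDXGIFactory5::CheckFeatureSupport).
      swapchain->Present(0, DXGI_PRESENT_ALLOW_TEARING);
  }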

@Hethsron

Hi,
Sorry to come back to the subject, but a lot has happened since this issue was opened.

These days, I think there are solutions that should allow you to provide WebView2 on Linux, and especially to add this mode.

So, is it possible to provide off-screen rendering using the Ozone layer? (cf. chromiumembedded/cef#3263)

The CEF team will be migrating to this architecture. So if, like them, your heart is in Chromium, you can either do what they do or do it better.

Regards

@reitowo

reitowo commented Apr 26, 2024

As you can see in chromium/src/ui/ozone/platform, there's no sign of Ozone support for Windows/macOS, so I don't think Chromium/CEF will migrate to it any time soon. In fact, it is also hard to know which platform the OSR host is on, so I think sharing a dmabuf is a better solution on Linux for now.
FYI, the viz-based OSR has been merged into CEF recently: https://bitbucket.org/chromiumembedded/cef/pull-requests/734
I think WebView2 can easily implement GPU OSR using FrameSinkVideoCapturer.

@Hethsron

Hethsron commented Apr 26, 2024

Hello @reitowo

I don't think it was merged, because the pull request was declined. The CEF strategy is described here: chromiumembedded/cef#3685
chromiumembedded/cef#3681

What is certain is that Microsoft has what it takes to offer OSR on both Linux and Windows.

@reitowo

reitowo commented Apr 26, 2024

@Hethsron

  1. It is; the decline message says: Manually merged in master revision 260dd0ca24 with minor style and documentation fixes.
  2. I don't think switching alloy/chrome affects off-screen rendering; they are separate modules to me.
  3. OSR is implementable on all platforms (Windows/macOS/Linux), as CEF already does.

@Hethsron

Hethsron commented Apr 26, 2024

@reitowo

I think WebView2 can easily implement GPU OSR using FrameSinkVideoCapturer.

So what's the target date? When will we have this mode?

@microdee

this hasn't been done in 4 years, so I wouldn't expect it in this decade either

@reitowo

reitowo commented Apr 26, 2024

I'm not the WebView2 dev, so I can't give any date. I was posting the CEF solution here just to provide an example, hoping someone at MS could do the same. I'd been working on that viz solution for 3 months, and it finally merged into CEF 3 days ago.

@Hethsron

this hasn't been done in 4 years, so I wouldn't expect it in this decade either

@microdee
Let's be optimistic.

One thing's for sure: if they need help, they can count on us. Right? @reitowo

@Hethsron

Hello @bradp0721

Don't you have any specific information to give us on the subject?

What are your current bottlenecks?
How can we help you?

Regards

@reitowo

reitowo commented Apr 26, 2024

Maybe it would be faster to apply for an MS job and implement it yourself.

@Hethsron

Maybe it would be faster to apply for an MS job and implement it yourself.

I'm fine. I'm just helping out. @reitowo, are you employed by CEF?

@reitowo

reitowo commented Apr 26, 2024

No, I'm not. I think we shouldn't continue chatting here.
