Wayland Support #18

Open
dcommander opened this issue Nov 19, 2015 · 32 comments

@dcommander

My (admittedly limited) understanding of Wayland is that it is essentially frame-based and image-based from the get-go. All of the drawing and rendering takes place within the application, and the application then tells the Wayland compositor, "Here's a frame. Display it in this window." This is a very natural fit for VNC, since a VNC server could simply be a Wayland compositor that processes each frame from an application and converts it into a single RFB framebuffer update. It would greatly simplify the VNC server, since it would no longer be necessary for it to hook into low-level drawing primitives or to convert the inherently single-buffered, fine-grained drawing approach used by X11 into a more "frame-like", coarse-grained, image-based approach suitable for remote interaction.
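
To make the frame-based model concrete, here is a minimal sketch (in C, against libwayland-client) of that hand-off: the client renders into its own wl_shm buffer, attaches it to a surface, marks the damaged region, and commits, at which point the compositor owns the frame. Error handling is omitted, and a real client would also bind a shell protocol (e.g. xdg_shell) so the surface is actually mapped as a window.

/* Minimal frame-submission sketch; build: cc frame.c -o frame -lwayland-client */
#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>
#include <wayland-client.h>

static struct wl_compositor *compositor;
static struct wl_shm *shm;

static void on_global(void *data, struct wl_registry *reg, uint32_t name,
                      const char *iface, uint32_t version)
{
    if (!strcmp(iface, "wl_compositor"))
        compositor = wl_registry_bind(reg, name, &wl_compositor_interface, 1);
    else if (!strcmp(iface, "wl_shm"))
        shm = wl_registry_bind(reg, name, &wl_shm_interface, 1);
}
static void on_global_remove(void *data, struct wl_registry *reg, uint32_t name) {}
static const struct wl_registry_listener reg_listener = { on_global, on_global_remove };

int main(void)
{
    const int width = 640, height = 480, stride = width * 4, size = stride * height;

    struct wl_display *dpy = wl_display_connect(NULL);    /* like XOpenDisplay() */
    struct wl_registry *reg = wl_display_get_registry(dpy);
    wl_registry_add_listener(reg, &reg_listener, NULL);
    wl_display_roundtrip(dpy);                             /* bind wl_compositor + wl_shm */

    /* All drawing happens in the client: fill a shared-memory buffer. */
    int fd = memfd_create("frame", 0);
    ftruncate(fd, size);
    uint32_t *pixels = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    for (int i = 0; i < width * height; i++)
        pixels[i] = 0xff336699;

    struct wl_shm_pool *pool = wl_shm_create_pool(shm, fd, size);
    struct wl_buffer *buf = wl_shm_pool_create_buffer(pool, 0, width, height,
                                                      stride, WL_SHM_FORMAT_ARGB8888);

    /* "Here's a frame. Display it in this window." */
    struct wl_surface *surf = wl_compositor_create_surface(compositor);
    wl_surface_attach(surf, buf, 0, 0);
    wl_surface_damage(surf, 0, 0, width, height);
    wl_surface_commit(surf);
    wl_display_roundtrip(dpy);

    printf("frame submitted\n");
    wl_display_disconnect(dpy);
    return 0;
}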

At first glance, it also seems like this could provide a straightforward way of supporting:

  • Seamless windows. Since each Wayland application is generating frames, not primitives, each window essentially acts-- from the point of view of the compositor-- like a video stream, so it would be straightforward to have a server-side compositor that translates the windows into separate RFB streams and a client application that translates them back into windows.
  • Remote 3D. The details are still sketchy, but my understanding is that Wayland applications will be required to handle GPU access on their own, which may eliminate the need for VirtualGL (except for supporting legacy X11 applications.) One would assume that a typical Wayland OpenGL application would already be doing off-screen rendering (via EGL) and that calling eglSwapBuffers() would move the off-screen image into the compositor, but I'm just guessing at this point. It may still be necessary to have a VirtualGL-like solution that straightforwardly marshals EGL commands to the Wayland compositor that is attached to the GPU rather than the hypothetical TurboVNC Wayland compositor.
  • Frame-based video codecs like H.264. Since Wayland actually does have a concept of frames, it would be natural to design a Wayland VNC server such that particular windows (3D windows, for instance) are transmitted as H.264 streams, thus eliminating all of the problems inherent with using H.264 in the existing VNC architecture.
@dcommander

Refer also to VirtualGL/virtualgl#10

@dcommander

dcommander commented Dec 1, 2016

After experimenting with Weston under Fedora 25, it seems that my assertion regarding GPU access was incorrect. Wayland GL applications seem to work using a similar paradigm to GLX applications. They obtain a Wayland display handle by calling wl_display_connect() (the equivalent of XOpenDisplay()), and that display handle is subsequently passed into EGL functions such as eglCreateContext() and eglMakeCurrent(). Essentially, Wayland applications access the OpenGL renderer through the compositor, in much the same way that a GLX application would access the OpenGL renderer through the X server. Thus, when running Weston in headless or RDP mode, the renderer that the application obtains does not appear to be hardware-accelerated. I inserted the following code into the weston-gears and weston-simple-egl applications, after the call to eglMakeCurrent():

printf("OpenGL renderer: %s\n", glGetString(GL_RENDERER));
printf("OpenGL vendor: %s\n", glGetString(GL_VENDOR));

When running on the "root" display (WAYLAND_DISPLAY=wayland-0), the applications report the following:

OpenGL renderer: Gallium 0.4 on NVC1
OpenGL vendor: nouveau

(NOTE: I cannot use nVidia's proprietary drivers yet, because they don't have Wayland support.)

When launching Weston on the root display, which causes it to use the Wayland back end, the GL applications report the same OpenGL renderer and vendor. When launching Weston using the RDP or headless back end, the GL applications report the following:

OpenGL renderer: Gallium 0.4 on llvmpipe (LLVM 3.8, 128 bits)
OpenGL vendor: VMware, Inc.

Having "VMWare, Inc." as the vendor string is a bit suspicious and immediately brought to mind the fact that VMWare uses Gallium to forward OpenGL calls from the guest O/S to the host O/S. I thought at first that Weston might be doing likewise, but the abysmal performance of weston-gears with the RDP and headless back ends (8-9 fps) would seem to indicate that it is truly using software rendering (or is it forwarding the OpenGL to the client?! It just occurred to me that that might explain why the RDP compositor didn't work with Microsoft's RDP client-- it required the FreeRDP client.)

This suggests that one of the following will likely be necessary in order to achieve a multi-user remote display solution with server-side hardware-accelerated 3D rendering under Wayland/Weston:

  1. Some form of VirtualGL-like interposer for EGL, although it would probably be somewhat simpler than the current interposer. I would prefer to piggyback it on the existing code base, if possible, since it would share a lot of the same infrastructure, although it might make sense to put the EGL interposer in its own directory. It would almost certainly need to be launched with a different script (veglrun, perhaps.) This interposer would have to intercept the EGL calls and rewrite them such that the context is created on a DRM device (this would require VirtualGL/virtualgl#10, "Access the GPU without going through an X server," as a prerequisite) rather than on a Wayland display; a rough sketch of that interception appears after this list. Then it would have to maintain a set of Wayland rendering surfaces-- similar to how the current X11 transport maintains a set of XShm images-- read back the pixels into these surfaces in a round-robin fashion, and send them to the compositor. This would be, architecturally, very similar to how VirtualGL currently works in an X proxy environment.

  2. A Weston back end that somehow accomplishes the same as the above-- redirecting OpenGL rendering to an off-screen buffer on the GPU. I don't yet have a thorough enough understanding of the Wayland/Weston architecture to know whether that is possible, but it seems like it would minimally require some sort of VirtualGL-like split rendering at the compositor level rather than the application level. It seems that the compositor has similar limitations to an X server, in that it can't support OpenGL hardware acceleration unless it is attached to the GPU somehow. Using the DRM or fbdev back ends requires root access and doesn't seem to be designed for a multi-user approach such as what we're trying to achieve. Basically we want the compositor to render off-screen with hardware acceleration.
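
To make Option 1 a bit more concrete, below is a rough, untested sketch of just the interception step: an LD_PRELOAD library that overrides eglGetPlatformDisplay() and, whenever an application asks for a Wayland-backed EGLDisplay, hands back one created on a GBM/DRM render node instead. The render-node path is an assumption, a complete interposer would also have to cover eglGetDisplay() and eglGetPlatformDisplayEXT(), and the readback/wl_shm delivery half described above is not shown.

/* Rough sketch of the EGL interception in Option 1 (not a working interposer).
 * Build: cc -shared -fPIC veglfaker.c -o libveglfaker.so -lgbm -ldl */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <fcntl.h>
#include <stdio.h>
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <gbm.h>

EGLDisplay eglGetPlatformDisplay(EGLenum platform, void *native_display,
                                 const EGLAttrib *attrib_list)
{
    /* Look up the real entry point so that non-Wayland requests pass through. */
    EGLDisplay (*real)(EGLenum, void *, const EGLAttrib *) =
        (EGLDisplay (*)(EGLenum, void *, const EGLAttrib *))
            dlsym(RTLD_NEXT, "eglGetPlatformDisplay");

    if (platform != EGL_PLATFORM_WAYLAND_KHR)
        return real(platform, native_display, attrib_list);

    /* The application asked for a Wayland display; give it the GPU instead.
     * The render-node path is hard-coded here, and the fd must stay open for
     * the lifetime of the GBM device. */
    int fd = open("/dev/dri/renderD128", O_RDWR);
    struct gbm_device *gbm = gbm_create_device(fd);
    fprintf(stderr, "[vegl] redirecting Wayland EGL display to GBM\n");
    return real(EGL_PLATFORM_GBM_KHR, gbm, NULL);
}

Usage would be along the lines of LD_PRELOAD=./libveglfaker.so weston-simple-egl (libveglfaker, like veglrun, being a hypothetical name).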

As far as implementing a TurboVNC server on top of this, it seems like much of the existing VNC server code, minus the parts that touch X.org, could be refactored into a Weston back end. It wouldn't be "easy" per se, but it would at least be straightforward. There are a lot of unanswered questions, though, such as how to start window managers in that environment, etc. But it seems as though the basic paradigm of a Wayland server, in terms of how it handles multiple displays, etc., is similar enough to an X server that the existing X proxy architecture could be adapted to it.

@dcommander

The Wayland developers seem to think that Approach 2 is viable. The thread starts here:

https://lists.freedesktop.org/archives/wayland-devel/2016-December/032275.html

I need to do some legwork to figure out how difficult a project it will be.

@dcommander

After further discussions with the Wayland developers (https://lists.freedesktop.org/archives/wayland-devel/2017-February/033209.html) and poring over the code for the Weston backends (specifically the Wayland and RDP backends), it does indeed seem possible to implement Option 2 as described above, but it's not going to be an easy hack. Some of the challenges with this approach include:

  1. Modifying the Weston GL renderer to support headless operation-- currently it only supports rendering to a visible Wayland display.
  2. Developing a prototype backend (perhaps based on the existing RDP backend) that advertises the aforementioned headless GL renderer to Wayland applications, then performantly manages the movement of pixels between that headless renderer and a framebuffer stored in main memory.
  3. Figuring out the appropriate touch points between the proposed backend and the existing TurboVNC Server code (or any other remote display server code, for that matter.)

At the moment, I don't think it makes much sense to pursue Option 2, because of the following:

  1. Given my background, it would take much less time (=money) to implement Option 1 (the Wayland interposer approach), and part of that approach (VirtualGL/virtualgl#10, "Access the GPU without going through an X server") would have strategic benefits for the existing VirtualGL GLX interposer.
  2. Wayland/Weston is in a period of rapid development. I lack the expertise with the code and architecture that would be necessary to maintain a hardware-accelerated OpenGL backend as the code evolves, and I do not have the cycles necessary to come up to speed on it in a timely manner.
  3. The Weston developers indicated (https://lists.freedesktop.org/archives/wayland-devel/2016-December/032306.html) that the current nVidia drivers do not support their preferred method of buffer management, and if I understand their comments correctly, this would make Option 2 impossible with those drivers at the moment. This is consistent with the knowledge that, on Fedora 25, you must use the nouveau driver in order to use Wayland on nVidia hardware. However, the interposer approach should still be possible, since it wouldn't interact with Wayland except through wl_shm buffers.
  4. An interposer is much more flexible, since it will work with basically any Weston backend. This would allow other organizations to more rapidly develop their own custom remote display backends, without hardware acceleration, then add hardware acceleration after the fact. It would maintain the neutral position of VirtualGL as a technology for enabling OpenGL hardware acceleration across a wide variety of remote display solutions.
  5. With Option 2, there are likely to be challenges related to reading back the pixels from the GPU into main memory within the compositor without stalling/blocking Wayland applications. However, the performance problems associated with the interposer approach are well-understood.
  6. The Weston developers have expressed interest in implementing a hardware-accelerated headless remote backend for testing purposes, so I think it makes sense to work toward the interposer approach in the near term and revisit Option 2 once they have implemented that backend. It makes more sense for them to develop and maintain that backend than for me to do so.

There are definitely advantages to Option 2. It would eliminate the need for a vglrun-like wrapper script. It would potentially be much easier to integrate with NVENC. It would run compositing window managers automatically. It wouldn't experience some of the esoteric issues that are sometimes encountered when using LD_PRELOAD. However, the disadvantages outweigh the advantages at the moment. I feel that Option 2 is where this stuff will eventually end up, but we need a stopgap measure to support running Wayland 3D applications remotely with hardware acceleration on the nVidia proprietary drivers, and the only way to do that right now would be with an interposer.

@q2dg

q2dg commented Jul 26, 2017

Hello.
Have you already decided on this topic?
Thanks!

@dcommander

Not really. The Wayland EGL interposer has a project dependency within VirtualGL (VirtualGL/virtualgl#10, accessing the GPU through EGL rather than GLX and thus eliminating the need for a 3D X server), and I intend to implement that dependency this year. After that, implementing a Wayland EGL interposer for testing purposes would probably be a simple matter. However, I also saw that the Wayland developers just introduced a set of patches for adding experimental hardware-accelerated OpenGL support to the RDP back end, so Option 2 may bear fruit before the EGL interposer (Option 1) does.

From a product point of view, none of this will become a major concern until Wayland makes its way into one of the enterprise Linux distributions (RHEL or Ubuntu LTS) and until nVidia's EGL implementation fully supports it. So anything I did at the moment would be purely for testing purposes.

@cromefire

@dcommander Ubuntu artful (17.10) is now using Wayland, and it's just a matter of time until it gets into bionic (18.04), which is an LTS version, so that time is coming.

@dcommander

Yes, at the moment I'm still waiting to see how nVidia and Red Hat support it, because that's going to be the driver for high-end 3D applications.

@q2dg

q2dg commented May 5, 2018

Well, 18.04 has arrived already, and Xorg still remains the default.
The reasons for this decision are given here: https://insights.ubuntu.com/2018/01/26/bionic-beaver-18-04-lts-to-use-xorg-by-default
One of them is "Remote Desktop control for example RDP & VNC works well under Xorg." So it seems it's a chicken-and-egg problem. Somebody must take the initiative.

@cromefire

Well it at least seems that the GNOME/Wayland developers are working on this

@dcommander

RHEL 8 will perhaps be a better litmus test, since they have already switched to Wayland by default in Fedora. To the best of my understanding, a big limiting factor here is still nVidia driver support. At least the last time I checked, you couldn't use Wayland with the nVidia proprietary drivers. You had to use nouveau.

@q2dg

q2dg commented Feb 12, 2019

RHEL 8 Beta has arrived (three months ago): https://developers.redhat.com/blog/2018/11/15/red-hat-enterprise-linux-8-beta-is-here/

@dcommander

Yes, I know. I'm looking at it now.

@q2dg

q2dg commented May 12, 2020

Hello.
Any news on this?
Thanks!

@dcommander

Right now, I am focused on intermediate steps, such as developing an EGL back end for VirtualGL and adding systemd support to TurboVNC, that will pave the way for Wayland support in the long term. Ultimately this feature will become high-priority when commercial 3D application vendors start adopting Wayland and those applications stop working in TurboVNC, but that shows no signs of happening anytime soon. Scientific/technical computing ISVs and users tend to be very slow adopters, and they will probably continue to use X11 as long as they are able to use X11. I predict that this will become an issue as soon as GUI frameworks like Qt and GTK, and the window managers built upon those frameworks, stop supporting X11, but I have no idea when that will happen. Large enterprises are only just now switching from RHEL 6 to RHEL 7, and they will have to switch to RHEL 8 before they can even run Wayland applications.

From my point of view, Wayland is still a moving target. The underlying assumption was that I could base a TurboVNC Server implementation around a common Wayland compositor code base that would serve the same purpose that the X.org code base currently serves in the TurboVNC Server. However, that's not really how Wayland works. I would need to base a hypothetical Wayland TurboVNC Server implementation on Weston, and Weston is undergoing such rapid and bleeding-edge development right now that I can't even build that code base on RHEL 8 unless I check out version 6.0.1, which is nearly a year old. Thus, another factor that probably needs to converge here is Red Hat adopting Weston in some capacity, so an "enterprise stable" version of that code base could serve as the basis for a TurboVNC Server implementation.

The needs of large-scale enterprises have primarily driven the development of TurboVNC and VirtualGL over the years. One of my enterprise customers has identified Wayland support as a long-term goal, but it's still low-priority at the moment. They have a lot of other long-term goals, and because funding is limited, we're focusing on projects that have the most "bang for the buck" in the short term. Wayland is, like other technologies we're looking at for long-term deployment (WebAssembly, for instance), still in the "wait and see" phase. Some people are using it, but most of the people who have most of the money aren't (yet). I'm also fighting a constant battle to maintain TurboVNC's niche, given that other products have been aggressively going after my customer base for some years, oblivious to the fact that-- if TurboVNC goes away-- so does VirtualGL (which those other products rely upon.) So I have enough to deal with just trying to remain afloat-- both strategically and financially. I can't spend much time right now looking at speculative stuff.

@any1

any1 commented Jul 9, 2020

It would be nice to have a Wayland-native vncviewer. We have a server now for wlroots-based compositors: https://github.com/any1/wayvnc. I was going to see if I could add it myself, but all this Java stuff is a bit too much to handle.

Remmina works, but it's pretty slow compared to TurboVNC, which seems to be "the fastest VNC client in the west", even via XWayland. Running via XWayland tends to be glitchy, and the GUI doesn't integrate well with sway.

@eero-t

eero-t commented Dec 4, 2020

Weston now supports HW-accelerated headless mode with: --backend headless-backend.so --use-gl

Wayvnc doesn't work with Weston:

wl_registry@2: error 0: invalid version for global zxdg_output_manager_v1 (4): have 2, wanted 3
ERROR: Virtual Pointer protocol not supported by compositor.

(IMHO Weston is the nicest compositor for Wayland development because it can run in a window on another Wayland or X11 display, which makes debugging things much easier.)

@any1

any1 commented Dec 4, 2020

I fail to see how any of this is relevant to this "issue". Please, excuse me for feeding into off-topic discussion, but I'd like to set a few things straight...

> Weston now supports HW-accelerated headless mode with: --backend headless-backend.so --use-gl

Sway and other wlroots-based compositors have had this for a long time.

> Wayvnc doesn't work with Weston:

Wayvnc is a VNC server for wlroots-based compositors. It says so in the first sentence of the first paragraph of the README on the landing page of the GitHub project, and probably in other places as well. Weston is not wlroots-based.

> (IMHO Weston is the nicest compositor for Wayland development because it can run in a window on another Wayland or X11 display, which makes debugging things much easier.)

You can also do this with Sway.

@eero-t

eero-t commented Dec 4, 2020

> I fail to see how any of this is relevant to this "issue".

  • Earlier comments discussed Weston and its offscreen support, so it seemed relevant that those are now supported.
  • The lack of virtual pointer support (in the latest Weston release) is likely also relevant for TurboVNC.

> You can also do this with Sway

Great, thanks!

(A quick Google search didn't turn up any mention of Sway support for HW-accelerated offscreen rendering or for running in a window; they aren't mentioned in the Sway Fedora manual page, nor in its README.md or wiki: https://github.com/swaywm/sway/wiki)

I see now that I should have checked the docs for wlroots, which underlies it, instead: https://github.com/swaywm/wlroots/blob/master/docs/env_vars.md

Using the options from there, Sway & WayVNC (in Fedora 33) do indeed work fine in HW-accelerated headless mode!

@dcommander

The fact that there are multiple competing compositors just underscores my opinion that this is a rapidly-evolving field with insufficient technological convergence to be actionable from TurboVNC's point of view. VNC itself wasn't even invented until X Windows was 15 years old and had achieved broad acceptance and convergence. That tends to be the way of things-- you can't really build upon a piece of infrastructure that's a moving target.

It is nice to know, however, that the concept has been proven. Perhaps it would be fruitful to look at accelerating NeatVNC, upon which WayVNC is based, using the TurboVNC encoding methods.

@any1

any1 commented Dec 4, 2020

> The fact that there are multiple competing compositors just underscores my opinion that this is a rapidly-evolving field with insufficient technological convergence to be actionable from TurboVNC's point of view. VNC itself wasn't even invented until X Windows was 15 years old and had achieved broad acceptance and convergence. That tends to be the way of things-- you can't really build upon a piece of infrastructure that's a moving target.

Well, the interface for creating windows and submitting buffers to them is pretty stable, and Wayland support is implemented in Gtk, Qt, SDL, glfw and more. A regular window application that works on one Wayland compositor will work on the others too.

Some protocols that are relevant to VNC servers but have not been agreed upon between the different compositors are:

  • Screen capturing
  • Virtual mouse
  • Virtual keyboard
  • Clipboard management

Protocols that are missing:

  • Mouse cursor capturing
  • Multi-seat management

> It is nice to know, however, that the concept has been proven. Perhaps it would be fruitful to look at accelerating NeatVNC, upon which WayVNC is based, using the TurboVNC encoding methods.

As you've already mentioned (5 years ago), H.264 encoding would lend itself pretty well to Wayland. Hardware encoding would be ideal, because it means that we can leave the buffers on the GPU until they've been encoded. This is not yet implemented in NeatVNC. We'll need to implement it for some clients as well.

@dcommander

As an independent software developer, I have to go where the money is, and the industries that financially sponsor most of my work on this project are always at the tail end of the curve, not the front end. This will only become a high priority for them (and, by extension, me) when commercial applications (and the GUI frameworks on which they are built) stop supporting X11 altogether. As long as Qt, GTK, etc. have dual X11 and Wayland support, applications that use those frameworks should continue to work with TurboVNC, even if they use Wayland on the local display. Commercial ISVs cannot, at this point, choose to support only Wayland, because there are still enterprise/LTS Linux distributions in active support that don't have Wayland capabilities at all. That's why I feel that things will begin to move on this front once large enterprises start moving from RHEL 7 to RHEL 8, which won't happen for a few years.

Referring to #19, there are potential ways that VirtualGL might be able to hand off a GPU buffer to the TurboVNC Server for encoding, but I agree that GPU-based encoding does lend itself more to Wayland than to a virtual X server.

@eero-t

eero-t commented Dec 7, 2020

> Referring to #19, there are potential ways that VirtualGL might be able to hand off a GPU buffer to the TurboVNC Server for encoding, but I agree that GPU-based encoding does lend itself more to Wayland than to a virtual X server.

Intel added support for lossless render buffer compression (RBC) in GEN9 (Skylake, etc.), and support for making it compatible between the media and 3D pipelines in later generations. Nvidia HW has supported RBC for longer, but I'm not sure how well its drivers support Linux buffer modifiers [1], like Mesa does. And I don't know whether there are any issues with the Mesa AMD and ARM GPU support, or how well their HW supports RBC.

[1] In the latest X server release, the user needs to enable (buffer) modifier support with the X config ("dmabuf_capable") debug flag, whereas it's enabled by default in the X server Git version. [2]

When there's no modifier support, the GPU driver needs to resolve (uncompress) compressed render buffers when they're passed to another process (i.e., do an extra blit whose write is uncompressed). Especially for content with large areas of uniform color, as is the case with many desktop applications, GPUs' lossless compression could provide a significant memory bandwidth improvement.

At least in Weston's case, there's been no need for that extra resolve, since support for modifiers was initially added to the kernel and Mesa in late 2017 [2], but I guess it's the same with other Wayland compositors?

[2] X server modifier support got to a working state for Intel HW around May 2018, just when the last X server release was made, but I guess support was still buggy for other GPUs, and that's why it wasn't enabled by default. :-/

@dcommander

@eero-t I don't understand how that's relevant to the topic at hand. Can you explain?

@eero-t

eero-t commented Dec 7, 2020

I don't know enough about VirtualGL and its integration with TurboVNC to comment on that, but AFAIK application window buffers can be accessed just as optimally (bandwidth-wise) with the X server as with Wayland, for potential video/JPEG compression [1] done on the GPU.

However, as I commented, one needs to use either the X server Git version or enable the dmabuf_capable debug option, whereas with Wayland things already work in released versions.

What I don't know is whether the media drivers allow passing (3D-HW-compressed) buffers with modifiers to them yet.

[1] Besides normal video formats, the Intel media driver supports JPEG compression done on the GPU (from GEN9 onward); maybe other GPUs support that too.

(Hm. Maybe this would have been more relevant for #19.)

@dcommander

VirtualGL currently integrates with TurboVNC (and most other X proxies, for that matter) via the MIT-SHM X extension; a minimal sketch of that mechanism appears after the list below. Enabling GPU-based compression would require either:

  1. passing the compressed images to TurboVNC using some other X11 extension. (Perhaps that's what you're proposing, but what extension would enable that? Glamor? DRI3? Bear in mind that I understand very little about either, and my understanding of Wayland compositor plumbing is similarly lacking.)

    ...or...

  2. passing a GPU buffer handle to TurboVNC so that it can perform the GPU-based compression itself.
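
For reference, here is a minimal sketch (simplified, and not VirtualGL's actual code) of the MIT-SHM mechanism mentioned above: an XImage is backed by a System V shared-memory segment that both the client and the X server/proxy attach to, and XShmPutImage() is the call that delivers a frame without pushing the pixels through the X protocol socket.

/* Minimal MIT-SHM sketch (simplified; not VirtualGL's actual code).
 * Build: cc shmput.c -o shmput -lX11 -lXext */
#include <stdio.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <X11/Xlib.h>
#include <X11/extensions/XShm.h>

int main(void)
{
    const int width = 640, height = 480;
    Display *dpy = XOpenDisplay(NULL);
    int scr = DefaultScreen(dpy);

    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr), 0, 0, width, height,
                                     0, 0, BlackPixel(dpy, scr));
    XMapWindow(dpy, win);
    GC gc = XCreateGC(dpy, win, 0, NULL);

    /* Create an XImage backed by a System V shared-memory segment. */
    XShmSegmentInfo shminfo;
    XImage *img = XShmCreateImage(dpy, DefaultVisual(dpy, scr), DefaultDepth(dpy, scr),
                                  ZPixmap, NULL, &shminfo, width, height);
    shminfo.shmid = shmget(IPC_PRIVATE, img->bytes_per_line * img->height,
                           IPC_CREAT | 0600);
    shminfo.shmaddr = img->data = shmat(shminfo.shmid, NULL, 0);
    shminfo.readOnly = False;
    XShmAttach(dpy, &shminfo);          /* the X proxy attaches to the same segment */

    /* A renderer (e.g. VirtualGL's readback thread) would fill img->data here... */

    /* ...and this is how the frame reaches the X proxy without a socket copy. */
    XShmPutImage(dpy, win, gc, img, 0, 0, 0, 0, width, height, False);
    XSync(dpy, False);
    printf("frame delivered via MIT-SHM\n");

    XShmDetach(dpy, &shminfo);
    shmdt(shminfo.shmaddr);
    shmctl(shminfo.shmid, IPC_RMID, NULL);
    XCloseDisplay(dpy);
    return 0;
}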

In general, it seems like a less painful way to accomplish GPU-based compression would be to keep the TurboVNC framebuffer in GPU memory rather than CPU memory, but I don't have a sufficient understanding of how X.org interacts with physical hardware to be able to say whether that's even possible (in brief discussions with nVidia, they seemed to think that it wasn't possible with their driver, which is used in the vast majority of commercial VirtualGL+TurboVNC deployments.) If it were possible to get GPU acceleration in TurboVNC that way, then VirtualGL wouldn't even be necessary, and TurboVNC could choose whether to use the CPU or GPU for compression. Perhaps TurboVNC could be based on XWayland and use the Wayland EGL back end?

The issue of how to implement GPU-based compression or VirtualGL-less 3D acceleration in TurboVNC isn't specific to Wayland, but it relates to Wayland in the sense that I can't envision how to make it happen without invoking Wayland somehow. But again, I confess to not having a thorough understanding of this stuff.

@eero-t

eero-t commented Dec 7, 2020

I haven't used it myself, but I think DMA-BUF and the interfaces built on top of it are what need to be used for sharing that data.

(The Nvidia proprietary blob probably needs separate support.)
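
As a rough, untested sketch of what that sharing looks like at the lowest level: allocate a GPU buffer on a DRM render node with GBM and export it as a dma-buf file descriptor, which is the handle one process (e.g. the compositor or VirtualGL) could pass to another (e.g. an encoder) over a Unix socket. The render-node path and pixel format below are assumptions, and a real pipeline would also have to carry the stride and format-modifier metadata alongside the fd.

/* Sketch: allocate a GPU buffer with GBM and export it as a dma-buf fd.
 * Build: cc dmabuf.c -o dmabuf -lgbm */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>
#include <gbm.h>

int main(void)
{
    int drm_fd = open("/dev/dri/renderD128", O_RDWR);
    if (drm_fd < 0) { perror("open render node"); return 1; }

    struct gbm_device *gbm = gbm_create_device(drm_fd);
    struct gbm_bo *bo = gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                                      GBM_BO_USE_RENDERING | GBM_BO_USE_LINEAR);

    /* The exported fd is the cross-process handle: it can be sent to another
     * process over a Unix socket (SCM_RIGHTS) and imported there, e.g. as an
     * EGLImage or a VA-API surface, without copying pixels through the CPU. */
    int dmabuf_fd = gbm_bo_get_fd(bo);
    printf("dma-buf fd = %d, stride = %u bytes\n", dmabuf_fd, gbm_bo_get_stride(bo));

    close(dmabuf_fd);
    gbm_bo_destroy(bo);
    gbm_device_destroy(gbm);
    close(drm_fd);
    return 0;
}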

@dcommander

That is all Greek to me at the moment. I need to start with a more high-level understanding of how the various components interconnect.

@eero-t

eero-t commented Dec 8, 2020

@adamkrellenstein

adamkrellenstein commented Nov 30, 2023

I'd be interested in sponsoring a developer to implement this feature... I'm working on a software project that will depend on remote execution of graphical applications for display with Wayland (streaming the UIs of individual applications, rather than full desktops). No accelerated 3D rendering would be necessary, at least initially. Please shoot me an e-mail (address in GitHub bio) if you might be able and available to work on this.

@dcommander

Discussions in the KasmVNC (kasmtech/KasmVNC#193) and xrdp (neutrinolabs/xrdp#2637) communities suggest that this is way more complicated than I thought. In particular, the concept of a single Wayland VNC server that can accommodate multiple types of window managers (GNOME, KDE, wlroots, etc.) is likely a lost cause, since the window manager is joined at the hip with the compositor. A more appealing concept might be to design a Wayland compositor from the ground up that has remote display and remote window management (seamless windows) built in and uses an as-yet-to-be-defined legacy-free streaming protocol designed from the ground up to accommodate Wayland. However, that is obviously a huge lift. A more reasonable short-term approach might be to facilitate (via a VNC server library) incorporating TurboVNC's and TigerVNC's technology into various Wayland compositors.

@dcommander

Referring again to kasmtech/KasmVNC#193, there would be licensing issues with incorporating a GPL-licensed VNC server library into some compositors, as well as with incorporating TurboVNC's or TigerVNC's code into NeatVNC. Also, a seamless window mode seems less feasible the more I dig into it. A saner approach might be simply to extend LibVNCServer with the security and performance enhancements from TurboVNC, since the LibVNCServer and TurboVNC Server code bases are similar. That may give us a GNOME/Wayland VNC server for free, since GNOME's remote desktop feature uses LibVNCServer. However, in order to get the full complement of TurboVNC features, including session management, I anticipate the need to build a standalone Wayland TurboVNC server that uses the xdg-desktop-portal extension. tl;dr: Seamless windows don't seem feasible, but the idea of a Wayland TurboVNC Server does. It's going to be a very large effort, though.
