Share IPC namespace of X server and container to allow MIT-SHM #257

Closed
mviereck opened this issue Jun 13, 2020 · 8 comments
mviereck commented Jun 13, 2020

Currently x11docker disables the X extension MIT-SHM, which would allow shared memory between X and its clients.
This can cause some performance loss.

Antoine suggested some possible solutions: https://xpra.org/trac/ticket/2647 (Now Xpra-org/xpra#2647)

With the help of nsenter it might be possible to run e.g. Xvfb in the same IPC namespace as the container. In that case x11docker could allow MIT-SHM and might get a slight performance boost, especially with option --xpra.
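
The nsenter idea could look roughly like this. This is a sketch only: the container name, display number, and PID are assumptions (the real PID would come from docker inspect, and nsenter itself needs root):

```shell
# Sketch: start Xvfb inside the IPC namespace of a running container.
# Container name "guicontainer" and display ":100" are assumptions.
container=guicontainer
display=:100
# At runtime the PID would come from the container's init process:
#   pid=$(docker inspect --format '{{.State.Pid}}' "$container")
pid=12345   # placeholder PID for illustration
cmd="sudo nsenter --ipc --target $pid -- Xvfb $display -screen 0 1024x768x24"
echo "$cmd"   # the command that would be run, requiring root
```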

mviereck changed the title from "Share IPC namespace X server and container to allow MIT-SHM" to "Share IPC namespace of X server and container to allow MIT-SHM" on Jun 13, 2020

matu3ba commented Sep 3, 2020

Does this mean that the option --hostipc ("sets docker run option --ipc=host. Allows MIT-SHM / shared memory. Disables IPC namespacing.") currently does not work at all, or only not with xpra?

From your experience: could this be the cause of 3-4% CPU usage at idle when using xpra on top of X11?


mviereck commented Sep 3, 2020

If you set option --hostipc then MIT-SHM is enabled. However, using --hostipc is discouraged because it degrades container isolation.

> From your experience: Could this be the cause for 3-4% CPU usage on idle, when using xpra on top of X11?

I am not sure. You could try it out with and without --hostipc and see whether it makes a noticeable difference.
As long as no application or window is active and you are not interacting, xpra should be close to idle.


mviereck commented Sep 4, 2020

I did a check here: xpra is at about 0.3% CPU usage when idle, alongside many other background processes.
This doesn't seem unusual.
Which xpra version do you use? Here it is xpra v4.1-r27063.


matu3ba commented Sep 4, 2020

@mviereck xpra v3.0.9-r26127 from the Endeavour repo. That might explain a lot.


mviereck commented Sep 6, 2020

Some first tests with nsenter show that it is possible to share the IPC namespace of a container with an Xvfb running on the host. Unfortunately, nsenter needs root privileges, which might or might not be available.
An alternative would be running Xvfb in a container and sharing its IPC namespace with the desired container, either through an image or one created dynamically with resources from the host.
I am not sure if this is worth the effort. Maybe I should run some performance tests to see whether MIT-SHM makes a significant difference.
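
The container-side alternative could be sketched with standard docker-run flags. `--ipc=container:NAME` is real docker syntax; the image and container names are assumptions, and the X socket itself would additionally have to be shared, e.g. via a common volume:

```shell
# Sketch: Xvfb in one container, GUI container joining its IPC namespace.
# Shown as command strings since docker is likely not available here.
xserver_cmd="docker run -d --name xserver x11docker/xserver Xvfb :100"
# The GUI container shares IPC with the X server container; the X socket
# would still need to be shared too, e.g. with a common /tmp/.X11-unix volume.
gui_cmd="docker run --rm --ipc=container:xserver -e DISPLAY=:100 x11docker/xfce"
printf '%s\n%s\n' "$xserver_cmd" "$gui_cmd"
```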


mviereck commented Jan 12, 2022

Recent beta/master introduces new options to run some X servers in containers of image x11docker/xserver.

The setup consists of three parts:

  • Host with Xorg (or Wayland)
  • Container with X server
  • Container with GUI

This allows sharing the IPC namespace of the X server container with the targeted GUI container.
This in turn allows MIT-SHM.

Current options are --xpra-c, --xpra-c2, --xephyr-c, --xvfb-c. (The naming scheme might change.)
Edit: X in container can now be enabled with option --xc. Currently supported are --xpra, --xephyr, --xvfb, --weston-xwayland.
A special case is option --xpra2, which runs the xpra server in a container but the xpra client on the host. The other supported options run entirely in containers.
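
A hypothetical invocation of the new options, assuming x11docker's usual `x11docker [options] image` syntax; the image name is an assumption, and the command is shown as a string since x11docker is likely not installed here:

```shell
# Sketch: X server in a container (--xc), with xpra as the X server option.
cmd="x11docker --xc --xpra x11docker/xfce"
echo "$cmd"
```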

One possible issue remains: the IPC namespace is not shared with host Xorg, to preserve container isolation. This might cause MIT-SHM issues for Xephyr or the xpra client. However, current test runs have shown neither graphical glitches nor error messages.

@totaam Does the xpra client use MIT-SHM to communicate with the X server it appears on? Can this be influenced with environment variable XPRA_SHM?


totaam commented Jan 13, 2022

> Does the xpra client use MIT-SHM to communicate with the X server it appears on?

xpra uses GTK to mediate most of the communication with the client's display (i.e. the X11 server).
I believe that GTK falls back gracefully and silently when MIT-SHM is not available; painting windows will be much slower, but it will still work.

This should not matter when the client uses OpenGL acceleration: https://github.com/Xpra-org/xpra/blob/master/docs/Usage/Client-OpenGL.md since that uses a different mechanism for uploading texture data to the GPU.
But with the default opengl=auto mode, xpra doesn't use OpenGL for all windows (i.e. transient windows, etc.), so those will remain slow.
You could override some of these checks, e.g. XPRA_NO_OPENGL_WINDOW_TYPES="", but there are other cases:
https://github.com/Xpra-org/xpra/blob/aa8f907e1486928d285eb1f4f3569e99416d6211/xpra/client/gtk_base/gtk_client_base.py#L1274-L1310

> Can this be influenced with environment variable XPRA_SHM?

No.
That's a server side option.


mviereck commented Jan 18, 2022

Thank you for the insights!

x11docker introduces two new options, --xpra2 and --xpra2-xwayland, that circumvent all MIT-SHM issues.
With these options the xpra server and Xvfb (or Xwayland) run together in one container.
The X clients run in a second container.
Both containers share the IPC namespace to allow MIT-SHM inside the containers.
The xpra client runs on the host and accesses MIT-SHM of the host X server. It is connected to the xpra server via a unix socket and shared mmap. This should be the best-performing setup for xpra.

In cases where the xpra client or Xephyr runs in a container with access to host X (which reports MIT-SHM as available even though it is not accessible from the container due to the isolated IPC namespace), containers of x11docker/xserver preload a fake MIT-SHM library with LD_PRELOAD that tells clients that MIT-SHM is not available. This only affects the nested X server accessing host X, not the clients accessing the nested X server. Code taken from: jessfraz/dockerfiles#359 (comment)

So, about two years after the first discussion at xpra.org, I finally consider this ticket solved. :-)
