Could not run with --xpra on Ubuntu 18.04 | Speed up xpra start #167

Closed
rsuhendro opened this issue Jun 18, 2019 · 31 comments

@rsuhendro

It runs fine with --nxagent, --xephyr and --hostdisplay. x11docker-gui runs fine with kaptain.
I'd appreciate any hint.
Thanks

$ x11docker --xpra x11docker/check
x11docker WARNING: User hendro is member of group docker.
That allows unprivileged processes on host to gain root privileges.

x11docker note: Xpra startup is rather slow. For faster startup
with seamless applications, try --nxagent.
If security is not a concern, try --hostdisplay.

x11docker note: Stay tuned, xpra will start soon.

kaptain: Fatal IO error: client killed

@mviereck mviereck added the needinfo (Bug descriptions needs more info) label Jun 18, 2019
@mviereck
Owner

Thank you for the report!

--xpra works well here on Debian buster.
Can you please run again with --xpra and provide the log file ~/.cache/x11docker/x11docker.log at www.pastebin.com?

@totaam

totaam commented Jun 18, 2019

x11docker note: Xpra startup is rather slow. For faster startup

@mviereck why is xpra slow to start? (it shouldn't be - if you want faster startup, use Xvfb instead of Xdummy)

@rsuhendro
Author

@mviereck, thanks for the quick response. I've put the log file: pastebin.com

@mviereck
Owner

@rsuhendro I've found the issue.
x11docker sets the xpra option --modal-windows, which is not available in old versions of xpra.
It is fixed in the x11docker master branch; x11docker now checks whether the option is available (see the sketch below).
You have two possibilities:

  • Update x11docker to the latest master version with x11docker --update-master
  • Update xpra from www.xpra.org to a newer xpra version.
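
A minimal sketch of how such an availability check could look (hypothetical variable name; it assumes the option is listed in the xpra --help output, which is not guaranteed for every build):

  # Only pass --modal-windows if this xpra build knows the option.
  Xpraclientoptions=""
  if xpra --help 2>&1 | grep -q -- "--modal-windows"; then
    Xpraclientoptions="$Xpraclientoptions --modal-windows=no"
  fi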

@totaam
The xpra server takes some time to start up, here about 5 seconds. The xpra client is started afterwards and takes a few additional seconds. nxagent, for comparison, takes about one second.

x11docker already uses Xvfb.

x11docker and I prefer xpra for several reasons, e.g. graphical clipboard support; however, the startup is time consuming.
If you have ideas how to speed up the xpra startup, they are quite appreciated!

xpra server command:

  xpra start :102 --use-display  \
  --start-via-proxy=no \
  --webcam=no \
  --socket-dirs=/home/lauscher/.cache/x11docker/wine-b8e741 \
  --no-daemon --fake-xinerama=no --mdns=no \
  --file-transfer=off --printing=no --notifications=no \
  --start-new-commands=no --dbus-proxy=no --no-pulseaudio \
  --html=off --session-name="wine " --systemd-run=no

xpra client command:

  xpra attach :102  \
  --start-via-proxy=no \
  --webcam=no \
  --socket-dirs=/home/lauscher/.cache/x11docker/wine-b8e741 \
  --title='@title@ [in container]' \
  -z0 --quality 100 \
  --no-speaker --no-pulseaudio \
  --notifications=no \
  --modal-windows=no \
  --dpi=96 --no-clipboard

@mviereck mviereck added the bug label and removed the needinfo (Bug descriptions needs more info) label Jun 18, 2019
@rsuhendro
Author

Wow, you are GREAT!!! It works. Thanks..

@totaam

totaam commented Jun 25, 2019

If you have ideas how to speed up the xpra startup, it is quite appreciated!

With this change: r23025, a default xpra server with only audio turned off and --opengl=noprobe takes about 0.5 seconds to start up on my laptop. That's with an existing vfb display, otherwise starting Xvfb takes about a second, starting Xdummy takes 4 or 5 seconds...
I think half a second is OK.

Things that can slow it down:

  • FYI: with the current development version (upcoming 3.0 release), we enable start-new-commands by default (good that you have it disabled if you don't need it): this loads all the xdg menu data - the first client won't be accepted until the server has finished loading this data (this costs a few seconds)
  • hardware encoders (ie: NVENC): the hardware probing can take a long time with some professional cards (many seconds)
  • audio: you need both microphone=no and speaker=no to turn off the initial audio probing (this can take seconds, it really depends on your gstreamer cache state and what plugins are installed) - I will try to parallelize it

So if you're seeing a startup much slower than this then I am very interested to know why that is.

@mviereck mviereck reopened this Jun 25, 2019
@totaam

totaam commented Jun 26, 2019

FYI: I've created an xpra ticket for this issue: faster server startup

mviereck added a commit that referenced this issue Jun 26, 2019
speed up xpra start with improved logfile check #167
@mviereck
Owner

mviereck commented Jun 26, 2019

So if you're seeing a startup much slower than this then I am very interested to know why that is.

Thank you for looking into this!
Probably it is better to discuss the x11docker xpra setup here - or do you prefer your ticket?

An example command to run xterm from host with xpra:

x11docker --xpra --exe xterm

Generated xpra server command:

  xpra start :102 --use-display \
  --no-daemon \
  --no-speaker --no-pulseaudio --no-microphone \
  --start-via-proxy=no \
  --webcam=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-5c91b1' \
  --fake-xinerama=no \
  --mdns=no \
  --file-transfer=off \
  --printing=no \
  --notifications=no \
  --start-new-commands=no \
  --dbus-proxy=no \
  --html=off \
  --session-name='xterm ' \
  --systemd-run=no

Generated xpra client command:

  xpra attach :102 \
  --no-speaker --no-pulseaudio --no-microphone \
  --start-via-proxy=no \
  --webcam=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-5c91b1' \
  --title='@title@ [in container]' \
  -z0 --quality 100 \
  --notifications=no \
  --modal-windows=no \
  --dpi=96 \
  --no-clipboard

Xvfb is started by x11docker before running the xpra server.
x11docker waits for "xpra is ready" to appear in the server log before it starts the xpra client.
I have now improved the logfile check; the xpra server is available after about 3...4 seconds.
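
Roughly, such a logfile check can be sketched like this (assumed log path and timeout, not the exact x11docker implementation):

  # Poll the xpra server log until xpra reports readiness, with a simple timeout.
  Xpraserverlogfile=~/.cache/x11docker/xpraserver.log    # assumed location
  for i in $(seq 1 100); do
    grep -q "xpra is ready" "$Xpraserverlogfile" 2>/dev/null && break
    sleep 0.1
  done
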
xpra server log:

x11docker [Mi 26. Jun 16:39:09 CEST 2019]: Starting Xpra server
2019-06-26 16:39:10,935 created unix domain socket: /home/lauscher/.cache/x11docker/xterm-8c31ce/buster-102
2019-06-26 16:39:11,038 pointer device emulation using XTest
2019-06-26 16:39:11,206 Warning: no XShm support on display :102
2019-06-26 16:39:11,213 xvfb pid not found
2019-06-26 16:39:13,182 OpenGL is supported on display ':102'
2019-06-26 16:39:13,183  using 'llvmpipe (LLVM 7.0, 128 bits)' renderer
2019-06-26 16:39:13,335 xpra GTK2 X11 version 3.0-r23019 64-bit
2019-06-26 16:39:13,336  uid=1000 (lauscher), gid=1000 (lauscher)
2019-06-26 16:39:13,336  running with pid 7069 on Linux Debian 10 buster
2019-06-26 16:39:13,337  connected to X11 display :102 with 24 bit colors
2019-06-26 16:39:13,429 xpra is ready.
2019-06-26 16:39:14,475 Warning: icon is quite large (92 KB):
2019-06-26 16:39:14,476  '/home/lauscher/.local/share/icons/rosa-icons/72x72/apps/preferences-desktop.svg'
2019-06-26 16:39:14,527 Warning: icon is quite large (238 KB):
2019-06-26 16:39:14,527  '/home/lauscher/.local/share/icons/rosa-icons/72x72/apps/preferences-system-network.svg'
2019-06-26 16:39:15,204 Warning: icon is quite large (48 KB):
2019-06-26 16:39:15,204  '/home/lauscher/.local/share/icons/rosa-icons/32x32/apps/preferences-system.svg'
2019-06-26 16:39:15,240 Warning: icon is quite large (233 KB):
2019-06-26 16:39:15,240  '/usr/share/icons/hicolor/scalable/status/xfce4-power-manager-settings.svg'
2019-06-26 16:39:15,473 Warning: icon is quite large (54 KB):
2019-06-26 16:39:15,473  '/home/lauscher/.local/share/icons/rosa-icons/72x72/apps/preferences-desktop-peripherals.svg'
2019-06-26 16:39:18,386 New unix-domain connection received on /home/lauscher/.cache/x11docker/xterm-8c31ce/buster-102
2019-06-26 16:39:18,392 Handshake complete; enabling connection
2019-06-26 16:39:18,771  mmap is enabled using 256MB area in /run/user/1000/xpra/xpra.0GGn3E.mmap
2019-06-26 16:39:18,774 Python/GTK2 Linux Debian 10 buster x11 client version 3.0-r23019 64-bit
2019-06-26 16:39:18,775  connected from 'buster' as 'lauscher' - 'Lauscher'
2019-06-26 16:39:18,792 setting key repeat rate from client: 500ms delay / 37ms interval
2019-06-26 16:39:18,795 setting keymap: rules=evdev, model=pc105, layout=de
2019-06-26 16:39:18,844 setting keyboard layout to 'de'
2019-06-26 16:39:18,909 New unix-domain connection received on /home/lauscher/.cache/x11docker/xterm-8c31ce/buster-102
2019-06-26 16:39:18,941 waiting for video encoders initialization
2019-06-26 16:39:21,700 watching for applications menu changes in:
2019-06-26 16:39:21,700  '/usr/share/xfce4/applications'
2019-06-26 16:39:21,700  '/usr/local/share/applications'
2019-06-26 16:39:21,700  '/usr/share/applications'
2019-06-26 16:39:21,700  '/usr/share/applications'
2019-06-26 16:39:21,701 6.8GB of system memory
2019-06-26 16:39:21,705  client root window size is 1920x1080 with 1 display:
2019-06-26 16:39:21,706   :0.0 (508x285 mm - DPI: 96x96) workarea: 1871x1044 at 49x36
2019-06-26 16:39:21,706     monitor 2 (344x193 mm - DPI: 141x142)
2019-06-26 16:39:21,747 client @04.999 Xpra GTK2 X11 server version 3.0-r23019 64-bit
2019-06-26 16:39:21,747 client @04.999  running on Linux Debian 10 buster
2019-06-26 16:39:21,753 client @05.002 Attached to socket:///home/lauscher/.cache/x11docker/xterm-8c31ce/buster-102
2019-06-26 16:39:21,755 client @05.003  (press Control-C to detach)
2019-06-26 16:39:21,858 client @05.111 server does not support xi input devices
2019-06-26 16:39:21,859 client @05.112  server uses: xtest

xpra client log:

x11docker [Mi 26. Jun 16:39:13 CEST 2019]: Starting Xpra client
Warning: invalid padding colors specified,
 global name 'PADDING_COLORS' is not defined
 using black
2019-06-26 16:39:16,444 Warning: invalid padding colors specified,
2019-06-26 16:39:16,444  global name 'PADDING_COLORS' is not defined
2019-06-26 16:39:16,444  using black
2019-06-26 16:39:16,745 Xpra GTK2 client version 3.0-r23019 64-bit
2019-06-26 16:39:16,746  running on Linux Debian 10 buster
2019-06-26 16:39:16,747  window manager is 'Xfwm4'
2019-06-26 16:39:16,989 No OpenGL_accelerate module loaded: No module named OpenGL_accelerate
/usr/lib/python2.7/dist-packages/xpra/gtk_common/gtk_util.py:500: GtkWarning: IA__gtk_widget_set_colormap: assertion '!gtk_widget_get_realized (widget)' failed
  window.set_colormap(rgba)
2019-06-26 16:39:17,710 OpenGL enabled with AMD MULLINS (DRM 2.50.0, 4.19.0-5-amd64, LLVM 7.0.1)
2019-06-26 16:39:17,791  keyboard settings: rules=evdev, model=pc105, layout=de
2019-06-26 16:39:17,797  desktop size is 1920x1080 with 1 screen:
2019-06-26 16:39:17,797   :0.0 (508x285 mm - DPI: 96x96) workarea: 1871x1044 at 49x36
2019-06-26 16:39:17,797     monitor 2 (344x193 mm - DPI: 141x142)
2019-06-26 16:39:21,743 enabled fast mmap transfers using 256MB shared memory area
2019-06-26 16:39:21,744 enabled remote logging
2019-06-26 16:39:21,745 Xpra GTK2 X11 server version 3.0-r23019 64-bit
2019-06-26 16:39:21,745  running on Linux Debian 10 buster
2019-06-26 16:39:21,748 Attached to socket:///home/lauscher/.cache/x11docker/xterm-8c31ce/buster-102
2019-06-26 16:39:21,749  (press Control-C to detach)

2019-06-26 16:39:21,857 server does not support xi input devices
2019-06-26 16:39:21,858  server uses: xtest

You could try x11docker yourself. It is just a single bash script and does not need to be installed.
You can download and run it immediately, e.g.:

curl -fsSL https://raw.githubusercontent.com/mviereck/x11docker/master/x11docker | bash -s -- --xpra --debug --exe xterm

This runs the latest x11docker master version with xpra and executes xterm from the host.
Option --debug shows some additional info and the generated xpra and Xvfb commands.
Option --verbose is quite verbose, including the logs of xpra server and client.
You can find xpraserver.log and xpraclient.log in ~/.cache/x11docker.

With this change: r23025, a default xpra server with only audio turned off and --opengl=noprobe takes about 0.5 seconds to start up on my laptop.

The winswitch beta repository currently provides an older version, xpra v3.0-r23019.
With this version --opengl=noprobe does not make an obvious difference. What does --opengl=noprobe do? I would not like to drop OpenGL support.

EDIT:
I've checked how long the xpra client takes until the client window appears: about 7...8 seconds.
Adding 3...4 seconds for the server and 7...8 seconds for the client gives about 11 seconds overall.

@totaam

totaam commented Jun 28, 2019

Probably it is better to discuss the x11docker xpra setup here, or do you prefer your ticket?

I don't mind.
FYI: I've got the server startup code down to ~350ms by turning off: opengl probing, html server, audio and xsettings. (details in the xpra ticket)

Your server startup is slow because of the opengl probing. There is a 2 second delay before the line that says OpenGL is supported on display... opengl=noprobe will fix that. (I could backport this change - it's tiny, and arguably fixes something)

The winswitch beta repository currently provides lower version xpra v3.0-r23019.

I will schedule some new beta builds tomorrow.

With this version --opengl=noprobe does not make an obvious difference. What does --opengl=noprobe do? I would not like to drop OpenGL support.

It just prevents the server from doing the opengl probing. It does not change opengl support in the vfb display itself. This data is not actually used for anything directly, it is only shown on the client's session info dialog and included in xpra info and bug reports.

When the client connects, you are seeing some more delays after waiting for video encoders initialization. I assume that is what it is doing, which is strange because it shouldn't take more than a few hundred milliseconds, even if you had an NVENC-capable card that requires loading pycuda (and cuda and nvenc, etc.).
Can you run with -d all and post that?

Instead of waiting for the server to show xpra is ready, we could teach the client to persevere and retry connecting a few times until it succeeds. This would allow you to start the client much earlier and have it initialize in parallel with the server.

@mviereck
Owner

mviereck commented Jun 28, 2019

Can you run with -d all and post that?

Sure:
xpraserver.log
xpraclient.log

x11docker[291.61]: Xpra server command:
  xpra start :101 --use-display \
  --start-via-proxy=no \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-36d056' \
  --debug=all \
  --dbus-proxy=no \
  --daemon=no \
  --fake-xinerama=no \
  --file-transfer=off \
  --html=off \
  --opengl=noprobe \
  --mdns=no \
  --printing=no \
  --session-name='xterm ' \
  --start-new-commands=no \
  --systemd-run=no

DEBUGNOTE[291.68]: Xpra client command:
  xpra attach :101 \
  --start-via-proxy=no \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-36d056' \
  --debug=all \
  --compress=0 \
  --quality=100 \
  --modal-windows=no \
  --dpi='96' \
  --clipboard=no

Switching to python3 and setting --xsettings=no seems to save about 1...2 seconds.
I want to note that there is already a small delay before the first line in the log file appears. Compare:

$ time xpra --version
xpra for python 2.7 is not installed
 retrying with python3
xpra v3.0-r23019

real	0m0,659s
user	0m0,565s
sys	0m0,096s

Instead of waiting for the server to show xpra is ready, we could teach the client to persevere and retry connecting a few times until it succeeds. This would allow you to start the client much earlier and have it initialize in parallel with the server.

That probably helps a lot!

@mviereck
Owner

mviereck commented Jun 29, 2019

Accidentally the command above was missing --speaker=no --xsettings=no --webcam=no.
New logfiles:
xpraserver.log
xpraclient.log

x11docker[810.98]: Xpra server command:
  xpra start :101 --use-display \
  --start-via-proxy=no \
  --clipboard-direction=both \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-078e98' \
  --speaker=no \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=yes \
  --dbus-proxy=no \
  --daemon=no \
  --fake-xinerama=no \
  --file-transfer=off \
  --html=off \
  --opengl=noprobe \
  --mdns=no \
  --printing=no \
  --session-name='xterm ' \
  --start-new-commands=no \
  --systemd-run=no

DEBUGNOTE[811.05]: Xpra client command:
  xpra attach :101 \
  --start-via-proxy=no \
  --clipboard-direction=both \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-078e98' \
  --speaker=no \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=no \
  --compress=0 \
  --quality=100 \
  --modal-windows=no \
  --dpi='96'

@totaam

totaam commented Jun 29, 2019

I want to note that there is already a small delay before the first line in the log file appears.

Looks like the Debian packages wrongly default to python2:
https://xpra.org/trac/ticket/2343
Thanks for pointing that out.

From your last server log:

  • the server starts logging at:
    2019-06-29 10:10:12,489 get_enabled_encoders(('rencode', 'bencode', 'yaml')) enabled=['rencode', 'bencode']
  • the startup is (mostly) complete at:
    2019-06-29 10:10:16,225 xpra is ready.
  • the client connects much later (6 seconds!):
    2019-06-29 10:10:22,441 New unix-domain connection received on /home/lauscher/.cache/x11docker/xterm-078e98/buster-101

Most of the server delays have been dealt with already in the xpra ticket:
https://xpra.org/trac/ticket/2341
Setting XPRA_UINPUT=0 would save you 0.4s (with xpra trunk only):
http://xpra.org/trac/changeset/23043

This log doesn't show the waiting for video encoders initialization message I was hoping to see. Something must have been done differently.
FYI: in current trunk, the message is now: waiting for initialization thread to complete since it now does more than just codec initialization.

If you are absolutely certain that the connection is going to be using mmap only, you can save quite a lot of CPU time and system memory with:

  • on the server:
    --encodings=rgb --video-encoders=none --csc-modules=none
  • on the client:
    --encodings=rgb --video-decoders=none --csc-modules=none
    This will save even more memory soon:
    https://xpra.org/trac/ticket/2344

2019-06-26 16:39:11,206 Warning: no XShm support on display :102

I know I have asked before (can't find where), but isn't there a way of enabling XShm safely?
Without XShm, the performance is going to suffer.

As for the client:

  • first log appears at:
    2019-06-29 10:10:17,541 get_enabled_encoders(('rencode', 'bencode', 'yaml')) enabled=['rencode', 'bencode']
  • the opengl driver probing takes over 2 seconds on your system:
    2019-06-29 10:10:19,751 OpenGL probe command returned 0 for command=['python3', '/usr/bin/xpra', 'opengl-probe', '-d', 'opengl'] - this may become moot once the client does a parallel start and re-tries to connect, but until then you can force enable opengl or disable it (and lose yet more performance...) - ideally, xpra would cache the opengl probing result:
    https://xpra.org/trac/ticket/2345
  • then it wastes quite a bit of time loading codecs:
    2019-06-29 10:10:20,444 loading codecs
    2019-06-29 10:10:20,749 VideoHelper.init() done
    As per above, most of this can be eliminated using --encodings=rgb etc..
  • the opengl driver takes a bit of time to load (0.3s)
  • and even more to initialize the GL context (0.5s)

The retry to connect feature has a ticket now - should not be too hard:
https://xpra.org/trac/ticket/2346

Not much else that I can see.
It would be interesting to take another look with a newer build and with the tweaks to those command lines.

@mviereck
Owner

mviereck commented Jun 29, 2019

Most of the server delays have been dealt with already in the xpra ticket:

I'll be glad to test it as soon as it appears in the winswitch repository. Current test with:

$ xpra --version
xpra for python 2.7 is not installed
 retrying with python3
xpra v3.0-r23019

I know I have asked before (can't find where), but isn't there a way of enabling XShm safely?
Without XShm, the performance is going to suffer.

Maybe I'll find a way if I investigate further. Docker provides an option --ipc=host that disables IPC namespacing entirely. This allows MIT-SHM, but reduces container isolation too much.
I would expect lsipc -m to show me some information about the X shared memory, but I don't see it. If the shared memory of MIT-SHM had a representation in the file system, I could probably just share it with the container; but currently I don't see a way to do that (a rough illustration follows below).
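
For illustration, the two directions mentioned here look roughly like this (the Docker option is real; the rest is just inspection, and the image name is only an example):

  # Heavy-handed: share the host IPC namespace so MIT-SHM works, at the cost of isolation.
  docker run --ipc=host x11docker/check

  # Inspection: SysV shared memory segments show up here, but there is no per-segment file
  # in the file system that could simply be bind-mounted into the container.
  lsipc -m
  cat /proc/sysvipc/shm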

If you are absolutely certain that the connection is going to be using mmap only, you can save quite a lot of CPU time and system memory with:
on the server:
--encodings=rgb --video-encoders=none --csc-modules=none
on the client:
--encodings=rgb --video-decoders=none --csc-modules=none

I tried it, but the start takes even longer than before, about 3 seconds more.
This seems to be the point:

2019-06-29 20:48:54,904 get_default_cursor=[960, 540, 16, 16, 7, 7, 1, b'\xff\xff\xff\xff ... (cursor pixel data truncated) ...', b'']
2019-06-29 20:48:57,811 out(['python3', '/usr/bin/xpra', 'opengl', '--opengl=yes'])=b'Error: cannot handle window transparency\n screen is not composited\nError: cannot handle window transparency\n screen is not composited\nWarning: window 4294967295 changed its transparency attribute\n from False to True, behaviour is undefined\nGLU.version=1.3\nGLX=1.4\naccum-alpha-size=0\naccum-blue-size=0\naccum-green-size=0\naccum-red-size=0\nalpha-size=0\naux-buffers=0\nblue-size=8\nbuffer-size=24\ndepth=24\ndepth-size=0\ndirect=True\ndisplay_mode=ALPHA, DOUBLE\ndouble-buffered=True\ngreen-size=8\nhas-alpha=False\nhas-depth-buffer=False\nhas-stencil-buffer=False\nlevel=0\nmax-viewport-dims=8192, 8192\nmessage=\nopengl=3.1\npyopengl=3.1.0\nred-size=8\nrenderer=llvmpipe (LLVM 7.0, 128 bits)\nrgba=True\nsafe=True\nshading-language-version=1.40\nstencil-size=0\nstereo=False\nsuccess=True\ntexture-size-limit=8192\ntransparency=True\nvendor=VMware, Inc.\nzerocopy=False\n'
2019-06-29 20:48:57,812 err(['python3', '/usr/bin/xpra', 'opengl', '--opengl=yes'])=b''

xpraserver.log
xpraclient.log

x11docker[20:48:50.035]: Xpra server command:
  xpra start :100 --use-display \
  --start-via-proxy=no \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-33cbef' \
  --speaker=no \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=yes \
  --dbus-proxy=no \
  --daemon=no \
  --fake-xinerama=no \
  --file-transfer=off \
  --html=off \
  --opengl=noprobe \
  --mdns=no \
  --printing=no \
  --session-name='xterm ' \
  --start-new-commands=no \
  --systemd-run=no

DEBUGNOTE[20:48:50.217]: Xpra client command:
  xpra attach :100 \
  --start-via-proxy=no \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-33cbef' \
  --speaker=no \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=no \
  --compress=0 \
  --quality=100 \
  --modal-windows=no \
  --dpi='96'

but until then you can force enable opengl or disable it (and lose yet more performance...)

--opengl=yes seems to save a few seconds. However, I am not sure if I should set it as default or whether there is a stability risk.

With the new delay above the startup of server+client is at about 11 seconds again.

@totaam

totaam commented Jun 29, 2019

I'll be glad to test it as soon as it appears in the winswitch repository.

The buildbot has gone offline, I'll get someone to reboot it tomorrow.

I tried it, but the start takes even longer than before, about 3 seconds more.

Ouch!

This seems to be the point:

That's very unlikely. That's the 2 second delay that the "noprobe" opengl patch fixes.
It was already there, and this subcommand is unaffected by any of the encoding / video settings.
The server startup time is not too far from your previous log sample (5 vs 4 seconds).

  • _XPRA_UINPUT_ID lookup happens after 300ms vs 270ms
  • loading of libc.so.6 happens after 1000ms vs 720ms
  • get_default_cursor happens after 1500ms vs 800ms

The new log sample just seems slower all around, rather than in a specific place.
Are you sure those two runs are 100% comparable?

I think that the difference comes from the client side (9 vs 6 seconds).
This should be fixed by:
https://xpra.org/trac/ticket/2344
What I think is happening here: with the version of xpra you have, when we tell the client not to load some encodings / video decoders, they get loaded anyway but from a different place - one which may turn out to be more expensive.

--opengl=yes seems to speed up a few seconds. However, I am not sure if I should set it as default or if there is a stability risk.

There is a risk of crash if the drivers are buggy. That's why xpra does a test render in a subprocess before it enables opengl - crashes in that subprocess are detected and we disable opengl, details here:
https://xpra.org/trac/ticket/1994
Just like the server opengl probing, executing this subprocess is expensive, which is why I would like to cache the result somehow:
https://xpra.org/trac/ticket/2345

With the new delay above the startup of server+client is at about 11 seconds again.

It is somewhat amusing that the xpra ticket is about shaving off 10ms here and there, but your results are many seconds slower than what I would expect.
We'll definitely be able to get it down to a few seconds.
FYI: there's a new tracker ticket for the client startup time:
https://xpra.org/trac/ticket/2347

@mviereck
Owner

mviereck commented Jul 1, 2019

Are you sure those two runs are 100% comparable?

Oops, I found an important difference: during a kernel update the Xen hypervisor was re-enabled. I've disabled it, and now xpra is faster than before.
Now it takes about 4 seconds for the server and 5 seconds for the client -> 9 seconds overall.

It is somewhat amusing that the xpra ticket is about shaving off 10ms here and there, but your results are many seconds slower than what I would expect.

Yes, it is amusing, and surprising. I don't have a high-end machine, but it is not that bad:
it has 4 cores, 8 GB RAM and an SSD. There aren't many background processes; if I do nothing, the CPU is nearly idle.

_XPRA_UINPUT_ID

Can x11docker set it to a harmless value to speed up older xpra versions?

@totaam

totaam commented Jul 1, 2019

Now it takes about 4 seconds for the server and 5 seconds for the client -> 9 seconds overall.

Parallel start now works:
https://xpra.org/trac/ticket/2346
so this should now be just over 5 seconds overall in your case - which is still a little bit high. My laptop starts the client in 2 seconds, and that's including opengl probing (and just 0.5s with optimized parameters!):
https://xpra.org/trac/ticket/2347#comment:1
Can you post the latest client's -d all output?

Can x11docker set it to a harmless value to speed up older xpra versions?

That won't help. It is the execution of the xprop command that costs time, and with older versions there is no way to skip it.

@mviereck
Owner

mviereck commented Jul 1, 2019

Test with xpra v3.0-r23052:
The startup is significantly faster!
2 seconds for xpra server and 4.5 seconds for the client sums up to 6.5 seconds overall.

Parallel start now works:

Please push it to the winswitch beta repository so I can try it out.

Can you post the latest client's -d all output?

xpraserver.log
xpraclient.log

x11docker[09:58:48,980]: Xpra server command:
  xpra start :102 --use-display \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-81f3e4' \
  --speaker=no \
  --start-via-proxy=no \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=yes \
  --dbus-proxy=no \
  --daemon=no \
  --fake-xinerama=no \
  --file-transfer=off \
  --html=off \
  --opengl=noprobe \
  --mdns=no \
  --printing=no \
  --session-name='xterm ' \
  --start-new-commands=no \
  --systemd-run=no

x11docker[09:58:49,058]: Xpra client command:
  xpra attach :102 \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-81f3e4' \
  --speaker=no \
  --start-via-proxy=no \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=no \
  --compress=0 \
  --opengl=auto \
  --quality=100 \
  --modal-windows=no \
  --dpi='96'

@totaam

totaam commented Jul 1, 2019

Please push it to the winswitch beta repository so I can try it out.

Pushed builds for Buster, Cosmic and Fedora 30.

For the client startup speed, the main culprits are already recorded in the xpra ticket and most of those will be dealt with before too long.
You got one of the client command-line improvements wrong though: you should use video-decoders=none, not video-encoders=none, or even both - that won't hurt. This will save you ~200ms.
If you want to save another 200ms, you can now disable the tray menu icons:
http://xpra.org/trac/changeset/23062
With XPRA_MENU_ICONS=0 xpra attach ..
(I will try to find a way to keep them without slowing down the startup quite so much)

I haven't looked at the server startup speed, but since it is now faster than the client, that's less of an issue.
More to come later.

@mviereck
Owner

mviereck commented Jul 1, 2019

Pushed builds for Buster, Cosmic and Fedora 30.

Thanks! Now with xpra v3.0-r23061

you should use video-decoders=none not video-encoders=none, or even both
With XPRA_MENU_ICONS=0 xpra attach ..
Parallel start now works:

Surprisingly these changes do not save time; the startup is now at about 7 seconds overall.
The xpra client is started immediately after the server, so the parallel startup works so far.

xpraserver.log
xpraclient.log

x11docker[14:03:22,420]: Xpra server command:
  xpra start :102 --use-display \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-a8a2d3' \
  --speaker=no \
  --start-via-proxy=no \
  --video-decoders=none \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=yes \
  --dbus-proxy=no \
  --daemon=no \
  --fake-xinerama=no \
  --file-transfer=off \
  --html=off \
  --opengl=noprobe \
  --mdns=no \
  --printing=no \
  --session-name='xterm ' \
  --start-new-commands=no \
  --systemd-run=no

x11docker[14:03:22,503]: Xpra client command:
  xpra attach :102 \
  --csc-modules=none \
  --clipboard-direction=both \
  --encodings=rgb \
  --microphone=no \
  --notifications=no \
  --pulseaudio=no \
  --socket-dirs='/home/lauscher/.cache/x11docker/xterm-a8a2d3' \
  --speaker=no \
  --start-via-proxy=no \
  --video-decoders=none \
  --video-encoders=none \
  --webcam=no \
  --xsettings=no  --debug=all \
  --clipboard=no \
  --compress=0 \
  --modal-windows=no \
  --opengl=auto \
  --quality=100 \
  --dpi='96'

Two obvious client delays:

2019-07-01 14:03:29,070 tray icon scaled to 22x22
2019-07-01 14:03:32,201 read_parse_thread_loop starting
2019-07-01 14:03:32,202 processing packet hello
2019-07-01 14:03:32,906 glXMakeCurrent: NULL for xid=0x5600031
2019-07-01 14:03:34,228 check_server_echo(21110009) last=True, server_ok=True (last_ping_echoed_time=21110009)
2019-07-01 14:03:35,690 pointer_modifiers(<Gdk.EventMotion object at 0x7f31443bbc28 (void at 0x263f4a0)>)=((1167, 681), (425, 268), [], []) (x_root=1166.858154296875, y_root=680.7616577148438, window_offset=None)
2019-07-01 14:03:35,692 do_motion_notify_event(<Gdk.EventMotion object at 0x7f31443bbc28 (void at 0x263f4a0)>) wid=1 / focus=None / window wid=1, device=Virtual core pointer, pointer=(1167, 681), relative pointer=(425, 268), modifiers=[], buttons=[]
2019-07-01 14:03:35,692 send_mouse_position(['pointer-position', 1, [1167, 681, 425, 268], [], []]) elapsed=21113476, delay left=-21113460

@totaam

totaam commented Jul 1, 2019

The xpra client is started immediatly after the server, the parallel startup works so far.

You should wait for the server to print created unix domain socket: ...
If the socket does not exist when the client first tries to connect, it will exit without retrying. This could theoretically happen if the server takes longer to start than expected.

Two obvious client delays:

  • before read_parse_thread_loop starting, that's just waiting for the server to send its hello.
  • before check_server_echo, there's just nothing happening I think, the client was already active at that point.

New notes, client side:

  • setting XPRA_ICON_OVERLAY=0 xpra attach may save you 100ms, at the cost of removing the xpra logo overlay on all forwarded system trays and window icons, you can also do: xpra attach --env=XPRA_ICON_OVERLAY=0 ...
  • your OpenGL initialization takes a long time! (1.5 seconds from init_opengl(auto) to OpenGL enabled with AMD MULLINS ..) - not much we can do about that
  • setting XPRA_EXPORT_ICON_DATA=0 xpra attach will save you another 100ms, and won't cost you anything since the local connection is fast enough to not need to use client-side icon data

Server side:

  • you're still probing for the _XPRA_UINPUT_ID window property which costs you ~500ms, if setting environment variables before starting the server is too difficult to add, you can also use this form: xpra start --env=XPRA_UINPUT=0 ...
  • I have fixed a start-new-commands bug, costing you quite a lot:
    https://xpra.org/trac/ticket/2341#comment:9

The rest looks pretty clean, albeit much slower than on my 4 year old laptop!

The latest Buster builds I have just uploaded should work much better for you.

@mviereck
Owner

mviereck commented Jul 1, 2019

Great! Many thanks for your effort.

The server startup until "xpra is ready" takes close to 2 seconds now.
The xpra client is started shortly before that and needs about 3.5 seconds.
Overall the client window appears 5 seconds after starting the server.
That is pretty good and a great improvement compared to 11...14 seconds before.

x11docker now checks the xpra release number. If it is below r23066, it shows:

x11docker note: Xpra startup can be slow. For faster startup
  with seamless applications,   try --nxagent.
  If security is not a concern, try --hostdisplay.
  xpra version v3.0-r23066 and higher starts up faster.

I'll include a recommendation to update from www.xpra.org once it becomes a stable release.
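
A rough sketch of such a release-number check, based on the xpra --version output shown earlier in this thread (not the exact x11docker code):

  # Extract the rNNNNN revision from 'xpra --version', e.g. 'xpra v3.0-r23019' -> 23019,
  # and compare it against r23066.
  Xprarevision="$(xpra --version 2>/dev/null | grep -oE 'r[0-9]+' | head -n 1 | tr -d r)"
  if [ "${Xprarevision:-0}" -lt 23066 ]; then
    echo "x11docker note: Xpra startup can be slow."    # plus the rest of the note above
  fi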

You should wait for the server to print created unix domain socket: ...
If the socket does not exist when the client first tries to connect, it will exit without retrying.

OK, that is implemented now. However, this somehow misses the point of waiting for availability: I'd say the client should also wait for the socket and repeatedly check for it, roughly as sketched below.
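
A minimal sketch of such a client-side wait (assumed socket path and timeout; just an illustration of the idea, not x11docker code):

  # Wait until the server's unix domain socket exists before running 'xpra attach'.
  Xprasocket=/home/lauscher/.cache/x11docker/xterm-3152ed/buster-102    # example path
  for i in $(seq 1 50); do
    [ -S "$Xprasocket" ] && break
    sleep 0.2
  done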

if setting environment variables before starting the server is too difficult to add

That is no problem.

Current environment variables for xpra server:

GDK_BACKEND=x11 XPRA_OPENGL_DOUBLE_BUFFERED=1 XPRA_UINPUT=0

Current environment variables for xpra client:

NO_AT_BRIDGE=1 XPRA_MENU_ICONS=0 XPRA_ICON_OVERLAY=0 XPRA_EXPORT_ICON_DATA=0
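
For illustration, one way to apply these is to prefix them to the respective commands (remaining options elided):

  env GDK_BACKEND=x11 XPRA_OPENGL_DOUBLE_BUFFERED=1 XPRA_UINPUT=0 \
    xpra start :102 --use-display ...
  env NO_AT_BRIDGE=1 XPRA_MENU_ICONS=0 XPRA_ICON_OVERLAY=0 XPRA_EXPORT_ICON_DATA=0 \
    xpra attach :102 ...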

xpraserver.log
xpraclient.log
(Same commands as in previous post.)


On a complete tangent: Do you have any thoughts on the Xvfb command?

  /usr/bin/Xvfb :103 -screen 0 1920x1080x24 \
  -dpms -s off -retro \
  +extension RANDR +extension RENDER +extension GLX \
  +extension XVideo +extension DOUBLE-BUFFER \
  -extension X-Resource +extension SECURITY +extension DAMAGE \
  -extension XINERAMA -xinerama -extension MIT-SHM \
  -auth /home/lauscher/.cache/x11docker/xterm-3152ed/Xservercookie \
  -nolisten tcp \
  +extension Composite +extension COMPOSITE \
  +extension XTEST -dpi 96

We already talked about MIT-SHM; it would increase performance if I could provide the shared memory to containers.
I am not entirely sure if all the other extensions make sense, especially XVideo, DOUBLE-BUFFER, X-Resource and DAMAGE.

@mviereck mviereck changed the title Could not run with --xpra on Ubuntu 18.04 Could not run with --xpra on Ubuntu 18.04 | Speed up xpra start Jul 1, 2019
@totaam

totaam commented Jul 1, 2019

Overall the client window appears 5 seconds after starting the server.

I reckon we could still get this number down to around 2 seconds, but this is a case of diminishing returns and I have to move on to other things.

However, this somehow misses the point of waiting for availability. I'd say, the client should also wait for the socket and repeatedly check for it.

Done:
https://xpra.org/trac/ticket/2346#comment:2

Current environment variables for xpra server:

  • GDK_BACKEND=x11 - no longer needed, but doesn't hurt either - for now anyway
  • XPRA_OPENGL_DOUBLE_BUFFERED=1 - should not be needed
  • XPRA_UINPUT=0 OK

On a complete tangent: Do you have any thoughts on the Xvfb command?

Not really. The preferred default for Xpra is Xdummy so Xvfb is not getting as much testing.
Xpra needs: RANDR, DAMAGE, XTEST and Composite. MIT-SHM is for the XShm acceleration.
Applications started by xpra normally have the libfakeXinerama shared library injected, so I'm not 100% sure if XINERAMA makes a difference - can't hurt to have it.
Many applications will need RENDER and X-Resource, some may use GLX (OpenGL).
That only leaves:

  • XVideo - not sure how much use this is without a GPU
  • SECURITY - for xauth, doesn't hurt
  • DOUBLE-BUFFER - not sure how that interacts with damage; does this affect performance? would be worth looking into

@mviereck
Owner

mviereck commented Jul 2, 2019

GDK_BACKEND=x11 - no longer needed, but doesn't hurt either - for now anyway

This is to avoid a server startup failure on Wayland, compare https://xpra.org/trac/ticket/2243#comment:3

XPRA_OPENGL_DOUBLE_BUFFERED=1 - should not be needed

This fixed an issue in previous xpra versions: https://xpra.org/trac/ticket/1469#comment:8
I keep it for backwards compatibility. However, I just noticed it should be set for the client instead. I'll fix that.

DOUBLE-BUFFER not sure how that interacts with damage - does this affect performance? would be worth looking into

I am not sure about it either. For sure it doubles the amount of memory needed. Xorg docs: https://www.x.org/releases/X11R7.7/doc/libXext/dbelib.html

I'm not 100% sure if XINERAMA makes a difference - can't hurt to have it.

XINERAMA is disabled above with -extension XINERAMA.

Many applications will need RENDER and X-Resource

X-Resource is disabled, too. So it might be better if x11docker enables it.

this is a case of diminishing returns and I have to move on to other things.

Of course. Much thanks for everything!

@totaam

totaam commented Jul 2, 2019

This to avoid server startup failure on Wayland, compare ...

This bug has been fixed and was only ever present in a very limited set of beta builds. It's best not to force the GDK_BACKEND value because we will one day support the native wayland client fully (looks unlikely for 3.0) and this workaround would force it back to use X11 instead.

XPRA_OPENGL_DOUBLE_BUFFERED
I keep it for backwards compatibility. However, I just see it should be set for the client instead. I'll fix that.

Hmm, be careful with overriding the defaults in xpra: this fix had been backported to all supported versions. (and as usual, never applied to the Debian packages since they're never updated no matter what serious crasher bugs are fixed upstream.. because "stable" or whatever silly excuse they're using to justify this awful mess - end of rant)
The problem is that we're now moving to python3 as the default interpreter, and the code does not enable double-buffering there at the moment (except on MS Windows). So you're making your users run an unsupported configuration... I understand why (because the Debian packages are broken without this fix).
This will be re-tested before the 3.0 release, and hopefully we can enable this everywhere and all will be well:
https://xpra.org/trac/ticket/2350
FYI: sadly, the opengl probe we do (which is the cause of the biggest startup delay in the client in this very ticket) cannot detect when rendering does not hit the screen, only when it is so buggy that it fails completely.. so it doesn't help us there.

X-Resource is disabled, too. So it might be better x11docker enables it.

I was wrong: this is only useful for debugging with xrestop.
So not needed for regular users.

@mviereck
Owner

mviereck commented Jul 2, 2019

The problem is that we're now moving to python3 as the default interpreter, and the code does not enable double-buffering there at the moment (except on MS Windows). So you're making your users run an unsupported configuration...

Thank you for pointing that out!
I could check the xpra version and disable the environment variable for new versions.
Would it be safe if I set it for <3.0 only?

This bug has been fixed and was only ever present in a very limited set of beta builds. It's best not to force the GDK_BACKEND value because we will one day support the native wayland client fully (looks unlikely for 3.0) and this workaround would force it back to use X11 instead.

It seems you misunderstood me: the xpra server crashes if GDK_BACKEND=wayland is set, e.g. by the desktop environment.
I am testing the xpra client, too, it partially works on Wayland. I'll report in your bug tracker.

I was wrong: this is only useful for debugging with xrestop.
So not needed for regular users.

Thanks! I'll disable it again.

mviereck added a commit that referenced this issue Jul 2, 2019
@totaam

totaam commented Jul 2, 2019

Would it be safe if I set it for <3.0 only?

Sort of. Unfortunately, some distributions have started shipping python3 versions of xpra based on the 2.5.x branch.

The xpra server crashes if GDK_BACKEND=wayland is set, e.g. by the desktop environment.

Ah, gotcha. We will now override it too:
http://xpra.org/trac/changeset/23089
The wayland native server support is even further out:
https://xpra.org/trac/ticket/387

@mviereck
Owner

mviereck commented Jul 2, 2019

Sort of. Unfortunately, some distributions have started shipping python3 versions of xpra based on the 2.5.x branch.

I found in https://xpra.org/trac/ticket/1469#comment:10

In 2.1 onwards, we will now use double-buffering by default on all platforms

So x11docker should be safe if it sets XPRA_OPENGL_DOUBLE_BUFFERED=1 for <2.1 only.
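
For illustration, the conditional could be sketched like this, assuming the major.minor version has already been parsed into a hypothetical variable:

  # Hypothetical: only set the double-buffering workaround for xpra versions below 2.1.
  case "$Xpramajorminor" in
    0.*|1.*|2.0) export XPRA_OPENGL_DOUBLE_BUFFERED=1 ;;
  esac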

But anyway, is there a way to check whether xpra uses python3? So far I only find this piece of information in xpra info, but that only works if the xpra server is already running. I don't get this info from xpra --version or xpra showconfig.
This might help in the future to decide whether the xpra client is able to run on Wayland.

@totaam

totaam commented Jul 3, 2019

So x11docker should be safe if it sets XPRA_OPENGL_DOUBLE_BUFFERED=1 for <2.1 only.

Yes.

But anyway, is there a way to check if xpra uses python3?

I can't think of one. Something like this would likely be unreliable:

$ head -n 1 /usr/bin/xpra | cut -b 3- | awk '{print $1" --version"}'
/usr/bin/python3 --version
$ /usr/bin/python3 --version
Python 3.7.3

@mviereck
Owner

mviereck commented Jul 3, 2019

I can't think of one. Something like this would likely be unreliable:

Yes, it is unreliable. Currently I can install the xpra package along with either python2-xpra or python3-xpra; /usr/bin/xpra contains #!/usr/bin/python3 in both cases.

Will there be a python2-xpra in the stable v3.0 release?
Currently xpra on Wayland is not ready. Could you implement some sort of python2/3 check? If x11docker runs a version that does not provide this check, it knows that it should not use the Wayland client.
However, a hint about Wayland support in xpra --help could do this job, too.

@totaam

totaam commented Jul 3, 2019

/usr/bin/xpra contains #!/usr/bin/python3 for both cases.

This file belongs in the xpra package which is common to both python2-xpra and python3-xpra.
It will try python3 first then re-exec with python2 if that is missing.
Version 2.5 was doing the opposite: trying with python2 first.

Will there be a python2-xpra in stable v3.0 release?

Yes.
Python 2 will only be dropped after the 3.0 LTS release.

Currently xpra on Wayland is not ready.

Yes, developing for wayland is a mess.

Could you implement a sort of check for python2/3? If x11docker runs a version not providing this check, it knows that it should not use the Wayland client.

I don't understand this bit.

@mviereck
Owner

mviereck commented Jul 3, 2019

Yes, developing for wayland is a mess.

:-D
I am glad that it basically works already. I did not think xpra would reach its current state that fast.

Could you implement a sort of check for python2/3? If x11docker runs a version not providing this check, it knows that it should not use the Wayland client.

I don't understand this bit.

Normally x11docker checks the host environment and installed dependencies like xpra and Xephyr and automatically decides which X server matches the requirements best and should be started.
E.g. xpra is the default for seamless applications on X11; alternatively nxagent and other fallbacks are possible.
(Of course, a user can specify an option like --xpra, --nxagent or --xephyr instead.)

If a user wants to run an X application in a pure Wayland environment without Xwayland, xpra will be the best choice. However, x11docker somehow needs to know if the xpra client supports Wayland.


Btw., if you like, you can use x11docker to test out Wayland setups.
E.g. this will provide you a Weston window and a terminal without X:

x11docker --wayland --weston --exe xfce4-terminal

Or for kwin_wayland:

x11docker --wayland --kwin --exe xfce4-terminal

With Wayland on host, e.g. Gnome3:

x11docker --wayland --hostwayland --exe xfce4-terminal

In those setups I try xpra on Wayland, e.g.

x11docker --xpra --exe xterm
