
X11 is horribly insecure, each subuser should get its own X11 server in a container. #31

Closed
timthelion opened this issue Feb 15, 2014 · 11 comments

@timthelion
Contributor

The contained X11 servers should be accessible to the main X11 server via xpra (xpra.org), VNC, or Wayland.

@timthelion timthelion added this to the 1.0 milestone Feb 15, 2014
@timthelion
Contributor Author

See #29

@timthelion
Contributor Author

Rather than VNC, we will likely use X11 over SSH with the X11 SECURITY extension enabled.

@Sepero
Contributor

Sepero commented Jun 15, 2014

I've tested running containers via ssh vs docker -e DISPLAY. When it comes to displaying graphics, videos, or Flash on YouTube, ssh has repeatedly shown noticeably worse performance. Unless there are ways to dramatically improve the connection with ssh, it's a poor solution for web browsing in my opinion. One of the primary reasons I chose to use subuser is that it doesn't have the poor performance of ssh.

@Sepero
Contributor

Sepero commented Jun 15, 2014

On the other hand, security is a primary goal for subuser. Also, using ssh could allow an admin to start sessions while multiple restricted users connect to them. These are very good reasons for using ssh.

If using it as a system for multiple sessions, then the name 'subuser' might become a relic. :)

Personally, I want to use docker on a single user machine, and for untrusted apps like skype and flashplayer. So my concerns tend to lean a little heavier towards performance over security. Given that subuser was designed for the purpose of security, ssh (or some other secure solution?) is probably the way to go.

@timthelion
Contributor Author

@Sepero luckily, we can always provide both options.

@ToBeReplaced

Alternatively, you could experiment with Wayland and running X11 applications inside of XWayland.

@timthelion
Contributor Author

@ToBeReplaced I initially got the idea for subuser when I watched a video presentation on Wayland and learned that it allows for X11 server isolation. I think that when Wayland gets popular it may be a great solution for this problem. However, X11 over SSH is here today, and so I want to support that too.

@timthelion timthelion modified the milestones: 0.3, 1.0 Oct 7, 2014
@timthelion timthelion modified the milestones: 0.4, 0.3 Jun 25, 2015
@timthelion timthelion changed the title X11 is horribly insecure, we should invest in a VNC variant. X11 is horribly insecure, each subuser should get it Jun 25, 2015
@timthelion timthelion changed the title X11 is horribly insecure, each subuser should get it X11 is horribly insecure, each subuser should get its own X11 server in a container. Jun 25, 2015
@timthelion
Contributor Author

So in order to fix this bug, we need to launch a service (the contained X11 server with an xpra server or SSH server) when a client (a subuser process) starts, and then stop that container when all clients have stopped:


Unfortunately there are two race conditions that a naive implementation faces.

  1. What happens when the service is not running and two clients are created at once? We don't want two instances of the service starting up.


  2. What happens when the last client stops at the same time as a new client is started? We don't want a client which was counting on connecting to a currently running service to get left behind.


So what is the best way of avoiding this kind of race?
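The first race is the classic check-then-act problem. A toy Python sketch (hypothetical code, just to illustrate; the barrier stands in for unlucky timing) shows two clients both deciding to start the service:

```python
import threading

service_running = False  # shared state that a naive implementation checks
starts = []              # records every time "the service" gets launched
barrier = threading.Barrier(2)

def naive_client(name):
    global service_running
    running = service_running  # step 1: check whether the service is up
    barrier.wait()             # force both clients to check before either acts
    if not running:            # step 2: act on a now-stale answer...
        starts.append(name)    # ...so both clients launch the service
        service_running = True

threads = [threading.Thread(target=naive_client, args=(n,))
           for n in ("client-1", "client-2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
# len(starts) == 2: the service was started twice
```

The check and the act have to happen atomically, under some form of mutual exclusion, to close this gap.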

@timthelion
Contributor Author

One way to avoid such race conditions would be to have an always-on subuser daemon. The daemon would be single-threaded and read connection and disconnection signals one by one from a socket. Perhaps it would sometimes stop the X11 container for a given subuser and then immediately restart it, but this wouldn't happen very often and wouldn't really matter. The biggest problem with this approach is that always-on daemons are evil and should be avoided at all costs, because they waste RAM and add to runtime system bloat :/.

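A minimal sketch of that daemon idea (all names here are hypothetical, not subuser's actual API). Because a single thread handles signals strictly one at a time, the start/stop decisions cannot race:

```python
def start_x11_container():
    # hypothetical helper: would `docker run` the contained X11/xpra server
    print("starting X11 server container")

def stop_x11_container():
    # hypothetical helper: would stop that container
    print("stopping X11 server container")

class SubuserDaemon:
    """Single-threaded by design: signals are processed one by one,
    so the start/stop decisions cannot interleave."""

    def __init__(self, start=start_x11_container, stop=stop_x11_container):
        self.clients = 0
        self.start = start
        self.stop = stop

    def handle(self, signal):
        if signal == "connect":
            if self.clients == 0:
                self.start()
            self.clients += 1
        elif signal == "disconnect" and self.clients > 0:
            self.clients -= 1
            if self.clients == 0:
                self.stop()

# In a real daemon these signals would arrive one by one over a socket;
# here we just replay a sequence in order:
daemon = SubuserDaemon()
for signal in ["connect", "connect", "disconnect", "disconnect"]:
    daemon.handle(signal)
```

The serialization is what buys the correctness; the cost is exactly the always-on process objected to above.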

@timthelion
Contributor Author

So rather than using an always-on daemon, each subuser application which needs its own X11 server will get a lock file which records the number of running processes. Since the lock file can only be held by one client at a time, this enforces "single threaded-ness" and therefore does not suffer race conditions.


The logic is that when a client is first created, it opens the lock file and increments the counter. If the counter was zero before the increment, it starts the X11 server container. It then connects to the X11 server container.

When a client closes, the X11 server container opens the lock file and decrements the counter. If the counter is zero after the decrement, then it stops itself. The only problem with the second part of this logic is that the X11 server container probably has no way of being informed when a client container stops. This probably means that we need an extra process outside of the X11 server container, running on the host system, which is able to monitor the docker daemon or otherwise find out when client containers stop. This adds a bit of extra complexity to the whole system, and probably a bit of overhead as well. But I believe it will be well worth it in the end, because the world really does not need any more always-on daemons.
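The counter logic above can be sketched like this, assuming Linux and `flock(2)`-style advisory locks (function names are hypothetical, not subuser's actual API):

```python
import fcntl
import os

def _read_count(f):
    f.seek(0)
    data = f.read().strip()
    return int(data) if data else 0

def _write_count(f, count):
    f.seek(0)
    f.truncate()
    f.write(str(count).encode())
    f.flush()

def increment(lock_path):
    """Bump the client count under an exclusive lock.
    Returns True when the count went 0 -> 1, i.e. the caller
    should start the X11 server container."""
    fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
    with os.fdopen(fd, "r+b") as f:
        fcntl.flock(f, fcntl.LOCK_EX)  # serializes all clients
        count = _read_count(f)
        _write_count(f, count + 1)
        return count == 0              # lock is released on close

def decrement(lock_path):
    """Drop the client count under an exclusive lock.
    Returns True when the count went 1 -> 0, i.e. the X11
    server container should now be stopped."""
    fd = os.open(lock_path, os.O_RDWR | os.O_CREAT, 0o600)
    with os.fdopen(fd, "r+b") as f:
        fcntl.flock(f, fcntl.LOCK_EX)
        count = _read_count(f)
        _write_count(f, max(count - 1, 0))
        return count == 1
```

Holding the exclusive lock across the read-modify-write makes the check and the update one atomic step, which is exactly what closes both races.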

@timthelion
Contributor Author

Fixed, yay!!!!

  • Posted from iceweasel via the new xpra X11 bridge!
