Remove lxc exec driver #5797
Conversation
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This is not needed/used anymore without the lxc driver.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
This is not needed anymore; only the lxc driver used it.
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
Docker-DCO-1.1-Signed-off-by: Alexander Larsson <alexl@redhat.com> (github: alexlarsson)
+1. Since we finally added namespaces to libcontainer, there really is no need to use LXC.
On Wed, May 14, 2014 at 06:57:38AM -0700, Alexander Larsson wrote:
I know nothing about the code complexity of the native driver or the
@wking The reason that the native driver was written was so that we can interface directly with the
@wking Fundamentally, a lot of the things we do and want to do require more flexibility than a template setup can support, especially since we have very little control over what is inside the container. (I.e. the way you set up things in the container with lxc is to use triggers that run some script, but with a docker image there might not even be a shell in the container.) So, I don't think using the current lxc is possible.
The LXC driver has user namespaces implemented and libcontainer does not. In my opinion I would not remove the LXC driver until libcontainer has support for it.
@alexlarsson We have a very simple implementation of user namespaces using the LXC driver without support for shared volumes at https://github.com/tutumcloud/docker/tree/userns |
Ping @shykes @crosbymichael |
@fermayo On a side note, have you thought about contributing your lxc user namespace patch? We cannot support features in a fork. I also agree that we need this support in libcontainer; we have a few people experimenting with it right now.
@crosbymichael the work I've done in that fork has some big limitations (it doesn't support volumes, has UIDs hardcoded, and isn't compatible with pre-existing downloaded images, as it does the UID/GID translation at
@alexlarsson do you know if @mrunalp has made any more progress with userns for libcontainer? |
@crosbymichael I am on PTO for a bit. Hoping to get back to it in 2-3 days. Do you have any further comments on the prototype at mrunalp@78cff10? Do you think using SYS_CLONE is okay, or should we go the route of launching another executable that execs the user command?
@mrunalp I'll take another look and play around again. If we fork and then exec right after in the process, we may be fine. I'm sure @alexlarsson could help port everything to raw syscalls ;)
One thing that the lxc driver has over libcontainer is the flexibility of --lxc-conf, which does not currently have a counterpart in libcontainer.
@cpuguy83 We used to have lots of options similar to --lxc-conf via a generic -o option. However, that was removed (temporarily) to make sure the UI is right for this. So this lack of flexibility is not inherent to a non-lxc backend; it is rather an accident, and it's kind of "cheating" to count it as an advantage of lxc. In fact, --lxc-conf is part of the problem. We've decided it may not be a good UI, but we still have to keep it for lxc, and if lxc makes it into 1.0 then we have to essentially support it forever. I don't think supporting lxc forever is a great idea, especially with people using it in combination with various --lxc-conf hackery. Keeping such things working over time is going to become increasingly problematic.
@alexlarsson 100% agree, but we can't remove LXC support until libcontainer has this functionality.
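For context, --lxc-conf passed raw template keys straight through to the generated config; historical usage looked roughly like this (illustrative only, requires the lxc exec driver):

```
docker run --lxc-conf="lxc.cgroup.cpuset.cpus=0,1" ubuntu /bin/echo hello
```

This is precisely the kind of backend-specific escape hatch that, once people depend on it, has to keep working for as long as the backend exists.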
On Wed, May 14, 2014 at 07:45:32AM -0700, Aleksa Sarai wrote:
And since there are already LXC devs not working on Docker's native

On Wed, May 14, 2014 at 07:50:25AM -0700, Alexander Larsson:
This sounds more convincing to me. So something like:
Do you have examples of $some_feature and $some_task to fill in that
@wking Well, for example, we're working on having each container have a long running supervisor process. The docker daemon would talk to this process via beam sockets, and the supervisor would be responsible for spawning and monitoring the main process in the container. Additionally, it will support the supervisor spawning plugin processes at various stages of container creation. For such a setup we would need the supervisor to partially set up the container, then run any arbitrary plugins, and then finish the container setup switching to a non-privileged mode to spawn the container main process. Something like that just is not doable via some scripting hackery. |
@cpuguy83 Just to make it clear: @wking, I'm not sure how closely you've followed the discussions around the native backend, LXC, and the problems LXC is causing. Are you aware of the countless problems we've discovered LXC was causing? We've had to test various LXC versions with multiple kernels to find the best one (the one causing the least trouble) and spend a lot of time debugging problems caused by LXC itself. Different versions of LXC were being packaged across operating systems, and this was yet another problem. Making sure your software is up to date across Linux distributions requires coordination, dedication, and a lot of effort from all parties involved.

Libcontainer and the native execution driver can be bundled into Docker, thus eliminating the dependency on LXC. The native execution driver based on libcontainer doesn't represent an "extension" to LXC; it is an interface to the exact same kernel features used by LXC. Libcontainer is written in Go. Asking us to "push your extensions upstream to LXC" doesn't make sense because libcontainer and the native execdriver which uses it aren't "plugins", "extensions" or "addons" for LXC. Libcontainer and the native driver based on it both use features found in the Linux kernel, just like LXC.
We discussed this and we are not ready to remove the lxc driver right now. However, we need help from the authors or the community to maintain this driver if people want to continue to use it in docker. A few requirements for maintaining the driver are to port it from shelling out to

If no one steps up with a real commitment (meaning actual code, not just talk) to support this, then we will drop it from the project. For now, we will add a warning saying that this driver is currently unmaintained.
On Thu, May 15, 2014 at 09:12:42AM -0700, Alexander Larsson wrote:
That makes lots of sense, thanks :)

On Thu, May 15, 2014 at 09:15:44AM -0700, unclejack:
Not at all ;). In fact, I haven't even had time to upgrade and follow
“This software is full of bugs, let's start over” works sometimes, but
I get this, and I think telling folks that “Docker works best with LXC
When you can factor your problem into layers, I like dependencies. Anyhow, I have a much clearer understanding of why the LXC driver is
I am real sorry to bring up i386 support, but lxc driver is the only way to use docker 0.11.1 with i386 in my so far brief, but successful testing. I came here from #5242 and I'm glad to see that lxc support is here to stay for now. |
This removes the lxc exec driver and the -lxc-conf option, as well as some now unused things.
This pull request is mostly a suggestion and a request for comments, but I think it makes a lot of sense.
So, why remove the lxc driver?
At the core, the lxc driver and the native driver do the same thing. They rely on the same kernel features and set them up in more or less the same way. The only difference is that in the lxc backend this happens by writing an lxc template file that describes what we want and then asking lxc to do it. The problem with this is that whenever we need to do something during container setup that the lxc templates don't support, we run into problems (we'll never have the opposite problem: anything lxc adds we can also do in the native driver). Also, we have to maintain two completely different codebases that do the same thing.
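For readers who haven't seen one, the generated lxc template is a flat key/value file; a trimmed, illustrative fragment (key names from the lxc 0.x/1.0 era, container id and paths hypothetical):

```
lxc.network.type = veth
lxc.network.link = docker0
lxc.rootfs = /var/lib/docker/containers/<id>/rootfs
lxc.cgroup.memory.limit_in_bytes = 536870912
lxc.mount.entry = /etc/resolv.conf etc/resolv.conf none bind,ro 0 0
```

Anything not expressible as such keys (or as lxc's hook scripts) is out of reach for the driver.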
Since it's hard to keep up maintainership of the lxc backend, it's already starting to diverge slightly in behaviour. For instance, it does not use the new /dev tmpfs that the native driver does (it can't, because that requires custom setup the templates don't support). It also doesn't create bind-mount targets that don't exist, which means that volumes don't work in places like /run or /dev (if the previous bug was fixed).
There is also ongoing work to have a separate supervisor process for each container, and the work involved in this will require various changes in how containers are set up that will be very hard to implement using lxc.
Furthermore, if we remove lxc we remove the requirement that docker has to be statically linked, which makes building docker a lot easier.