First class support for Docker containers #175
Given the current maturity of local Docker solutions, for many projects developing against a local container offers more benefits than drawbacks. Not only does a developer benefit from an execution environment that closely matches the production one, it also helps align it with the CI environment, which is a common source of pain, especially when tuning a build procedure.
Moreover, with the current trend of having many small projects that map to the microservice architectures running in production, it becomes very important to improve the on-boarding experience for new hires or existing company personnel, by isolating arbitrarily complex build schemes from the peculiarities of personal computers.
Developers who work on different projects, probably based on different technologies, see containerization as a means to simplify their development setup, but they won't give up a quick feedback loop while developing.
Subjectively, it's hard to grasp why a tool specifically designed to simplify development with container orchestrators doesn't offer great support for their atomic units. It certainly came as a surprise to me since, like many others I've met, I started working with containers locally before stepping into tools like Kubernetes.
@itamarst proposed a clever approach that solves the issue at a lower level; it lacks a great UX on its own, but it's easy enough to mechanise if technically viable.
Building on top of Docker's support for sharing a network namespace among different containers (similar to Kubernetes' Pod concept), it becomes possible to isolate the telepresence VPN setup from the developer's container. This is a great solution since it not only hides the magic behind the UX, it actually offers guarantees that the proxying setup won't interfere with your own program.
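In Docker terms, the namespace-sharing trick can be sketched roughly as follows. This is a hand-written illustration, not Telepresence's actual tooling, and both image names (`example/telepresence-proxy`, `my-dev-image`) are hypothetical placeholders:

```shell
# Start a proxy container that owns the network namespace and runs the
# VPN/proxying machinery (sshuttle etc.). Image name is hypothetical.
docker run -d --name telepresence-proxy example/telepresence-proxy

# Run the developer's container inside that same network namespace,
# Kubernetes-Pod style: all its traffic now flows through the proxy,
# while its filesystem and process space stay isolated from it.
docker run --rm -it --network=container:telepresence-proxy my-dev-image sh
```

The `--network=container:<name>` mode is what makes the second container share the first one's network stack, which is why the proxying can't step on the developer's own program.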
Proof of concept
After putting together a quick test against a Kubernetes cluster on AWS and against minikube, the approach seems to work reliably, and it even solves some of the caveats of running the proxying directly on the host machine:
The solution is really simple and elegant; it just requires a proper UX to make it usable at the same level as the other approaches.
Some random thoughts about how this could be implemented:
Note that while the proposal mechanizes the proof of concept approach, perhaps a more sensible solution would be to do all the
Notes on what gets proxied:
Notes on UX:
Notes on implementation:
About the UX, perhaps it should follow other tools that modify a program's behaviour, like
The container case is a bit weird since it'll have to modify the arguments to inject some extra flags for Docker, but it removes the coupling between
Regarding what to run inside the container, I think sshuttle, sshfs/fuse, and ssh should all run inside the container; this removes almost all requirements from the host system (perhaps even working on plain Windows?), which is one of the core benefits of this approach.
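As a rough sketch of such a proxy image — the base image and package names are assumptions based on what Debian ships, not anything published by Telepresence:

```dockerfile
# Hypothetical proxy image: everything needed for the VPN and mount setup
# lives inside the container, so the host only needs Docker itself.
FROM debian:stable-slim
RUN apt-get update \
 && apt-get install -y --no-install-recommends \
        openssh-client sshfs sshuttle \
 && rm -rf /var/lib/apt/lists/*
```

If this works, the host-side footprint shrinks to "has Docker installed", which is exactly the portability win described above.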
Oh, right, sshfs. That raises more issues:
Plus, there's environment variables, which in fact was the reason why I had
That means we have to have Telepresence run the user container, because it needs to set up environment variables, volumes, and so on.
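One way to picture "Telepresence runs the user container" is as a wrapper that composes the `docker run` arguments on the user's behalf. This is a hypothetical sketch — the function name and the file paths (`pod.env`, `/tmp/pod-fs`) are illustrative, not the real implementation:

```shell
# Hypothetical sketch of how a wrapper could assemble the arguments for
# the user's container: join the proxy's network namespace, inject the
# remote pod's environment variables, and bind-mount the sshfs-mounted
# remote filesystem.
build_docker_args() {
  proxy_container="$1"   # container whose network namespace we join
  env_file="$2"          # env vars captured from the remote pod
  volume_spec="$3"       # sshfs mount on the host, mapped into the container
  echo "run --rm --network=container:${proxy_container} --env-file=${env_file} --volume=${volume_spec}"
}

build_docker_args telepresence-proxy /tmp/pod.env /tmp/pod-fs:/podfs
```

The point is only that env vars and volumes must be injected at `docker run` time, which is why Telepresence (and not the user) has to launch the container.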
Current design plan, then:
UX options, where
I guess I'll go with:
Technical risks: mostly that sshuttle doesn't work with Docker somehow. Since @drslump verified this it's probably going to work, but I'll try it myself too to get a sense of it, and also see whether I can reproduce the lack of limitations he described.
Also: exposing a service on the host to a container, since the routing for that depends on the different ways Docker runs on different platforms.
Testing for risks:
Update: looks like the Mac workaround is actually in the docs, so it's legit - https://docs.docker.com/docker-for-mac/networking/#per-container-ip-addressing-is-not-possible
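For the host-exposure risk, a minimal sketch of the platform-dependent part — `host.docker.internal` is the special DNS name Docker for Mac/Windows documents for reaching the host; on Linux there is no such name by default, so one would use the `docker0` bridge gateway IP instead. The port (`8080`) and the `curlimages/curl` image are just illustrative:

```shell
# Reach a service listening on the host (e.g. the developer's own process
# on port 8080) from inside a container. Works on Docker for Mac/Windows,
# where host.docker.internal resolves to the host; Linux needs the
# docker0 bridge gateway IP instead.
docker run --rm curlimages/curl -s http://host.docker.internal:8080/
```

This per-platform divergence is exactly why the routing "depends on different ways Docker runs on different platforms", as noted above.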
Some testing on my side:
With this telepresence image:
and launching with:
In the launched shell:
Same for a remote Kubernetes cluster on AWS. Also
It's a bit complex, and unless it simplifies mounting on Windows, it's probably better to just require
It's possible things like ping would work on Mac, yes - sshuttle uses a different mechanism there than on Linux, where I tested. But it's also possible you're pinging 10.0.0.2 on your local network, rather than the Kubernetes network.
If you look at sshuttle docs you'll see it only captures TCP and DNS packets on OS X, though perhaps those docs are out of date: http://sshuttle.readthedocs.io/en/stable/requirements.html#client-side-requirements
So what may be happening is that for TCP it gets routed via sshuttle to Kubernetes, and for ICMP (ping) it gets routed to 10.0.0.2 on your local network. Just guessing, of course.
Even if it's there, I'm hesitant to rely on the Docker VM continuing to provide FUSE, since it's not part of their public API. So I'll stick to the plan of doing it on the host, for now.