Real world examples for docker #165
I've also created #164 in relation to Docker.
+1
Couldn't you simply create a tiny Vault container which has port 8200 mapped to 172.17.42.1 (Docker's internal gateway IP for containers)? Run it using a `docker run` command.
Obviously, this runs the development server by default, and the "inmem" backend is probably not the best way to store your secrets. So you could then use a different backend configuration. The possibilities are endless.
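The exact command didn't survive in this thread, but a minimal sketch of such a run might look like the following (the image name `voxxit/vault` and the flags are assumptions, not the original command):

```shell
# Hypothetical sketch: publish Vault's API port only on the Docker
# bridge gateway IP, and start the server in dev mode.
# 172.17.42.1 was Docker's default bridge gateway at the time.
docker run -d --name vault \
  -p 172.17.42.1:8200:8200 \
  voxxit/vault server -dev
```

Other containers on the same host could then reach Vault at `http://172.17.42.1:8200` without the port being exposed on the public interface.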
Thanks @voxxit! That is an excellent explanation / starting point for how a new person can introduce themselves to Vault on Docker. Appreciate it.
@dreamcat4 no problem. Further, I've gone through some of the examples using a docker-compose.yml template here: https://gist.github.com/voxxit/dd6f95398c1bdc9f1038 Remember: these are only examples and should be fully vetted for security; I make no claim that these are the most secure examples out there. Just showing the ropes ;-)
👍 Thanks @voxxit
Thanks!
It would be convenient if the Vault project maintained an official container. I'm sure it would help with new user adoption (like me!) by making it easier to try out Vault (and write "hello world" tutorials). I appreciate the example, @voxxit, and will likely use your Dockerfile as a starting point in the meantime. But for something as security-sensitive as Vault, I'd feel better using an "official" image maintained by the Vault team directly.
@odigity Part of the reason we haven't done it yet is that we can't produce an official image using the Docker Hub's capabilities, which require that the image is built only with a Dockerfile*. Most of the images out there in the community seem to be using Alpine Linux, which is unnecessary when Vault is statically built and causes problems when Vault is dynamically built... plus it contains a package manager and other things that widen its exposure. For a security-conscious product, it feels odd to us to recommend an official image that isn't built in a way that we think has an ideal security stance. (* I'm a fan of )

We are interested in producing an official image, so we've considered taking the nuclear approach of essentially something like:
We just haven't had the time to actually investigate this yet. But it's on our radar.
Thanks for the prompt response. I'm no Docker expert, but I recall from when I first started studying Docker a month ago that it's possible to make an image that is literally nothing but a single binary if there are no external deps (which there aren't for Vault's binary). Is the problem that you can't publish an image to Docker Hub containing a Dockerfile that just uses the COPY command to copy the binary in, because Docker Hub runs the Dockerfile themselves, which won't have access to a local file, so it must use wget (or a similar strategy), which would then require wget inside the container? What about quay.io? Lastly, if the policies of the current popular / trusted image registries prevent Vault from publishing an image you can be comfortable with, then how about a small mention in the docs (linked from the install and getting started pages) that explains this, then shows a simple <=3 line Dockerfile one can use to do the same thing locally? Something like:
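The example Dockerfile was lost from this thread; a minimal sketch of what a <=3-line from-scratch Dockerfile could look like (assuming a statically linked `vault` binary sits in the build context; this is an illustration, not a tested recipe):

```dockerfile
# Hypothetical sketch: a from-scratch image containing nothing but
# the statically linked vault binary copied from the build context.
FROM scratch
COPY vault /vault
ENTRYPOINT ["/vault"]
```

Built locally with something like `docker build -t vault-local .`, the resulting image's only content (across all layers) is the binary itself.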
Not sure that will work exactly, as I've never built a from-scratch image, but you get the idea. Another idea: have HashiCorp run their own image registry. It's open-source and easy to run: https://github.com/docker/distribution That would be the best in terms of "trusted images from HashiCorp". You guys do have a lot of projects, after all, so the cost is amortized over the benefit to all of those projects...
Yep, and that's exactly what we think is appropriate for Vault. (Likely you would use this as a base image, and add SSL CA certificates and other things into the container for your local use).
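As a hedged sketch of that layering approach (the base image name `example/vault-scratch` is hypothetical, as is the location of the CA bundle in the build context):

```dockerfile
# Hypothetical: extend a minimal vault-only base image with the
# system CA bundle so outbound TLS verification works in-container.
# "example/vault-scratch" is an assumed image name, not a real one.
FROM example/vault-scratch
COPY ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
ENTRYPOINT ["/vault"]
```

The point of the split is that the trusted, minimal base stays binary-only, while each user layers in only what their environment requires.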
Basically, yes. We also can't use ADD, because:
...and our binaries are compressed, for bandwidth/distribution purposes. Hence my comment about possibly taking the nuclear approach.

We could of course run our own registry, but that's not where we see the demand. People default to/ask for versions they can use from the Docker Hub. Running your own registry is easy, but it's a lot more work to socialize people into using it, provide a decent browsing interface to link Dockerfiles in the registry with things in GitHub, and so on. There's just not enough demand for that solution.

Also, generally speaking, I don't recommend anyone ever use any Docker container from the Hub outside of a dev context. Some people are happy with what Docker promises with Trusted Builds, and trust that the Dockerfile linked on the Hub web site is exactly what the container was built from... but with a security-sensitive product like Vault, the security-conscious thing to do is to build your own container, which some of the tools I mentioned earlier make pretty easy to do. Any single breach in Docker's infrastructure, which you have no insight into, can mean someone injecting bad code into builds that are later signed with Docker's key. I just think it's a fairly unnecessary, easily-avoidable risk for production data, compared to downloading the Dockerfile and just running a build. (This isn't localized paranoia... many companies do this. Over time Docker Inc. may gather the same level of long-term trust as OS vendors, but it's still too young and early.)

Especially since in production you're probably going to want to pull from your own registry in order to have exact control over the image upgrade lifecycle (so that a new version of an image doesn't do something accidentally incompatible without you testing and finding that problem pre-production), you might as well build the container yourself before pushing it up to your local registry.
Of course, you should probably also make your own build rather than trust HashiCorp's (signed) build, and before that you should probably examine all sources of all dependencies and Vault itself, and it's turtles all the way down. So you stop wherever you're comfortable and wherever fits into your org's security needs and policies.
Hi,

```dockerfile
ENV _clean="rm -rf /tmp/* /var/tmp/*"

# Install s6-overlay
ENV s6_overlay_version="1.16.0.0"
ADD https://github.com/just-containers/s6-overlay/releases/download/v${s6_overlay_version}/s6-overlay-amd64.tar.gz /tmp/
RUN tar zxf /tmp/s6-overlay-amd64.tar.gz -C / && $_clean
```

Because (as previously stated) Docker have been glacial about automatically unpacking from URLs. You should go to the Docker issues, state that you guys are part of the Vault project and need this feature ASAP. It's holding you up and they've known about it for ages already.

In the meantime, my only suggestion (other than busybox) would be to start with just a static tar (or whatever decompression program you decide to rely upon). Best you can do on Docker Hub right now, and it avoids possible busybox vulnerabilities entirely. How about that then?
You didn't mention which base image you use. If the binaries in The issue isn't that we're worried about
The reason not to use busybox is that it contains more things than you need. If you went from scratch and just copied in only what you need
But it does require us to build that ourselves. I'm not concerned with the final size of the image. No matter how we slice it, we can't build a minimal container on the Hub. Anyone wanting a Vault container from the Hub is necessarily going to have to contend with tradeoffs.
Having an official image on the Hub would be equivalent to encouraging people to run with scissors. It would improve the user experience to have an official tutorial on how to build a Vault image.
Yup, that's exactly right. :) In my case...
So, that just leaves one unresolved point: a small addition to the docs showing a short set of commands that, after one has already built Vault from source (which is already in the docs), will result in a from-scratch, binary-only Vault container. (And maybe also capture some of the points you've made in this issue for other people with the same questions.) I'll take notes when I go through this process later. Maybe I can submit a PR.
One thing we may do at some point is set up a bash script that can do this for common use cases. As is the case with a lot of open source projects, a lot of it comes down to having time right now.
Check out golang-builder.
Will do.
Just tried out golang-builder. Failed with:
I'm not fluent in Go, but I believe the problem is clearly described here: https://github.com/CenturyLinkLabs/golang-builder#canonical-import-path Is this something that needs to be fixed in the source (by adding the suggested comment), or am I doing something wrong? (I can provide my implementation details if necessary.)
Update: just checked out gockerize: https://github.com/aerofs/gockerize I might be missing something, but it seems like it does pretty much the same thing golang-builder does, with fewer commits/contributors/docs. Seems like golang-builder is all you need and the better way to go, right?
@odigity Interesting about gockerize. Regardless, I just pushed a change to provide a custom import path for the "main" package. Can you test it out? About
Got your change with a git pull. (I love 2015.) Got past the last error this time, so that's fixed! Next problem:
I'm guessing the build process is trying to produce a binary called 'vault' in the PWD, which conflicts with the 'vault' subdir at the root of the repo (the PWD).
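If it helps to see that failure mode in isolation, here's a small sketch (the paths are made up, and the `go build -o` workaround mentioned in the comments is my suggestion, not necessarily something golang-builder supports):

```shell
# Demonstrates the suspected collision: a file named "vault" cannot be
# created in a directory that already has a "vault" subdirectory,
# which is what happens when a build writes its default output
# (named after the package) into the repo root.
mkdir -p /tmp/collision-demo/vault
cd /tmp/collision-demo
( echo "binary" > vault ) 2>/dev/null || echo "collision"

# Hypothetical workaround: give the output an explicit path
# (the analogue of `go build -o bin/vault`), avoiding the clash.
mkdir -p bin
echo "binary" > bin/vault && echo "ok"
```

Running this prints `collision` and then `ok`, mirroring the error seen in the build and one way around it.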
@csawyerYumaed could be wrong, but you're going to need ca-certs installed.
@jefferai have you considered building the image by having the binary of vault stored in git? I know it might sound bad, but if you check other builds that are done from scratch (say, ubuntu), that's what they're doing. See https://hub.docker.com/_/ubuntu/ using https://github.com/tianon/docker-brew-ubuntu-core/blob/f2682b59c32241b97e904af6691e997fa9c79c91/precise/Dockerfile It's a `FROM scratch` Dockerfile that `ADD`s a root filesystem tarball stored in the git repo.
I don't really see what one would gain over using a file from releases.hashicorp.com...?
@jefferai any plan to provide an official docker volume driver (@jfrazelle suggested two options), which would allow creating multiple volumes using different auth tokens (so my container A does not have access to the same secrets as my container B)?
Well, the advantage is having a nice way to get the official latest version of Vault for test/local dev purposes without having to download it and put it in your path, making it easily distributable. The other thing is that I could run the Vault server in production in my Docker infrastructure (ECS/Swarm/K8s/other) without having to create my own image. If I pin to a particular hash (not to a tag!) I can make sure I run a known version of Vault coming from Docker Hub. At the moment I have to create my own Docker image (which does exactly the same thing), and I would expect other people in other companies to have the same problem.

Just to back up what I'm saying: according to https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=vault&starCount=0 there are more than 1M downloads of the most popular image that provides Vault (which I'm currently using because there isn't an official one); even the second one, by voxxit, has more than 100k downloads. If there were an official one, people would want to use it to run Vault as a service in Docker.
Official docker volume driver: not presently, no. My previous response was to this comment:
That is a different question from "having an official Docker container".
Ah, sorry, I should have been more explicit. Regarding having it in git, I meant that as a solution to what seemed to be the current blocker to creating an official docker container. I was suggesting following the same pattern as the ubuntu image (which relies on having the binary in git so dockerhub can do the build without additional tooling).
At this point I think the blockers are mostly time and resources. We do now have an official Consul container, so much of that work could be reused, but there are other considerations with respect to Vault that need to be worked out, both internally and with Docker, Inc.
@ColinHebert CI w/ docker hub can pull down Vault binaries from the CDN and check the sha hash - there should be no reason to commit build artifacts to git IMO.
@chiefy how do you check the hash without having tools within the docker image itself?
@ColinHebert here's a quick Dockerfile - you'll probably want to verify the GPG sig of the sum file in real life.
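The Dockerfile itself was lost from this thread; the following is a hedged reconstruction of the approach described (the Vault version, paths, and layout are illustrative assumptions; as the comment notes, in real life you'd also verify the GPG signature of the checksum file):

```dockerfile
# Hypothetical reconstruction: fetch the release zip and its SHA256SUMS
# from releases.hashicorp.com, verify the checksum, then install.
FROM alpine:3.4
ENV VAULT_VERSION=0.6.0
RUN apk add --no-cache unzip
ADD https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip /tmp/
ADD https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_SHA256SUMS /tmp/
RUN cd /tmp \
    && grep linux_amd64 vault_${VAULT_VERSION}_SHA256SUMS | sha256sum -c - \
    && unzip vault_${VAULT_VERSION}_linux_amd64.zip -d /usr/local/bin \
    && rm -f /tmp/vault_*
ENTRYPOINT ["/usr/local/bin/vault"]
```

This sidesteps the "no tools in the image" objection by relying only on `sha256sum` and `unzip` from the Alpine base, which is exactly the tradeoff debated in the following comments.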
But... this is Alpine-based. You can't do that with a `scratch` image.
This isn't an accurate representation of what was said eight months ago, but regardless, even at that point it was noted that starting with busybox was likely a good idea, not scratch, specifically so useful tools would be available.
What's the deal if
Followed by
Then
And
I'm sorry, I might be missing the point in the conversation where starting with busybox was a better solution. Either way, personally I think either is fine and they're also both technically doable. @o6uoq, the initial points made by Jeff are still valid, and depending on how you want to do things, being in the position where the docker image is (all layers combined) only the vault binary is probably a good thing. I'm not in that case at all, this extra security is not something I'm after at the moment, but I understand that it's a valid argument.
On Jun 25, 2016 20:44, "Colin Hebert" notifications@github.com wrote:
I'm not really sure what's unclear about this, which you yourself quoted:
Anyway, you're linking to an 8-month-old discussion here and it's mostly
@ColinHebert if you absolutely want
OK, I've created https://github.com/csawyerYumaed/vault-docker Please let me know if I screwed something up.
There's a lot of good information here that would be a great benefit to people who are running a completely Docker-ized infrastructure. But what about a solution for people like myself, who have mixed infrastructure (on-prem bare iron, on-prem virtualization, on-prem Docker, AWS EC2 instances, ECS containers, and Docker running on EC2 instances), and have an existing Vault cluster running? It seems like a lot of this discussion is written from the standpoint of running Vault in a container and linking it to others. I'm much more concerned about a way to authenticate Docker containers to an existing Vault - i.e. something that would retrieve a (possibly one-time-use) token from Vault for every container that's started, and then inject that into the container somehow.
This has become kind of a large meta-issue but I'm going to close it as I think two of the main issues have been addressed:
For further issue(s), please open new ticket(s). Thanks! |
Also, it's worth checking out Secrets Bridge by the Rancher team: |
The rancher secrets bridge also says:
=/ |
Old link to response wrapping is gone, this is the new link: |
Hi.

We need some concrete examples of how to use `vault` for distributing secrets into Docker containers, i.e. the simplest way possible whilst also ensuring the best security. It is also unclear to me whether we can benefit from any Docker-specific plugins for Vault; that is an open question I would be grateful if any 'vault experts' or developers of `vault` can help answer for us.

Many thanks for any pointers / tips.