
Real world examples for docker #165

Closed
dreamcat4 opened this issue May 7, 2015 · 77 comments

@dreamcat4

Hi.
We need some concrete examples of how to use Vault for distributing secrets into Docker containers, i.e. in the simplest way possible while also ensuring the best security.

It is also unclear to me whether we can benefit from any Docker-specific plugins for Vault. That is an open question too, and I would be grateful if any 'vault experts' or developers of Vault could help answer it for us.

Many thanks for any pointers / tips.

@dreamcat4
Author

I've also created #164 in relation to Docker.

@sethvargo changed the title from "real world examples for docker" to "Real world examples for docker" on May 8, 2015
@1N50MN14

+1

@voxxit

voxxit commented May 21, 2015

Couldn't you simply create a tiny Vault container which has port 8200 mapped to 172.17.42.1 (Docker's internal gateway IP for containers)? Run it, using --memory-swap=-1 to prevent memory from swapping to disk:

docker run -d -p 172.17.42.1:8200:8200 --name vault --memory-swap=-1 vault server -dev

Obviously, this runs the development server by default, and the in-memory ("inmem") backend is probably not the best way to store your secrets. So you could then use --volumes-from and store things in a data-volume container for the "file" backend and configuration, or simply use --link to link to a running Consul container, postgres, mysql, you name it.

The possibilities are endless.
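For the "file" backend route mentioned above, the server would read a config along these lines (a minimal sketch using 2015-era Vault config syntax; the paths are illustrative, and anything beyond local testing should enable TLS):

```hcl
# Minimal sketch of a server config for the "file" backend.
backend "file" {
  path = "/vault/data"   # would live in the --volumes-from data container
}

listener "tcp" {
  address     = "0.0.0.0:8200"
  tls_disable = 1        # dev/local use only
}
```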

@dreamcat4
Author

Thanks @voxxit !

That is an excellent explanation / starting point for how a new person can introduce themselves to vault on docker. Appreciate it.

@voxxit

voxxit commented May 21, 2015

@dreamcat4 no problem. Further, I've gone through some of the examples using a docker-compose.yml template here: https://gist.github.com/voxxit/dd6f95398c1bdc9f1038

Remember -- these are only examples and should be fully vetted for security; I make no claim that these are the most secure examples out there. Just showing the ropes ;-)

@chiefy
Contributor

chiefy commented Jun 9, 2015

👍 Thanks @voxxit

@larryboymi

Thanks!

@odigity

odigity commented Nov 5, 2015

It would be convenient if the Vault project maintained an official container. I'm sure it would help with new user adoption (like me!) by making it easier to try out Vault (and write "hello world" tutorials).

I appreciate the example, @voxxit, and will likely use your Dockerfile as a starting point in the meantime. But for something as security-sensitive as Vault, I'd feel better using an "official" image maintained by the Vault team directly.

@jefferai
Member

jefferai commented Nov 5, 2015

@odigity Part of the reason we haven't done it yet is that we can't produce an official image using the Docker Hub's capabilities, which require that the image is built only with a Dockerfile*. Most of the images out there in the community seem to be using Alpine Linux, which is unnecessary when Vault is statically built and causes problems when Vault is dynamically built...plus contains a package manager and other things that widen its exposure. For a security-conscious product, it feels odd to us to recommend an official image that isn't built in a way that we think has an ideal security stance.

(* I'm a fan of golang-builder and (in theory) gockerize, but you can't build containers on the Docker Hub using these tools.)

We are interested in producing an official image, so we've considered taking the nuclear approach of essentially something like:

  1. Take a busybox image
  2. Use it to download and uncompress the binary release of Vault and check the SHA sum
  3. Delete busybox

We just haven't had the time to actually investigate this, yet. But it's on our radar.
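The "check the SHA sum" step in that sketch boils down to running sha256sum -c against the published SHA256SUMS file. A self-contained demo of the mechanics (using a stand-in file, since the real zip and SHA256SUMS would come from releases.hashicorp.com; file names are illustrative):

```shell
#!/bin/sh
# Demonstrates the SHA-verification step with a local stand-in file.
set -e
cd "$(mktemp -d)"
printf 'stand-in for vault_0.3.1_linux_amd64.zip' > vault.zip
sha256sum vault.zip > SHA256SUMS   # what the publisher would ship alongside the zip
sha256sum -c SHA256SUMS            # consumer-side check; exits non-zero on any mismatch
echo verified
```

If the file were tampered with, sha256sum -c would report FAILED and the script would abort before unpacking anything.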

@odigity

odigity commented Nov 5, 2015

Thanks for the prompt response.

I'm no Docker expert, but I recall when I first started studying Docker a month ago that it's possible to make an image that is literally nothing but a single binary if there are no external deps (which there aren't for vault's binary).

Is the problem that you can't publish an image to Docker Hub containing a Dockerfile that just uses the COPY command to copy the binary in, because Docker Hub runs the Dockerfile themselves, which won't have access to a local file, so it must use wget (or a similar strategy), which would then require wget inside the container?

What about quay.io?

Lastly, if the policies of the current popular / trusted image registries prevent Vault from publishing an image you can be comfortable with, then how about a small mention in the docs (linked from the install and getting started pages) that explains this, then shows a simple <=3 line Dockerfile one can use to do the same thing locally?

Something like:

FROM scratch
COPY vault /vault

Not sure that will work exactly, as I've never built a from-scratch image, but you get the idea.
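For what it's worth, a complete from-scratch Dockerfile along those lines might look like this (an untested sketch; it assumes a statically linked `vault` binary sits in the build context, and real use would also need CA certificates copied in for TLS):

```dockerfile
# Hypothetical from-scratch image: nothing but the vault binary.
FROM scratch
COPY vault /vault
ENTRYPOINT ["/vault"]
CMD ["server", "-dev"]
```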

Another idea: Have HashiCorp run their own image registry. It's open-source and easy to run:

https://github.com/docker/distribution

That would be the best in terms of "trusted images from HashiCorp". You guys do have a lot of projects, after all, so the cost is amortized over the benefit to all of those projects...

@jefferai
Member

jefferai commented Nov 5, 2015

I'm no Docker expert, but I recall when I first started studying Docker a month ago that it's possible to make an image that is literally nothing but a single binary if there are no external deps (which there aren't for vault's binary).

Yep, and that's exactly what we think is appropriate for Vault. (Likely you would use this as a base image, and add SSL CA certificates and other things into the container for your local use). golang-builder and gockerize are both tools to help make this easy.

Is the problem that you can't publish an image to Docker Hub containing a Dockerfile that just uses the COPY command to copy the binary in, because Docker Hub runs the Dockerfile themselves, which won't have access to a local file, so it must use wget (or a similar strategy), which would then require wget inside the container?

Basically, yes. We also can't use ADD, because:

If <src> is a local tar archive in a recognized compression format (identity, gzip, bzip2 or xz) then it is unpacked as a directory. Resources from remote URLs are not decompressed.

...and our binaries are compressed, for bandwidth/distribution purposes. Hence my comment about possibly doing something with busybox.

We could of course run our own registry, but that's not where we see the demand. People default to/ask for versions they can use from the Docker Hub. Running your own registry is easy, but it's a lot more work to socialize people to use it, provide a decent browsing interface to link Dockerfiles in the registry with things in GitHub, and so on. There's just not enough demand for that solution.

Also, generally speaking, I don't recommend anyone ever use any Docker container from the Hub outside of a dev context. Some people are happy with what Docker promises with Trusted Builds, and trust that the Dockerfile linked on the Hub web site is exactly what the container was built from...but with a security-sensitive product like Vault, the security-conscious thing to do is to just build your own container, which some of the tools I mentioned earlier make pretty easy to do. Any single breach in Docker's infrastructure, which you have no insight into, can mean someone injecting bad code into builds that are later signed with Docker's key.

I just think it's a fairly unnecessary, easily-avoidable risk for production data, compared to downloading the Dockerfile and just running a build. (This isn't localized paranoia...many companies do this. Over time Docker Inc. may gather the same level of long-term trust as OS vendors, but it's still too young and early.) Especially since in production you're probably going to want to be pulling from your own registry in order to have exact control over image upgrade lifecycle (so that a new version of an image doesn't do something accidentally incompatible without you testing and finding that problem pre-production), so you might as well build the container yourself before pushing it up to your local registry.

Of course, you should probably also make your own build, rather than trust HashiCorp's (signed) build, and before that you should probably examine all sources of all dependencies and Vault itself, and it's turtles all the way down. So you stop wherever you're comfortable and wherever fits into your org's security needs and policies.

@dreamcat4
Author

Hi,
This is what I do:

ENV _clean="rm -rf /tmp/* /var/tmp/*"

# Install s6-overlay
ENV s6_overlay_version="1.16.0.0"
ADD https://github.com/just-containers/s6-overlay/releases/download/v${s6_overlay_version}/s6-overlay-amd64.tar.gz /tmp/
RUN tar zxf /tmp/s6-overlay-amd64.tar.gz -C / && $_clean

Because (as previously stated) Docker has been glacial about automatically unpacking from URLs. You should go to the Docker issue tracker, state that you guys are part of the Vault project and need this feature ASAP. It's holding you up, and they've known about it for ages already.

In the meantime, my only suggestion (other than busybox) would be to start FROM scratch, then ADD the tar, then untar your vault binary as above. Leave tar inside your image or delete it in a subsequent layer; it won't affect the image size either way. Of course, maybe you can't delete the tar executable without also having the rm command. But you could overwrite the file with an empty 0-byte one using a subsequent ADD command, which ought to work fine (leaving an unusable empty file in its place) and cost you nothing.

It's the best you can do on Docker Hub right now, and it avoids possible busybox vulnerabilities entirely: just a static tar (or whatever decompression program you decide to rely upon).

How about that then?
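Pieced together, the scratch-plus-static-tar idea described above would look something like this (an untested sketch; it assumes a statically linked `tar` and a 0-byte `empty` file in the build context, and the release URL is illustrative, not a real artifact):

```dockerfile
# Hypothetical scratch + static-tar image build.
FROM scratch
COPY tar /bin/tar
ADD https://releases.hashicorp.com/vault/0.3.1/vault_0.3.1_linux_amd64.tar.gz /tmp/vault.tar.gz
# exec form is required here: a scratch image has no /bin/sh for shell-form RUN
RUN ["/bin/tar", "zxf", "/tmp/vault.tar.gz", "-C", "/"]
COPY empty /bin/tar          # overwrite tar with a 0-byte file, as described above
COPY empty /tmp/vault.tar.gz # likewise neutralize the downloaded archive
ENTRYPOINT ["/vault"]
```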

@jefferai
Member

jefferai commented Nov 5, 2015

You didn't mention which base image you use. If the binaries in s6_overlay aren't static, then it requires a full environment to be around, or at least a statically built busybox.

The issue isn't that we're worried about busybox vulnerabilities specifically, we just don't see a reason for non-essential files to be sitting waiting to be invoked. I would generally be more happy with a user taking a base image of ours that includes Vault and adding s6_overlay (or something similar) on top than us making a container that forces it upon a user.

@dreamcat4
Author

The s6_overlay is just an example of a tar file to ADD... you guys would not do that. You would replace that bit with your vault binary, which is self-contained.

The reason not to use busybox is that it contains more things than you need. If you went from scratch and copied in only tar or some micro-tar (to just untar and nothing else)... then you would hardly be bloating your image size at all in comparison to the current file size of your statically linked go executable (the monolithic vault program). Understood?

@jefferai
Member

jefferai commented Nov 5, 2015

But it does require us to build tar and gzip statically, as well as (as you noted above) rm and possibly rmdir and others. Then those have to be distributed somewhere where the Hub can access them and copy them into the container. That's more burden on us.

I'm not concerned with the final size of the image. No matter how we slice it, we can't build a minimal container on the Hub. Anyone wanting a Vault container from the Hub is necessarily going to have to contend with tradeoffs.

@mishak87

mishak87 commented Nov 5, 2015

Having official image on hub would be equal to encouraging people to run with scissors.
IMO there is no reason to increase the false sense of security for people who don't understand security or docker. There are already images doing that just fine.

It would improve the user experience to have an official tutorial on how to build a vault image.
(Import the GPG key, download the file, download the checksums, verify the signed checksums, build the docker image, etc.) All the data necessary to do this is already available at the downloads page. The only thing missing is an official example of how to piece it all together.
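Those steps might look roughly like the following transcript (illustrative only: the version number is made up, and the key ID shown was HashiCorp's release-signing key at the time; confirm the current fingerprint out-of-band via hashicorp.com/security before trusting it):

```
$ gpg --recv-keys 51852D87348FFC4C
$ wget https://releases.hashicorp.com/vault/0.3.1/vault_0.3.1_linux_amd64.zip
$ wget https://releases.hashicorp.com/vault/0.3.1/vault_0.3.1_SHA256SUMS
$ wget https://releases.hashicorp.com/vault/0.3.1/vault_0.3.1_SHA256SUMS.sig
$ gpg --verify vault_0.3.1_SHA256SUMS.sig vault_0.3.1_SHA256SUMS
$ grep linux_amd64 vault_0.3.1_SHA256SUMS | sha256sum -c -
$ unzip vault_0.3.1_linux_amd64.zip && docker build .
```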

@odigity

odigity commented Nov 5, 2015

Of course, you should probably also make your own build, rather than trust HashiCorp's (signed) build, and before that you should probably examine all sources of all dependencies and Vault itself, and it's turtles all the way down. So you stop wherever you're comfortable and wherever fits into your org's security needs and policies.

Yup, that's exactly right. :)

In my case...

  1. If there were an official Vault container on the Hub, I'd probably be using it right now for playing around with, since vulnerabilities aren't relevant in that stage.

  2. Since there isn't, and the points you made are valid, I intend to build a vault-binary-only image myself from scratch (Docker) and source (Vault), since it seems like it should be a straightforward process I'll only have to do once. Luckily, my sysadmin partner has already set up a registry for me.

  3. That's probably my end-turtle. Too lazy / unskilled in this domain to do checksum / sig stuff. But my partner probably will eventually when we get close to actually productionalizing.

So, that just leaves one unresolved point: a small addition to the docs showing a short set of commands that, after one has already built Vault from source (which is already in the docs), will result in a from-scratch binary-only Vault container. (And maybe also capture some of the points you've made in this issue for other people with the same questions.)

I'll take notes when I go through this process later. Maybe I can submit a PR.

@jefferai
Member

jefferai commented Nov 5, 2015

It would improve user experience to have official tutorial on how to build vault image.
(Import gpg key, download file, download checksums, verify signed checksums and build docker image etc.)

One thing we may do at some point is set up a bash script that can do this for common use cases. As is the case with a lot of open source projects, a lot of it comes down to having time, right now.

@jefferai
Member

jefferai commented Nov 5, 2015

  1. Since there isn't, and the points you made are valid, I intend to build a vault-binary-only image myself from scratch (Docker) and source (Vault), since it seems like it should be a straightforward process I'll only have to do once. Luckily, my sysadmin partner has already set up a registry for me.

Check out golang-builder and gockerize as they basically automate this exact workflow.

@odigity

odigity commented Nov 5, 2015

Will do.

@odigity

odigity commented Nov 5, 2015

Just tried out golang-builder. Failed with:

Error: Must add canonical import path to root package

I'm not fluent in Go, but I believe the problem is clearly described here:

https://github.com/CenturyLinkLabs/golang-builder#canonical-import-path

Is this something that needs to be fixed in the source (by adding the suggested comment), or am I doing something wrong?

(I can provide my implementation details if necessary.)

@odigity

odigity commented Nov 5, 2015

Update: Just checked out gockerize:

https://github.com/aerofs/gockerize

I might be missing something, but it seems like it does pretty much the same thing golang-builder does, but has fewer commits/contributors/docs. Seems like golang-builder is all you need and the better way to go, right?

@jefferai
Member

jefferai commented Nov 5, 2015

@odigity Interesting about golang-builder -- I haven't used it in a bit but I don't remember having that issue. It's possible that they changed around how it was performing its builds and that's now required.

Regardless, I just pushed a change to provide a custom import path for the "main" package. Can you test it out?

About gockerize, I haven't actually used it, so I couldn't tell you which is better. golang-builder seems well-supported, though.

@odigity

odigity commented Nov 5, 2015

Got your change with a git pull. (I love 2015.)

Got past the last error this time, so that's fixed! Next problem:

Building github.com/hashicorp/vault
go install github.com/hashicorp/vault: build output "vault" already exists and is a directory

I'm guessing the build process is trying to produce a binary called 'vault' in the PWD, which conflicts with the 'vault' subdir at the root of the repo (the PWD).

@chiefy
Contributor

chiefy commented Jun 24, 2016

@csawyerYumaed could be wrong, but you're going to need ca-certs installed.

@ColinHebert

@jefferai have you considered building the image by having the binary of vault stored in git?

I know it might sound bad, but if you check other builds that are done from scratch (say, ubuntu), that's what they're doing.

See https://hub.docker.com/_/ubuntu/ using https://github.com/tianon/docker-brew-ubuntu-core/blob/f2682b59c32241b97e904af6691e997fa9c79c91/precise/Dockerfile

It's a FROM scratch build, that adds a big tar.gz stored in git that you can actually see here https://github.com/tianon/docker-brew-ubuntu-core/blob/f2682b59c32241b97e904af6691e997fa9c79c91/precise/ubuntu-precise-core-cloudimg-amd64-root.tar.gz

@jefferai
Member

I don't really see what one would gain over using a file from releases.hashicorp.com...?

@ColinHebert

@jefferai any plans to provide an official docker volume driver (@jfrazelle suggested two options), which would allow creating multiple volumes using different auth tokens (so my container A does not have access to the same secrets as my container B)?

@ColinHebert

ColinHebert commented Jun 24, 2016

Well, the advantage is having a nice way to get the official latest version of vault for test/local-dev purposes without having to download it and put it in your path, making it easily distributable.

The other thing is that I can run the vault server in production in my docker infrastructure (ECS/Swarm/K8s/other) without having to create my own image. If I pin to a particular hash (not to a tag!), I can make sure I run a known version of vault coming from Docker Hub. At the moment I have to create my own docker image (which does exactly the same thing), and I would expect people at other companies to have the same problem.

Just to back up what I'm saying: according to https://hub.docker.com/search/?isAutomated=0&isOfficial=0&page=1&pullCount=0&q=vault&starCount=0 there are more than 1M downloads of the most popular image that provides vault (which I'm currently using because there isn't an official one); even the second one, by Voxxit, has more than 100k downloads. If there were an official one, people would want to use it to run vault as a service in docker.

@jefferai
Member

Official docker volume driver: not presently, no.

My previous response was to this comment:

@jefferai have you considered building the image by having the binary of vault stored in git?

That is a different question from "having an official Docker container".

@ColinHebert

Ah, sorry, I should have been more explicit: regarding having it in git, I meant that as a solution to what seemed to be the current blocker to creating an official docker container. My understanding from this thread is that the current problem is that solutions for building a docker container rely on pre-existing tools in the image (alpine, docker-nano, etc.).

I was suggesting following the same pattern as the ubuntu image (which relies on having the binary in git so dockerhub can do the build without additional tooling).

@jefferai
Member

At this point I think the blockers are mostly time and resources. We do now have an official Consul container, so much of that work could be reused, but there are other considerations with respect to Vault that need to be worked out, both internally and with Docker, Inc.

@chiefy
Contributor

chiefy commented Jun 25, 2016

@ColinHebert CI w/ docker hub can pull down Vault binaries from the CDN and check the SHA sum; there should be no reason to commit build artifacts to git, IMO.

@ColinHebert

@chiefy how do you check the hash? Without having tools within the docker image itself that is.

@chiefy
Contributor

chiefy commented Jun 25, 2016

@ColinHebert here's a quick Dockerfile - you'll probably want to verify the GPG sig of the sum file in real life.
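The gist itself is only linked here; the shape of such a Dockerfile is roughly the following (a hypothetical reconstruction, not chiefy's actual file; the version and URLs are illustrative, and as noted you would verify the GPG signature of the SHA256SUMS file in real use):

```dockerfile
# Illustrative Alpine-based build that verifies the release checksum.
FROM alpine:3.4
ENV VAULT_VERSION 0.6.0
RUN apk add --no-cache curl && \
    curl -sO https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_linux_amd64.zip && \
    curl -sO https://releases.hashicorp.com/vault/${VAULT_VERSION}/vault_${VAULT_VERSION}_SHA256SUMS && \
    grep linux_amd64.zip vault_${VAULT_VERSION}_SHA256SUMS | sha256sum -c - && \
    unzip vault_${VAULT_VERSION}_linux_amd64.zip -d /bin && \
    rm -f vault_${VAULT_VERSION}_*
ENTRYPOINT ["/bin/vault"]
```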

@ColinHebert

But... this is alpine-based. You can't do that with a FROM scratch image. Earlier in that thread, the reason invoked to not provide an image was the inability to provide an image with vault and only vault (as other binaries and tools are not necessary). The only way to do that is to have a FROM scratch image, which means that there is no curl nor shaxxxsum available.

@jefferai
Member

Earlier in that thread, the reason invoked to not provide an image was the inability to provide an image with vault and only vault (as other binaries and tools are not necessary). The only way to do that is to have a FROM scratch

This isn't an accurate representation of what was said eight months ago, but regardless, even at that point it was noted that starting with busybox was likely a good idea, not scratch, specifically so useful tools would be available.

@o6uoq

o6uoq commented Jun 26, 2016

What's the deal if curl or other tools are installed? The cost of installing them is going to be less than the hassle of trying to work around an image / installation without them.

@ColinHebert

ColinHebert commented Jun 26, 2016

@jefferai

Most of the images out there in the community seem to be using Alpine Linux, which is unnecessary when Vault is statically built and causes problems when Vault is dynamically built...plus contains a package manager and other things that widen its exposure. For a security-conscious product, it feels odd to us to recommend an official image that isn't built in a way that we think has an ideal security stance.

Followed by

We are interested in producing an official image, so we've considered taking the nuclear approach of essentially something like:

Take a busybox image
Use it to download and uncompress the binary release of Vault and check the SHA sum
Delete busybox

Then

I'm no Docker expert, but I recall when I first started studying Docker a month ago that it's possible to make an image that is literally nothing but a single binary if there are no external deps (which there aren't for vault's binary).

Yep, and that's exactly what we think is appropriate for Vault. (Likely you would use this as a base image, and add SSL CA certificates and other things into the container for your local use). golang-builder and gockerize are both tools to help make this easy.

And

The issue isn't that we're worried about busybox vulnerabilities specifically, we just don't see a reason for non-essential files to be sitting waiting to be invoked. I would generally be more happy with a user taking a base image of ours that includes Vault and adding s6_overlay (or something similar) on top than us making a container that forces it upon a user.

I'm sorry, I might be missing the point in the conversation where starting with busybox was a better solution.

Either way, personally I think either is fine and they're also both technically doable.

@o6uoq, the initial points made by Jeff are still valid, and depending on how you want to do things, being in the position where the docker image is (all layers combined) only the vault binary is probably a good thing.
This way you do not have to manage CVEs for the underlying image; you do not have to "risk" pointing to a base image that could somehow be poisoned; you don't need to ask people to upgrade their Vault setup because something went wrong with one of the tools provided by Alpine; etc.
There is a little bit of effort (the initial setup of the build pipeline) needed to ensure that you're building your own image from scratch with a single binary. But I totally get that it would be a good thing for some people.

I'm not in that situation at all; this extra security is not something I'm after at the moment, but I understand that it's a valid argument.

@jefferai
Member

On Jun 25, 2016 20:44, "Colin Hebert" notifications@github.com wrote:

I'm sorry, I might be missing the point in the conversation where starting with busybox was a better solution.

I'm not really sure what's unclear about this, which you yourself quoted:

We are interested in producing an official image, so we've considered taking the nuclear approach of essentially something like:

Take a busybox image
Use it to download and uncompress the binary release of Vault and check the SHA sum
Delete busybox

Anyways, you're linking to an 8-month-old discussion here and it's mostly not relevant any more.

@chiefy
Contributor

chiefy commented Jun 26, 2016

@ColinHebert if you absolutely want a FROM scratch solution, use something like golang-builder along with CI that can check SHA sums, etc. I don't understand what the issue is here; it's very possible to do this.

@csawyerYumaed
Contributor

OK I've created https://github.com/csawyerYumaed/vault-docker
it uses vault 0.6.0-rebuild as a git tag to build from.
This includes everything, including setting the IPC_LOCK capability, so you can run with locked memory entirely in docker, all in a 31.69 MB docker image. It uses SSL/TLS certificates (assuming you provide them), etc.

Please let me know if I screwed something up.

@jantman
Contributor

jantman commented Jul 13, 2016

There's a lot of good information here that would be a great benefit to people who are running a completely Docker-ized infrastructure.

But what about a solution for people like myself, who have mixed infrastructure (on-prem bare iron, on-prem virtualization, on-prem Docker, AWS EC2 instances, ECS containers, and Docker running on EC2 instances), and have an existing Vault cluster running? It seems like a lot of this discussion is written from the standpoint of running Vault in a container and linking it to others. I'm much more concerned about a way to authenticate Docker containers to an existing Vault - i.e. something that would retrieve a (possibly one-time-use) Token from Vault, for every container that's started, and then inject that into the container somehow.

@jefferai
Member

This has become kind of a large meta-issue, but I'm going to close it as I think two of the main issues have been addressed:

  1. There is now an official Docker container at https://hub.docker.com/_/vault/
  2. Response wrapping provides a good story around getting secrets into Docker containers, whether via the environment or via a bind-mounted filesystem

For further issue(s), please open new ticket(s). Thanks!
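For point 2, the response-wrapping flow looks roughly like this transcript (illustrative only: exact flag and command names vary by Vault version, and the secret path, env var name, and image name are made up here):

```
$ vault read -wrap-ttl=60s secret/db-creds     # returns a single-use wrapping token instead of the data
$ docker run -e WRAP_TOKEN=<wrapping-token> myapp
$ vault unwrap $WRAP_TOKEN                     # inside the container: exchange it for the real secret, once
```

Because the wrapping token is single-use and short-lived, a leaked environment variable exposes far less than a leaked long-lived token or raw secret would.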

@gsurbey

gsurbey commented Sep 7, 2016

Also, it's worth checking out Secrets Bridge by the Rancher team:
https://github.com/rancher/secrets-bridge

@taktran

taktran commented Sep 8, 2016

The Rancher secrets bridge also says:

Read: Do NOT use for production

=/

@BookOfGreg

  1. Response wrapping provides a good story around getting secrets into Docker containers, whether via the environment or via a bind-mounted filesystem

Old link to response wrapping is gone, this is the new link:
https://www.vaultproject.io/docs/concepts/response-wrapping.html
