Forward ssh key agent into container #6396

Open
phemmer opened this Issue Jun 13, 2014 · 150 comments

@phemmer
Contributor
phemmer commented Jun 13, 2014

It would be nice to be able to forward an ssh key agent into a container during a run or build.
Frequently we need to build source code which exists in a private repository where access is controlled by ssh key.

Adding the key file into the container is a bad idea as:

  1. You've just lost control of your ssh key
  2. Your key might need to be unlocked via passphrase
  3. Your key might not be in a file at all, and only accessible through the key agent.

You could do something like:

# docker run -t -i -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" fedora ssh-add -l
2048 82:58:b6:82:c8:89:da:45:ea:9a:1a:13:9c:c3:f9:52 phemmer@whistler (RSA)

But:

  1. This only works for docker run, not build.
  2. This only works if the docker daemon is running on the same host as the client.


The ideal solution is to have the client forward the key agent socket just like ssh can.
However the difficulty in this is that it would require the remote API build and attach calls to support proxying an arbitrary number of socket streams. Just doing a single 2-way stream wouldn't be sufficient as the ssh key agent is a unix domain socket, and it can have multiple simultaneous connections.
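For illustration only, the kind of per-connection relaying the remote API would need can be sketched with socat (the port number and the "agent-host" name are made up; this is not a docker feature):

```shell
# On the machine holding the agent: accept many TCP connections
# (fork) and relay each one to the local agent socket.
socat TCP-LISTEN:12345,fork,reuseaddr UNIX-CONNECT:"$SSH_AUTH_SOCK" &

# Inside the container: recreate a unix socket whose connections are
# each relayed back to the agent host.
socat UNIX-LISTEN:/tmp/ssh_auth_sock,fork TCP:agent-host:12345 &
export SSH_AUTH_SOCK=/tmp/ssh_auth_sock
ssh-add -l
```

Note this relays the agent unauthenticated over TCP, so it is a sketch of the plumbing (multiple simultaneous connections to one unix socket), not something to use as-is.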

@SvenDowideit
Collaborator

I wonder if #6075 will give you what you need

@phemmer
Contributor
phemmer commented Jun 17, 2014

A secret container might make it a little bit safer, but all the points mentioned still stand.

@slmingol

+1 I would find this capability useful as well. In particular when building containers that require software from private git repos, for example. I'd rather not have to share a repo key into the container, and instead would like to be able to have the "docker build ..." use some other method for gaining access to the unlocked SSH keys, perhaps through a running ssh-agent.

@jbiel
Contributor
jbiel commented Aug 2, 2014

+1. I'm just starting to get my feet wet with Docker and this was the first barrier that I hit. I spent a while trying to use VOLUME to mount the auth sock before I realized that docker can't/won't mount a host volume during a build.

I don't want copies of a password-less SSH key lying around and the mechanics of copying one into a container then deleting it during the build feels wrong. I do work within EC2 and don't even feel good about copying my private keys up there (password-less or not.)

My use case is building an erlang project with rebar. Sure enough, I could clone the first repo and ADD it to the image with a Dockerfile, but that doesn't work with private dependencies that the project has. I guess I could just build the project on the host machine and ADD the result to the new Docker image, but I'd like to build it in the sandbox that is Docker.

Here are some other folks that have the same use-case: https://twitter.com/damncabbage/status/453347012184784896

Please, embrace SSH_AUTH_SOCK, it is very useful.

Thanks

Edit: Now that I know more about how Docker works (FS layers), it's impossible to do what I described in regards to ADDing an SSH key during a build and deleting it later. The key will still exist in some of the FS layers.

@arunthampi

+1, being able to use SSH_AUTH_SOCK will be super useful!

@razic
razic commented Sep 3, 2014

I use SSH keys to authenticate with Github, whether it's a private repository or a public one.

This means my git clone commands looks like: git clone git@github.com:razic/my-repo.git.

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good. I cannot however mount my ~/.ssh during a docker build.

@bruce
bruce commented Sep 13, 2014

πŸ‘ for ssh forwarding during builds.

@SevaUA
SevaUA commented Sep 28, 2014

As I understand it, this is the wrong way. The right way is to create the docker image on a dev machine, and then copy it to the docker server.

@slmingol

@SevaUA - no, that's not correct. This request is due to a limitation when doing docker build .... You cannot export a variable into this stage like you can when doing docker run .... The run command allows variables to be exported into the docker container while running, whereas build does not. This limitation is partially intentional, based on how dockerd works when building containers. But there are ways around it, and the usecase described is a valid one. So this request is attempting to get this capability implemented in build, in some fashion.
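The run/build asymmetry described above can be put side by side (image name illustrative; this reflects docker build as it existed at the time of this thread, before any build-time variable support):

```
# run: the caller can inject both the socket and the variable
docker run -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" \
    -e SSH_AUTH_SOCK=/tmp/ssh_auth_sock myimage ssh-add -l

# build: no -v / -e equivalent; ENV in a Dockerfile bakes the value
# into the image layers instead of taking it from the caller
docker build -t myimage .
```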

@bgrant0607 bgrant0607 referenced this issue in kubernetes/kubernetes Sep 30, 2014
Closed

Container downward/upward API umbrella issue #386

@kanzure
kanzure commented Oct 17, 2014

I like the idea of #6697 (secret store/vault), and that might work for this once it's merged in. But if that doesn't work out, an alternative is to do man-in-the-middle transparent proxying ssh stuff outside of the docker daemon, intercepting docker daemon traffic (not internally). Alternatively, all git+ssh requests could be to some locally-defined host that transparently proxies to github or whatever you ultimately need to end up at.

@phemmer
Contributor
phemmer commented Oct 17, 2014

That idea has already been raised (see comment 2). It does not solve the issue.

@clifton clifton referenced this issue in docker/compose Oct 17, 2014
Closed

Allow forwarding SSH agent in fig #551

@nodefourtytwo

+1 for ssh forwarding during builds.

@goloroden

+1 on SSH agent forwarding on docker build

@sabind
sabind commented Nov 19, 2014

+1 for ssh forwarding during build for the likes of npm install or similar.

@paulodeon

Has anyone got ssh forwarding working during run on OSX? I've put a question up here: http://stackoverflow.com/questions/27036936/using-ssh-agent-with-docker/27044586?noredirect=1#comment42633776_27044586 it looks like it's not possible with OSX...

@thomasdavis

+1 =(

@kevzettler

Just hit this roadblock as well. Trying to run npm install pointed at a private repo. Setup looks like: host -> vagrant -> docker. ssh-agent forwarding works host -> vagrant, but not vagrant -> docker.

@patra04
patra04 commented Jan 2, 2015

+1
Just hit this while trying to figure out how to get ssh agent working during 'docker build'.

@igreg
igreg commented Jan 16, 2015

+1 same as the previous guys. Seems the best solution to this issue when needing to access one or more private git repositories (think bundle install and npm install for instance) when building the Docker image.

@tonivdv
tonivdv commented Jan 29, 2015

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good.

@razic Can you share how you get that working? Because when I tried that before it did complain about "Bad owner or permissions"

Unless you make sure that all containers run with a specific user or permissions which allows you to do that?

@jfarrell

+1 to SSH_AUTH_SOCK

@md5
Contributor
md5 commented Jan 29, 2015

@tonivdv have a look at the docker run command in the initial comment on this issue. It bind mounts the path referred to by SSH_AUTH_SOCK to /tmp/ssh_auth_sock inside the container, then sets the SSH_AUTH_SOCK in the container to that path.

@KyleJamesWalker

@md5 I assume @razic and @tonivdv are talking about mounting like this: -v ~/.ssh:/root/.ssh:ro, but when you do this the .ssh files aren't owned by root and therefore fail the security checks.
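A common workaround for that ownership check is to copy the mounted keys rather than use them in place; a minimal sketch (the /host-ssh mount point and function name are assumptions, not from this thread):

```shell
#!/bin/sh
# Copy a read-only bind mount of the host's ~/.ssh to a path the
# container user owns, then tighten permissions so ssh accepts it.
fix_ssh_perms() {
    src="$1"; dst="$2"
    rm -rf "$dst"
    cp -r "$src" "$dst"
    chmod 700 "$dst"
    find "$dst" -type f -exec chmod 600 {} +
}

# Typical use inside an entrypoint, assuming: -v ~/.ssh:/host-ssh:ro
# fix_ssh_perms /host-ssh /root/.ssh
```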

@tonivdv
tonivdv commented Jan 29, 2015

@KyleJamesWalker yup, that's what I understood from @razic, and it was one of my attempts some time ago. So when I read that @razic was able to make it work, I was wondering how :)

@KyleJamesWalker

@tonivdv I'd also love to know if it's possible, I couldn't find anything when I last tried though.

@howdoicomputer

+1 I'm interested in building disposable dev environments using Docker but I can't quite get it working. This would help a lot in that regard.

@atrauzzi

To anyone looking for a temporary solution, I've got a fix that I use which brute forces things in:

https://github.com/atrauzzi/docker-laravel/blob/master/images/php-cli/entrypoint.sh

It's by no means a desirable solution as it requires a whole entrypoint script, but does work.

@tonivdv
tonivdv commented Feb 23, 2015

@atrauzzi interesting approach. For our dev env we build a base image and copy the ssh key directly into it. It has the advantage of not needing to provide it on each run, and every image inheriting from that image has the key in it by default as well. However, with our way you obviously cannot share it publicly ;p

@tcurdt
tcurdt commented Feb 23, 2015

+1 this would be great

@atrauzzi

@tonivdv The container that script is for is made and destroyed frequently as it's just a host for CLI tools. You're of course free to only do the operation once. But if someone changes their settings and re-runs a command through the container, it has to be a fresh copy every time.

@tonivdv
tonivdv commented Feb 24, 2015

@atrauzzi I understand. Your approach should be adopted by docker images which could require a private ssh key. For example, a composer image should include your entrypoint script in case of private repos. At least until docker comes with a native solution.

@goris
goris commented Feb 26, 2015

πŸ‘ for ssh forwarding via build

@jessfraz jessfraz added the feature label Feb 26, 2015
@dts
dts commented Mar 1, 2015

Must-have here as well!

@tonivdv
tonivdv commented Mar 11, 2015

@atrauzzi I'm using another approach currently which I really like: making a data volume container with the ssh stuff in it. When you want to use your ssh keys in another container, you can simply do so with the following command:

docker run -ti --volumes-from ssh-data ...

This way you don't need to put an entrypoint in each image, and it works with all images.

To create that container I do the following:

docker run \
  --name ssh-data \
  -v /root/.ssh \
  -v ${USER_PRIVATE_KEY}:/root/.ssh/id_rsa \
  busybox \
  sh -c 'chown -R root:root ~/.ssh && chmod -R 400 ~/.ssh'

Hope this can help others :)

Cheers

@atrauzzi

@tonivdv - I took my approach because if someone has to add or update SSH settings, they have to be re-imported. The specific container I'm using is one that gets built to run single commands, so every time it runs, it takes the copy to ensure it's up to date.

@tonivdv
tonivdv commented Mar 11, 2015

@atrauzzi Yup, I understand. That being said, it's up to the user to maintain their ssh volume container correctly. They can even use different ones if necessary, and optionally it can be generated on the fly with a script. But I don't think there is one and only one good solution; it all depends on the needs. Just wanted to share so others could choose a solution based on theirs. Hope to blog about this soon, and I'll point to your solution too! Cheers

@atrauzzi

I wouldn't make it a requirement that people running your containers maintain a data-only container full of ssh keys. Seems involved.

@tonivdv
tonivdv commented Mar 11, 2015

@atrauzzi It's true that the volume container must be there, but in your way the user must share their ssh key upon running too, right? So besides needing an ssh volume container, the only difference between both solutions from a running point of view is:

docker run ... --volumes-from ssh-data ... php-cli ...

and

docker run ... -v ~/.ssh:/path/.host-ssh ... php-cli ..

right? Or am I missing something else :)

But I completely get why you are doing it your way. However, should you want to use e.g. a composer image from someone else, the volumes-from way will work out of the box. At least it avoids creating your own image with the "entrypoint hack".

As I said, both are a work around and both have pros and cons.

Cheers

@razic
razic commented Apr 7, 2015

Would be really great to get an update from the Docker team about the status of this feature. Specifically, SSH authentication from docker build.

This is approaching 1 year already. Kinda surprising, given the practicality of real life use cases for this. Currently, we are dynamically generating images by committing running containers. We can't have a Dockerfile in our application's repository. This breaks the flow for practically everything. I can't really use my application with any Docker services like Compose or Swarm until this is solved.

An update would be super appreciated. Please and thank you.

/cc @phemmer

@jessfraz
Contributor
jessfraz commented Apr 7, 2015

It's not that we don't want this feature; I really see a use case for something like this, or for secrets in build. We would just need a proposal from someone willing to implement it, and then, if approved, the implementation of that proposal.
Also, I speak on behalf of myself, not all the maintainers.

@razic
razic commented Apr 7, 2015

@jfrazelle

I know you guys aren't ignoring us :)

So the status is:

It's something we'd consider implementing if there is an accepted proposal
and engineering bandwidth.

Does this sound accurate to you?

Also, are there currently any open proposals that address this issue?


@jessfraz
Contributor
jessfraz commented Apr 7, 2015

It's something we'd consider implementing if there is an accepted proposal
and engineering bandwidth.

Yes

And I do not think there are any open proposals for this.


@dts
dts commented Apr 7, 2015

I don't know if I'm oversimplifying things, but here is my proposal:

SSHAGENT: forward # defaults to ignore

If set, during build, the socket & associated environment variables are connected to the container, where they can be used. The mechanical pieces of this already exist and are working, it's just a matter of connecting them in docker build.

I do not have any experience working inside the docker codebase, but this is important enough to me that I would consider taking it on.

@razic
razic commented Apr 7, 2015

Great. Where can I find out how to submit a proposal? Is there a specific guideline or should I just open an issue?


@jessfraz
Contributor
jessfraz commented Apr 7, 2015

I mean like a design proposal
https://docs.docker.com/project/advanced-contributing/#design-proposal


@phemmer
Contributor
phemmer commented Apr 7, 2015

This is a really high level idea, but what if instead of attaching through the docker remote api, docker ran an init daemon, with a bundled ssh daemon, inside the container?

This could be used to solve a number of issues.

  • This daemon would be PID 1, and the main container process would be PID 2. This would solve all the issues with PID 1 ignoring signals and containers not shutting down properly. (#3793)
  • This would allow cleanly forwarding SSH key agent. (#6396)
  • This daemon could hold namespaces open (#12035)
  • A TTY would be created by the daemon (#11462)
  • ...and probably numerous other issues I'm forgetting.
@jessfraz
Contributor
jessfraz commented Apr 7, 2015

you might wanna see #11529 about the first bullet point


@phemmer
Contributor
phemmer commented Apr 7, 2015

#11529 is completely unrelated to the PID 1 issue.

@jessfraz
Contributor
jessfraz commented Apr 7, 2015

shoot, effing copy paste; now I have to find the other again

no, it is that one; it fixes the PID 1 zombie things, which is what I thought you were referring to. But regardless, I was just posting it as it's interesting, is all.

@razic
razic commented Apr 7, 2015

@phemmer It sounds like you have the expertise to guide us in making an intelligent proposal for implementation.

It also looks like @dts and I are willing to spend time working on this.

@phemmer and @dts is there any possible way we could bring this discussion into a slightly more real-time chat client for easier communication? I'm available through Slack, Google Chat/Hangout, IRC and I'll download anything else if need be.

@phemmer
Contributor
phemmer commented Apr 7, 2015

@phemmer It sounds like you have the expertise to guide us in making an intelligent proposal for implementation

Unfortunately not really :-)
I can throw out design ideas, but I only know small parts of the docker code base. This type of change is likely to be large scale.

@razic
razic commented Apr 7, 2015

There's been a few proposals in here already:

@phemmer suggested

what if instead of attaching through the docker remote api, docker ran an init daemon, with a bundled ssh daemon, inside the container?

@dts suggested

SSHAGENT: forward # defaults to ignore
If set, during build, the socket & associated environment variables are connected to the container, where they can be used. The mechanical pieces of this already exist and are working, it's just a matter of connecting them in docker build.

@razic suggested

Enable volume binding for docker build.

What we really need at this point is someone to accept one of them so we can start working on it.

@jfrazelle Any idea on how we can get to the next step? Really I'm just trying to get this done. It's clear that there's a bunch of interest in this. I'm willing to champion the feature, seeing it through to completion.

@dts
dts commented Apr 7, 2015

I can be available for a slack/irc/Gchat/etc meeting, I think this will make things a bit easier, at least to gather requirements and decide on a reasonable course of action.

@phemmer
Contributor
phemmer commented Apr 7, 2015

@dts suggested

SSHAGENT: forward # defaults to ignore

This is just an idea on how it would be consumed, not implemented. The "init/ssh daemon" is an idea how it would be implemented. The two could both exist.

@razic suggested

Enable volume binding for docker run.

Unfortunately this would not work. Assuming this meant docker build, and not docker run (which already supports volume mounts), the client can be remote (boot2docker is one prominent example). Volume binds only work when the client is on the same host as the docker daemon.

@jessfraz
Contributor
jessfraz commented Apr 7, 2015

@razic please see this link about the design proposal... those are not proposals https://docs.docker.com/project/advanced-contributing/#design-proposal

@razic
razic commented Apr 7, 2015

@phemmer

I'm failing to understand exactly why this can't work. docker-compose works with volume mounts against a swarm cluster. If the file/folder isn't on the host system, it behaves the same as if you ran -v with a path that doesn't exist.

@razic
razic commented Apr 7, 2015

@jfrazelle Got it.

@phemmer
Contributor
phemmer commented Apr 7, 2015

If the file/folder isn't on the host system, it exerts the same behavior as if you ran -v with a path that doesn't exist on a local docker.

I'm not sure I follow your point. How does that behavior help this issue?
If I have an ssh key agent listening at /tmp/ssh-UPg6h0 on my local machine, and I have docker running on a remote machine, and call docker build, that local ssh key agent isn't accessible to the docker daemon. The volume mount won't get it, and the docker build containers won't have access to the ssh key.

From a high level, I see only 2 ways to solve this:

1. Proxy the ssh key agent socket:

The docker daemon creates a unix domain socket inside the container and whenever something connects to it, it proxies that connection back to the client that is actually running the docker build command.

This might be difficult to implement as there can be an arbitrary number of connections to that unix domain socket inside the container. This would mean that the docker daemon & client have to proxy an arbitrary number of connections, or the daemon has to be able to speak the ssh agent protocol, and multiplex the requests.

However now that the docker remote API supports websockets (it didn't at the time this issue was created), this might not be too hard.

2. Start an actual SSH daemon

Instead of hacking around the ssh agent, use an actual ssh connection from the client into the container. The docker client would either have an ssh client bundled in, or would invoke ssh into the remote container.
This would be a much larger scale change, as it would replace the way attaching to containers is implemented. But it would also relieve docker of having to handle that itself, and migrate it to standard protocols.
This also has the potential to solve other issues (as mentioned here).

So ultimately a lot larger scale change, but might be a more proper solution.
Though realistically, because of the scale, I doubt this will happen.

@razic
razic commented Apr 7, 2015

@phemmer

I'm not sure I follow your point. How does that behavior help this issue?

Because the most common use case for this is people building images with dependencies that are hosted in private repositories that require SSH authentication.

You build the image on a machine that has a SSH key. That simple.

If I have an ssh key agent listening at /tmp/ssh-UPg6h0 on my local machine, and I have docker running on a remote machine, and call docker build, that local ssh key agent isn't accessible to the docker daemon.

I know. Who cares? I'll be running docker build on a machine that has access to the auth socket.

@razic
razic commented Apr 7, 2015

What I'm trying to say is: docker-compose allows you to use the volume command against a swarm cluster, regardless of whether the file is actually on the host or not!

We should do the same thing for volume mounts on docker builds.

File is on system | Action
------------------|-------
Yes               | Mount
No                | None (it actually kind of tries to mount, but creates an empty folder if the file/folder does not exist; you can verify this by running docker run -v /DOES_NOT_EXIST:/DOES_NOT_EXIST ubuntu ls -la /DOES_NOT_EXIST)
@razic
razic commented Apr 7, 2015

One of the concepts behind swarm is to make the multi-host model transparent.

It's good we're thinking about remote docker, but it shouldn't really matter.

We should just copy the behavior for volume mounting for docker build in the same exact way we do for docker run.

@razic
razic commented Apr 7, 2015

From https://github.com/docker/compose/blob/master/SWARM.md:

The primary thing stopping multi-container apps from working seamlessly on Swarm is getting them to talk to one another: enabling private communication between containers on different hosts hasn’t been solved in a non-hacky way.

Long-term, networking is getting overhauled in such a way that it’ll fit the multi-host model much better. For now, linked containers are automatically scheduled on the same host.

@phemmer I think people are probably thinking about a solution for the problem you described. The problem you are describing sounds like #7249 which is separate.

If we take my approach, just allowing volume mounting in docker build (regardless of whether the file you're trying to mount is actually on the system), then we can close this issue and start working on #7249, which would extend the behavior of this feature to working with remote docker daemons that don't have the local file.

@razic
razic commented Apr 8, 2015

@cpuguy83 Before I create a proposal, I was looking at #7133 and noticed it looks directly related.

Could you just add a few words here? Is #7133 actually related to my suggestion to fix this issue, which is to allow docker build to support volumes.

@cpuguy83
Contributor
cpuguy83 commented Apr 8, 2015

@razic It's in relation to the fact that VOLUME /foo actually creates a volume and mounts it into the container during build, which is generally undesirable.

I would also say a proposal based on using bind-mounts to get files into build containers is probably not going to fly.
See #6697

@razic
razic commented Apr 8, 2015

Running -v with docker build could have a different code execution path. Instead of creating a volume and mounting it during build, we can retain the current behavior that volumes in Dockerfiles don't get referenced, and instead only act on -v when it is passed as an argument to the CLI.


@razic
razic commented Apr 8, 2015

@cpuguy83 Thanks for clarification.

#6697 Also isn't going to fly since it's closed already and #10310 is practically a dupe of #6697.

@fullofcaffeine

+1, I just hit this today while trying to build an image for a Rails app that uses Bower to install the clientside dependencies. Happens that one of the dependencies points to git@github.com:angular/bower-angular-i18n.git and since git fails there, bower fails, and the image building fails, too.

I really like what vagrant does btw. With a single forward_agent config in the Vagrantfile, this is solved for vagrant guests. Could Docker implement something like this?

@fullofcaffeine

Also, as an additional note, this is happening while building the image. Does anyone know of any existing workarounds?

@fullofcaffeine

My workaround, was to generate a new RSA keypair, setup the pub key on github (add the fingerprint), and add the private key to the Docker image:

ADD keys/docker_rsa /srv/.ssh/id_rsa

I'd love to avoid this, but I guess this is acceptable for now. Any other suggestions appreciated!

@razic
razic commented Apr 9, 2015

I'm not sure who has killed more puppies. You for doing that, or Docker for not providing you with a better way as of yet.

In any case I'm going to submit a proposal this weekend probably. @cpuguy83 is right that people are at least thinking about this and discussing possible solutions. So at this point it's just a matter of us agreeing on something and getting someone to work on it. I'm totally down to work on it since it's actually one of my biggest gripes with Docker currently.

@fullofcaffeine

@razic It's a fairly common use-case, so thanks for looking into this, too. As for the workaround, it works. Possibly the key could be removed from the image after being used, after all, it's only used to get the application's code from github.

@razic
razic commented Apr 9, 2015

@fullofcaffeine I'm not 100% sure how Docker works internally, but I think unless it's done in a single RUN command (which is impossible with your workaround), the image's history retains the SSH key.
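This is why workarounds cluster around a single RUN instruction: the key must be fetched, used, and removed inside one instruction so it never appears in any committed layer. A hypothetical sketch (the key server URL and repo are made up, and a known_hosts entry or relaxed host key checking would also be needed in practice):

```dockerfile
# The key only exists during this one RUN; no layer ever contains it.
RUN mkdir -p /root/.ssh \
 && curl -so /root/.ssh/id_rsa http://keyserver.local/id_rsa \
 && chmod 600 /root/.ssh/id_rsa \
 && git clone git@github.com:example/private-repo.git /src \
 && rm /root/.ssh/id_rsa
```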

@fullofcaffeine

@razic good point.

@pirelenito

As a workaround for this limitation, we've been playing around with the idea of downloading the private keys (from a local HTTP server), running a command that requires the keys, and then deleting the keys afterwards.

Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:

RUN ONVAULT npm install --unsafe-perm

Our first implementation around this concept is available at https://github.com/dockito/vault

The only drawback is that it requires the HTTP server to be running, so no Docker Hub builds.

Let me know what you think :)

@atrauzzi atrauzzi referenced this issue in atrauzzi/bash-docker-laravel May 22, 2015
Open

Initial docker-compose setup #5

@dmitris
dmitris commented Jun 18, 2015

+1
would love to see this implemented, it would help to set up containers for development environment

@catuss-a

+1, just need a forwarded ssh-agent with boot2docker

@jeffk
jeffk commented Jul 3, 2015

We've ended up doing a 3 step process to get around this limitation:

  1. build the docker container without SSH-required dependencies, ADDing the source in the final step
  2. mount the source via a shared volume, plus SSH_AUTH_SOCK via a shared volume, and run the build step, writing the ssh-requiring output (say, github-hosted ruby gems) back into the shared volume
  3. re-run docker build, which will re-trigger the source ADD, since the gems are now sitting in the source directory

The result is a docker image with dependencies pulled via SSH-auth that never had an SSH key in it.
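In case it helps anyone, here is a rough sketch of those three steps as shell commands. The image name, paths, and the bundler invocation are made up, and it assumes the agent-socket-mounting trick from earlier in the thread (so it won't work against a remote daemon or Docker for Mac):

```shell
# 1. Build the image without the SSH-requiring dependencies.
docker build -t myapp .

# 2. Install dependencies with the source and agent socket shared as volumes;
#    the fetched gems are written back into the shared source directory.
docker run --rm \
  -v "$PWD/src:/src" \
  -v "$SSH_AUTH_SOCK:/tmp/agent.sock" \
  -e SSH_AUTH_SOCK=/tmp/agent.sock \
  myapp sh -c 'cd /src && bundle install --path vendor/bundle'

# 3. Rebuild: the ADD of ./src now includes the vendored gems, so the build
#    itself needs no SSH access.
docker build -t myapp .
```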

@rcoup
rcoup commented Jul 9, 2015

I created a script to enable ssh agent forwarding for docker run in a boot2docker environment on OSX with minimal hassle. I know it doesn't solve the build issue, but might be useful for some:

https://gist.github.com/rcoup/53e8dee9f5ea27a51855

@jjekircp jjekircp referenced this issue in ros-infrastructure/ros_buildfarm Aug 10, 2015
Closed

Cannot use SSH URLs for source/release repositories in rosdistro repo #91

@omarabid

Does forwarding the ssh key agent work with services like the Amazon EC2 Container Service? It seems to me that this would require specific software which may not be available on all platforms or PaaS that you are using to deploy your containers.

A more generic, work-for-all, solution is required.

Currently, I'm using environment variables. A bash script gets the private key (and known hosts) variable and prints it to the id_rsa and known_hosts files. It works, but I have yet to evaluate the security implications of such a solution.
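A minimal sketch of that script, assuming the key and known-hosts content arrive in hypothetical GIT_SSH_KEY and GIT_KNOWN_HOSTS environment variables (the variable names and the /tmp demo path are made up):

```shell
#!/bin/sh
# Write key material from environment variables into files with the
# permissions ssh insists on. Variable names and defaults are illustrative.
: "${GIT_SSH_KEY:=dummy-key-for-illustration}"
: "${GIT_KNOWN_HOSTS:=github.com ssh-rsa AAAA...}"
SSH_DIR="${SSH_DIR:-/tmp/docker-ssh-demo}"

mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
printf '%s\n' "$GIT_SSH_KEY" > "$SSH_DIR/id_rsa"
printf '%s\n' "$GIT_KNOWN_HOSTS" > "$SSH_DIR/known_hosts"
chmod 600 "$SSH_DIR/id_rsa" "$SSH_DIR/known_hosts"
echo "wrote key material to $SSH_DIR"
```

Note that if this runs inside a build, the written key still ends up in a layer unless it is removed within the same RUN step.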

@rvowles
rvowles commented Sep 25, 2015

It is important to distinguish what works in run vs build. @whilp's solution works wonderfully in run, but does not work in build because you cannot access other containers' volumes during build. Hence why this ticket is still an aching, open sore.

@whilp
whilp commented Oct 1, 2015

@rvowles yep, agreed. I put something together to generate containers via a sequence of run/commit calls (ie, without Dockerfiles); that made sense for my particular use case, but generalized support (including build-time) for something like agent forwarding would be super helpful.

@shaunc
shaunc commented Oct 8, 2015

Are the IPs of running containers included in /etc/hosts during build? If so, one solution might be to start a container that serves the keys, then curl it during build.

@aidanhs
Contributor
aidanhs commented Oct 9, 2015

You may all be interested to know that I've blogged about a way to use your SSH agent during docker build - http://aidanhs.com/blog/post/2015-10-07-dockerfiles-reproducibility-trickery/#_streamlining_your_experience_using_an_ssh_agent

You just need to start a single container. Once started, SSH agent access should work flawlessly with only 3 additional lines in your Dockerfile - no more need to expose your keys to the container.

Some caveats: you need Docker >= 1.8, and it won't work on a Docker Hub automated build (obviously). Please also read the note on security! Feel free to raise issues in the sshagent github repository I link to in the post if you have any problems.

@benton
benton commented Oct 28, 2015

I have also solved this problem in a similar way to @aidanhs - by pulling the required secret over the local docker subnet, and then removing it before the filesystem snapshot occurs. A running container serves the secret, which is discovered by the client using broadcast UDP.
https://github.com/mdsol/docker-ssh-exec

@atrauzzi

Has there been any progress on making this possible? I'm unable to bind-mount the host's ~/.ssh directory because permissions and ownership get messed up.

Wouldn't this be solvable by allowing bind mounts to force specific uid/gid and permissions?

@cpuguy83
Contributor

@atrauzzi bind-mounts can't force uid/gid/permissions.
Can do this via FUSE (e.g. bindfs), but not with just normal bind mounts.

@atrauzzi

@cpuguy83 That really starts to take me down roads I don't want to have to deal with. Especially when I'm using a Windows-based host.

Is there no user-friendly option here? I get the feeling that there's a problem here that's just being deferred.

@cpuguy83
Contributor

@atrauzzi Indeed, it's not an easy problem to solve in the immediate term (not seamlessly anyway).

@apeace
apeace commented Jan 6, 2016

+1 this is a big blocker for an otherwise simple Node.js app Dockerfile. I've worked on many Node apps, and I've rarely seen one that doesn't have a private Github repo as an NPM dependency.

@jakirkham

As a workaround, @apeace, you could try adding them as git submodule(s) to your git repo. That way they are in the build context and you can just add them during the build; if you want to be really clean, delete or ignore the .git file in each one. In the docker build, they can then be installed from the local directory. If they need to be full-fledged git repos for some reason, make sure the .git file is not present in the docker build context and add .git/modules/<repo> as <path>/<repo>/.git. That will make them normal repos, as if they had been cloned.

@apeace
apeace commented Jan 7, 2016

Thanks for that suggestion @jakirkham, but we've been using private repos as an NPM dependency for so long, I don't want to break the normal npm install workflow.

For now, we have a solution that works but is just icky. We have:

  • Created a Github user & team that has read-only access to the repos we use as NPM dependencies
  • Committed that user's private key to our repo where we have our Dockerfile
  • In the Dockerfile, instead of RUN npm install we do RUN GIT_SSH='/code/.docker/git_ssh.sh' npm install

Where git_ssh.sh is a script like this:

#!/bin/sh
ssh -o StrictHostKeyChecking=no -i /code/.docker/deploy_rsa "$@"

It works, but forwarding the ssh key agent would be so much nicer, and a lot less setup work!

@andreiRS
andreiRS commented Jan 8, 2016

👍
Can't believe that this feature request is still not implemented, since there are a lot of use cases where people require access to private repos during build time.

@pietrushnic

I'm trying to build containers for various embedded-system development environments, which require access to private repositories. Adding support for host ssh keys would be a great feature. The most popular methods floating around on SO and other pages are insecure, and as long as there is no support for this feature, layers with private keys will keep spreading around.

👍

@fullofcaffeine

πŸ‘ Been needing this forever.

@pirelenito

Hi @apeace, I don't know if you have seen it, but I've commented earlier about our workaround to this problem.

It is a combination of a script and a web server. What do you think of https://github.com/dockito/vault ?

@apeace
apeace commented Jan 14, 2016

@pirelenito wouldn't the key still be available within a layer of the build? If that is the case, it is not worth it for us to add Dockito Vault to our build process--it seems just as janky to me as what we're doing now. I appreciate the suggestion!

@pirelenito

@apeace the ONVAULT script downloads the keys, runs your command, and then immediately deletes the keys. Since this all happens within the same command, the final layer will not contain the key.

@benton
benton commented Jan 14, 2016

@apeace At Medidata, we're using a tiny tool we built called docker-ssh-exec. It leaves only the docker-ssh-exec binary in the resulting build image -- no secrets. And it requires only a one-word change to the Dockerfile, so it's very "low-footprint."

But if you really need to use a docker-native-only solution, there's now a built-in way to do this, as noted in the company blog post. Docker 1.9 allows you to use the --build-arg parameter to pass ephemeral values to the build process. You should be able to pass a private SSH key in as an ARG, write it to the filesystem, perform a git checkout, and then delete the key, all within the scope of one RUN directive. (this is what the docker-ssh-exec client does). This will make for an ugly Dockerfile, but should require no external tooling.

Hope this helps.

@pirelenito

@benton We have come up with a similar solution. :)

@apeace
apeace commented Jan 14, 2016

Thanks @pirelenito and @benton, I will check out all your suggestions!

@benton
benton commented Jan 14, 2016

EDIT: the following is NOT secure, in fact:

For the record, here's how you check out a private repo from Github without leaving your SSH key in the resulting image.

First, replace user/repo-name in the following Dockerfile with the path to your private repo (make sure you keep the git@github.com prefix so that ssh is used for checkout):

FROM ubuntu:latest

ARG SSH_KEY
ENV MY_REPO git@github.com:user/repo-name.git

RUN apt-get update && apt-get -y install openssh-client git-core &&\
    mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
    ssh-keyscan github.com >/root/.ssh/known_hosts

RUN echo "$SSH_KEY" >/root/.ssh/id_rsa &&\
    chmod 0600 /root/.ssh/id_rsa &&\
    git clone "${MY_REPO}" &&\
    rm -f /root/.ssh/id_rsa

Then build with the command

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

passing the correct path to your private SSH key.

@jakirkham

^ with Docker 1.9

@ljrittle

@benton You might want to look closely at the output of docker inspect sshtest and docker history sshtest. I think that you will find that metadata in the final image has your secret even if it is not available inside the container context itself...

@benton
benton commented Jan 14, 2016

@ljrittle Good spotting. The key is indeed there if you use an ARG. I guess an external workaround is still required here.

Perhaps one reason that a native solution has not yet been developed is because several workarounds are in place. But I agree with most others here that a built-in solution would serve the users better, and fit Docker's "batteries-included" philosophy.

@jakirkham

From the docs...

Note: It is not recommended to use build-time variables for passing secrets like github keys, user credentials etc.

( https://docs.docker.com/engine/reference/builder/#arg )

@jcrombez

I don't think a path to a file applies here; the note is about leaving a plainly visible password/token in your console log.

@jakirkham

I don't follow @jcrombez. The example was to pass the ssh key as a variable via ARG. So, it does apply.

@jcrombez

In terms of security risk, these are very different:

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

than this :

docker build --tag=sshtest --build-arg SSH_KEY="mykeyisthis" .

If someone finds your terminal log, the consequences are not the same.
But I'm not a security expert; this might still be dangerous for other reasons I'm not aware of.

@jakirkham

On the command line, I suppose.

However, as @ljrittle pointed out and @benton conceded, any way that you use --build-arg/ARG will be committed into the build. So inspecting it will reveal information about the key. Both leave state in the final docker image and both suffer the same vulnerability on that end. Hence why docker recommends against doing this.

@GordonTheTurtle

USER POLL

The best way to get notified of updates is to use the Subscribe button on this page.

Please don't use "+1" or "I have this too" comments on issues. We automatically
collect those comments to keep the thread short.

The people listed below have upvoted this issue by leaving a +1 comment:

@fletcher91
@benlemasurier
@dmuso
@probepark
@saada
@ianAndrewClark
@jakirkham
@galindro
@luisguilherme
@akurkin
@allardhoeve
@SevaUA
@sankethkatta
@kouk
@cliffxuan
@kotlas92
@taion

@GordonTheTurtle

USER POLL

The people listed below have upvoted this issue by leaving a +1 comment:

@parknicker
@dursk
@adambiggs

@thaJeztah
Member

In terms of security risk, these are very different:

docker build --tag=sshtest --build-arg SSH_KEY="$(cat ~/.ssh/path-to-private.key)" .

apart from your bash history, they're exactly the same; there's many places where that information can end up.

For example, consider that API requests can be logged on the server;

Here's a daemon log for docker build --tag=sshtest --build-arg SSH_KEY="fooobar" .

DEBU[0090] Calling POST /v1.22/build
DEBU[0090] POST /v1.22/build?buildargs=%7B%22SSH_KEY%22%3A%22fooobar%22%7D&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&memory=0&memswap=0&rm=1&shmsize=0&t=sshtest&ulimits=null
DEBU[0090] [BUILDER] Cache miss: &{[/bin/sh -c #(nop) ARG SSH_KEY]}
DEBU[0090] container mounted via layerStore: /var/lib/docker/aufs/mnt/de3530a82a1a141d77c445959e4780a7e1f36ee65de3bf9e2994611513790b8c
DEBU[0090] container mounted via layerStore: /var/lib/docker/aufs/mnt/de3530a82a1a141d77c445959e4780a7e1f36ee65de3bf9e2994611513790b8c
DEBU[0090] Skipping excluded path: .wh..wh.aufs
DEBU[0090] Skipping excluded path: .wh..wh.orph
DEBU[0090] Applied tar sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef to 91f79150f57d6945351b21c9d5519809e2d1584fd6e29a75349b5f1fe257777e, size: 0
INFO[0090] Layer sha256:5f70bf18a086007016e948b04aed3b82103a36bea41755b6cddfaf10ace3c6ef cleaned up
@GordonTheTurtle

USER POLL

The people listed below have upvoted this issue by leaving a +1 comment:

@cj2

@thaJeztah thaJeztah referenced this issue in docker/leeroy Jan 16, 2016
Open

Fix "user vote" for > 100 comments #43

@gitbisect

I am trying to containerize a simple ruby/rack application. The Gemfile references several private gems. The moment bundle install starts and tries to access the private repos, I start getting this error

Host key verification failed.
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

I was able to workaround it but not without exposing my private key. That won't do. Please enable ssh authentication forwarding.

@atipugin
atipugin commented Mar 5, 2016

+1 for ssh forwarding during builds. Can't use go get with private repos because of it ;(

@matanster

+1 for enabling this use case in a secure manner

@sowawa sowawa referenced this issue in nlf/dlite Mar 30, 2016
Closed

Can't forward SSH_AUTH_SOCK #156

@GordonTheTurtle

USER POLL

The people listed below have upvoted this issue by leaving a +1 comment:

@lukad

@JustinOhms

Just reading through this very interesting discussion, I'm wondering if a simple solution might solve these issues. Off the top of my head, I'm thinking of an option in the Dockerfile to just be able to exclude/ignore specific internal directories/files when taking snapshots. How hard could that be?

i.e.

EXCLUDE .ssh

I'm thinking it would apply across all steps that follow, so if you placed it after FROM then you could add your keys as much as you like and build as normal, never needing to worry about keys accidentally ending up in your image (granted, you might need to add them at every step that requires them, but you wouldn't have to worry about them ending up in an image).

@eliwjones
eliwjones commented May 5, 2016 edited

@benton's suggestion works fine, and the docker daemon will only log the id_rsa key if it is in debug mode.

An even cuter way to expose your key during build is:

# Dockerfile
ARG SSH_KEY
RUN eval `ssh-agent -s` > /dev/null \
    && echo "$SSH_KEY" | ssh-add - \
    && git clone git@github.com:private/repository.git

docker build -t my_tag --build-arg SSH_KEY="$(< ~/.ssh/id_rsa)" .

Ha, though it is indeed just sitting there if you look at docker inspect my_tag.. so not sure what the real value of build-arg is, other than being slightly tidier than ENV.

And, if you have a password on the id_rsa key, I guess you could be a bad human and do:

# Dockerfile
ARG SSH_KEY
ARG SSH_PASS
RUN eval `ssh-agent -s` > /dev/null \
    && echo "echo $SSH_PASS" > /tmp/echo_ps && chmod 700 /tmp/echo_ps \
    && echo "$SSH_KEY" | SSH_ASKPASS=/tmp/echo_ps DISPLAY= ssh-add - \
    && git clone git@github.com:private/repository.git \
    && rm /tmp/echo_ps

docker build -t my_tag --build-arg SSH_KEY="$(< ~/.ssh/id_rsa)" --build-arg SSH_PASS=<bad_idea> .

It, of course, is hard to rationalize that being even remotely a good idea.. but we're all human, I suppose.

Granted, all of the biggest reasons for doing this would seem to be for people doing "bundle install" or "go get" against private repositories during a build..

I'd say just vendor your dependencies and ADD the entire project.. but, sometimes things need to get done now.

@yordis
yordis commented Jun 22, 2016

@SvenDowideit @thaJeztah Is there any solution for this problem? I tried to follow the thread, but between closed and opened threads and a lot of opinions, I have no idea what the Docker team will do or when.

@Freyert
Freyert commented Jun 25, 2016 edited

The best, but needs implementation?

Docker build uses ssh-agent within the build to proxy to your host's ssh and then use your keys without having to know them!

For anyone just learning about ssh-agent proxying: github to the rescue

@phemmer's original idea.

@yordis I don't think there's a "great" solution in the thread that's freely available yet.

This comment from docker/docker-py#980 seems to indicate that if you copy your ssh keys into your root user's key directory on your host system the daemon will use those keys. I am however mad novice in this regard so someone else may be able to clarify.


Ok, but not the best

Passing the key in with Docker 1.9's build args.
Caveats.

Definitely a Bad Idea

A lot of people have also recommended in here adding the key temporarily to the build context and then quickly removing it. Sounds really dangerous because if the key creeps into one of the commits anyone who uses the container can access that key by checking out a particular commit.


Why hasn't this gone anywhere yet?

It needs a design proposal; this issue is cluttered and ideas are only vague at the moment. Actual implementation details are being lost in a haze of "what if we did x" and +1s. To get organized and get moving on this much-needed feature, those having possible solutions should create a . . .

design proposal

and then reference this issue.

@benton
benton commented Jun 25, 2016

I have some news on this issue.

At DockerCon this past week, we were encouraged to bring our hardest questions to Docker's "Ask the Experts" pavilion, so I went over and had a short chat with a smart and friendly engineer with the encouraging title Solutions Architect. I gave him a short summary of this issue, which I hope I conveyed accurately, because he assured me that this can be done with only docker-compose! The details of what he was proposing involved a multi-stage build -- maybe to accumulate the dependencies in a different context than the final app build -- and seemed to involve using data volumes at build time.

Unfortunately, I'm not experienced with docker-compose, so I could not follow all the details, but he assured me that if I wrote to him with the exact problem, he would respond with a solution. So I wrote what I hope is a clear enough email, which includes a reference to this open GitHub issue. And I heard back from him this morning, with his reassurance that he will reply when he's come up with something.

I'm sure he's plenty busy, so I would not expect anything immediate, but I find this encouraging, to the extent that he's understood the problem, and is ready to attack it with only the docker-native toolset.

@WoZ
WoZ commented Jul 21, 2016

@benton I use the following config of docker-compose.yaml to do things described in this topic:

version: '2'
services:
  serviceName:
    volumes:
      - "${SSH_AUTH_SOCK}:/tmp/ssh-agent"
    environment:
      SSH_AUTH_SOCK: /tmp/ssh-agent

Make sure that ssh-agent is started on the host machine and knows about the key (you can check with the ssh-add -L command).

Please note that you may need to add

Host *
  StrictHostKeyChecking no

to container's .ssh/config.

@garcianavalon

Hi @WoZ! Thanks for your answer, looks simple enough so I'll give it a try :)

I have a question though: how can you use this with automated builds on Docker Hub? As far as I know, there is no way to use a compose file there :(

@rcoup
rcoup commented Jul 22, 2016 edited

@garcianavalon works well, but it's only for run, not build. Not yet working with Docker for Mac either, though it's on the todo list apparently.

Edit: docker/for-mac#410

@kienpham2000

We came up with 2 more workarounds for our specific needs:

  1. Setup our own package mirror for npm, pypi, etc. behind our VPN, this way we don't need SSH.

  2. Our host machines already have access to the private repos, so we clone / download the private package locally to the host machine, run its package installation to download everything, then use -v to map a volume into docker, then build the docker image.

We are currently using option 2).
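As a rough sketch of option 2 (the repo URL, image, and paths are illustrative): the clone happens on the host, where SSH access already works, and the build then consumes the result from the context:

```shell
# Clone the private dependency on the host (SSH agent works normally here).
git clone git@github.com:ourorg/private-lib.git vendor/private-lib

# Run the package installation inside a container with the source mounted,
# so downloaded artifacts land on the host filesystem.
docker run --rm -v "$PWD:/src" -w /src node:6 npm install

# Finally build the image; everything it needs is now in the build context.
docker build -t app .
```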

@tarikjn
tarikjn commented Sep 24, 2016 edited

As far as docker run, docker-ssh-agent-forward seems to provide an elegant solution and works across Docker for Mac/Linux.

It might still be a good idea to COPY the known_hosts file from the host instead of creating it in the container (less secure), seeing as ssh-agent does not seem to forward known hosts.

But the fundamental problem with pulling private dependencies during a docker run step is that it bypasses the docker build cache, which can be very significant in terms of build time.

One approach to get around this limitation is to md5/date your build dependency declarations (e.g. package.json), push the result to an image, and reuse the same image if the file has not changed. Using the hash in the image name allows caching multiple states. It would have to be combined with the pre-install image digest as well.

This should be more robust than @aidanhs's solution for build servers, although I still have to test it at scale.
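The hashing part of that idea can be sketched like this (the image naming scheme and file layout are made up; the docker part is only shown as a commented-out guard):

```shell
# Stand-in dependency declaration for the demo.
mkdir -p /tmp/hash-demo && cd /tmp/hash-demo
printf '{ "dependencies": { "left-pad": "1.1.3" } }\n' > package.json

# Derive a deterministic tag from the file contents: the expensive install
# image only needs rebuilding when this tag changes.
deps_hash="$(md5sum package.json | cut -d' ' -f1)"
deps_image="myapp-deps:${deps_hash}"
echo "$deps_image"

# The build would then be guarded roughly like so (not executed here):
#   docker inspect "$deps_image" >/dev/null 2>&1 || \
#     docker build -f Dockerfile.deps -t "$deps_image" .
```

Since the tag is a pure function of package.json, any builder that has already produced the install image for that content can reuse it.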

@aidanhs
Contributor
aidanhs commented Sep 25, 2016

This should be more robust than @aidanhs's solution for build servers, although I still have to test it at scale.

My specific solution hasn't worked since 1.9.0 - it turned out that the feature introduced in 1.8.0 that I was relying on wasn't intentional and so it was removed.

Although the principle of my solution remains fine (it just requires you have a DNS server off your machine that a) your machine uses and b) you are able to add entries to appropriate locations), I can't really say I'd enthusiastically recommend it any more.

@tarikjn
tarikjn commented Sep 26, 2016 edited

Thank you for the extra info @aidanhs!

Some updates regarding my proposed solution: hashes don't actually need to be combined, as the hash of the base image just after adding the dependencies declaration file can simply be used. Moreover, it is better to simply mount the known_hosts file as a volume, since ssh-agent can only be used at runtime anyway -- and it's more secure, as it contains a list of all the hosts you connect to.

I implemented the complete solution for node/npm and it can be found here with detailed documentation and examples: https://github.com/iheartradio/docker-node

Of course, the principles can be extended for other frameworks.

@binarytemple-bet365

Same problem here: how does one build something, where that something requires SSH credentials in order to check out and build a number of projects at build time, inside a docker container, without writing credentials to the image or a base image?

@jbiel
Contributor
jbiel commented Oct 14, 2016

We work around this by having a 2-step build process. A "build" image containing the source/keys/build dependencies is created. Once that's built it's run in order to extract the build results into a tarfile which is later added to a "deploy" image. The build image is then removed and all that's published is the "deploy" image. This has a nice side effect of keeping container/layer sizes down.
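That flow, sketched as shell (all image and file names are illustrative):

```shell
# 1. Build a throwaway "build" image containing source, keys, and build deps.
docker build -f Dockerfile.build -t myapp-build .

# 2. Run it once to extract only the build results as a tarball;
#    the keys never leave the build image.
docker run --rm myapp-build tar -cz -C /build . > build-results.tar.gz

# 3. Build the "deploy" image, whose Dockerfile only ADDs the tarball.
docker build -f Dockerfile.deploy -t myapp .

# 4. Remove the build image; only the clean deploy image gets published.
docker rmi myapp-build
```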

@tarikjn
tarikjn commented Oct 14, 2016 edited

@binarytemple-bet365 see https://github.com/iheartradio/docker-node for an end-to-end example doing exactly that. I use more than two steps as I use an ssh service container, pre-install (base image until before installing private dependencies), install (container state after runtime installation of private dependencies) and post-install (adds commands that you had after installation of private dependencies) to optimize speed and separation of concern.

@Sodki
Sodki commented Oct 16, 2016

Check out Rocker, it's a clean solution.

@binarytemple-bet365

@Sodki I took your advice. Yes, rocker is a clean and well thought out solution. More's the shame the docker team wouldn't just take that project under their wing and deprecate docker build. Thank you.

@solars
solars commented Dec 5, 2016

Still no better way? :(

@kienpham2000

Has anyone tried this new squash thing? #22641 Might be the docker-native solution we are looking for. Going to try it now and report back to see how it goes.

@yordis
yordis commented Dec 14, 2016 edited

After 2+ years this is not fixed yet 😞 Please, Docker team, do something about it

@kienpham2000
kienpham2000 commented Dec 15, 2016 edited

Looks like the new --squash option in 1.13 works for me:
http://g.recordit.co/oSuMulfelK.gif

I build it with: docker build -t report-server --squash --build-arg SSH_KEY="$(cat ~/.ssh/github_private_key)" .

So when I do docker history or docker inspect, the key doesn't show.

My Dockerfile looks like this:

FROM node:6.9.2-alpine

ARG SSH_KEY

RUN apk add --update git openssh-client && rm -rf /tmp/* /var/cache/apk/* &&\
  mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
  ssh-keyscan github.com > /root/.ssh/known_hosts

RUN echo "$SSH_KEY" > /root/.ssh/id_rsa &&\
  chmod 0600 /root/.ssh/id_rsa

COPY package.json .

RUN npm install
RUN rm -f /root/.ssh/id_rsa

# Bundle app source
COPY . .

EXPOSE 3000

CMD ["npm","start"]
@ryanschwartz

@kienpham2000, your screenshot looks like it still contains the keys - could you please check the output of docker history with the --no-trunc flag and report back here on whether or not the private keys are displayed in docker history?

@kienpham2000

@ryanschwartz you are right, the --no-trunc shows the whole damn thing, this doesn't fly.

@whitecolor

@kienpham2000
Another thing they introduced in 1.13 release is:

Build secrets
• enables build-time secrets using the --build-secret flag
• creates a tmpfs during build, and exposes secrets to the
build containers, to be used during build.
• #28079

Maybe this could work?

@justincormack
Member
@omarabid

So, a year later: no, this is a bad idea. You SHOULD NOT do that. There are various other solutions. For example, Github can provide access tokens. You can use them in configuration files/environment variables with less risk, as you can specify which actions are allowed for each token.

@yordis
yordis commented Dec 15, 2016

The solution is to implement SSH agent forwarding, like Vagrant does, for example.

Can somebody explain to me why it is so complicated to implement?

@jbiel
Contributor
jbiel commented Dec 15, 2016

@omarabid - are you replying to your original proposal of using environment variables to pass private keys to be used within the Dockerfile? There is no question, that is a bad security practice.

As to your suggestion to use access tokens, they would end up stored in a layer and can be just as dangerous to leave lying around as an SSH key. Even if a token only has read-only access, most people wouldn't want others to have read-only access to their repos. Also, frequent revocation/rotation/distribution would need to occur; this is a little easier to handle per developer etc. than with "master" access tokens.

The build secrets solution mentioned a few comments back looks like it's a step in the right direction, but the ability to use an SSH agent is best. Maybe one could use an SSH agent in combination with build secrets, I'm not sure.

It's natural for developers/CI systems to use an SSH agent during git/build operations. This is much more secure than having a plaintext, password-less private key that must be revoked/replaced en masse across a variety of systems. Also, with SSH agents there's no possibility of the private key data getting committed to an image. At worst an environment variable/SSH_AUTH_SOCK remnant will be left behind in the image.

@edmundluong edmundluong referenced this issue in laradock/laradock Dec 15, 2016
Open

Idea: Share SSH Config with Docker #359

@kienpham2000

I got this latest workaround working without showing the secret key content or using an extra 3rd-party docker tool (hopefully the build-time secrets PR will get merged in soon).

I'm using the aws cli to download the shared private key from S3 into the host's current repo. This key is encrypted at rest using KMS. Once the key is downloaded, the Dockerfile will just COPY that key during the build process and remove it afterward; the content doesn't show in docker inspect or docker history --no-trunc.

Download the github private key from S3 first to the host machine:

# build.sh
s3_key="s3://my-company/shared-github-private-key"
aws configure set s3.signature_version s3v4
aws s3 cp $s3_key id_rsa --region us-west-2 && chmod 0600 id_rsa

docker build -t app_name .

Dockerfile looks like this:

FROM node:6.9.2-alpine

ENV id_rsa /root/.ssh/id_rsa
ENV app_dir /usr/src/app

RUN mkdir -p $app_dir
RUN apk add --update git openssh-client && rm -rf /tmp/* /var/cache/apk/* && mkdir -p /root/.ssh && ssh-keyscan github.com > /root/.ssh/known_hosts

WORKDIR $app_dir

COPY package.json .
COPY id_rsa $id_rsa
RUN npm install && npm install -g gulp && rm -rf $id_rsa

COPY . $app_dir
RUN rm -rf $app_dir/id_rsa

CMD ["start"]

ENTRYPOINT ["npm"]
@diegocsandrim
diegocsandrim commented Jan 5, 2017 edited

@kienpham2000, why wouldn't this solution keep the key in an image layer? The copying and removing of the key are done in separate commands, so there is a layer that must still have the key.
Our team was using your solution until yesterday, but we found an improved solution:

  • We generate a pre-signed URL to access the key with the aws s3 cli, and limit the access to about 5 minutes; we save this pre-signed URL into a file in the repo directory, then in the dockerfile we add it to the image.
  • In the dockerfile we have a RUN command that does all these steps: use the pre-signed URL to get the ssh key, run npm install, and remove the ssh key.
    By doing this in one single command, the ssh key will not be stored in any layer, but the pre-signed URL will be, and this is not a problem because the URL will not be valid after 5 minutes.

The build script looks like:

# build.sh
aws s3 presign s3://my_bucket/my_key --expires-in 300 > ./pre_sign_url
docker build -t my-service .

Dockerfile looks like this:

FROM node

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    ssh -o StrictHostKeyChecking=no git@github.com || true && \
    npm install --production && \
    rm ./my_key && \
    rm -rf ~/.ssh/*

ENTRYPOINT ["npm", "run"]

CMD ["start"]
@kienpham2000

@diegocsandrim thank you for pointing that out. I really like your solution, going to update our stuff here. Thanks for sharing!

@ovolynets ovolynets referenced this issue in zalando-stups/taupage Feb 2, 2017
Open

Update docker to 1.12 #366
