
Forward ssh key agent into container #6396

Open
phemmer opened this issue Jun 13, 2014 · 188 comments

Comments

@phemmer (Contributor) commented Jun 13, 2014

It would be nice to be able to forward an ssh key agent into a container during a run or build.
Frequently we need to build source code which exists in a private repository where access is controlled by ssh key.

Adding the key file into the container is a bad idea as:

  1. You've just lost control of your ssh key
  2. Your key might need to be unlocked via passphrase
  3. Your key might not be in a file at all, and only accessible through the key agent.

You could do something like:

# docker run -t -i -v "$SSH_AUTH_SOCK:/tmp/ssh_auth_sock" -e "SSH_AUTH_SOCK=/tmp/ssh_auth_sock" fedora ssh-add -l
2048 82:58:b6:82:c8:89:da:45:ea:9a:1a:13:9c:c3:f9:52 phemmer@whistler (RSA)

But:

  1. This only works for docker run, not build.
  2. This only works if the docker daemon is running on the same host as the client.

 

The ideal solution is to have the client forward the key agent socket just like ssh can.
However the difficulty in this is that it would require the remote API build and attach calls to support proxying an arbitrary number of socket streams. Just doing a single 2-way stream wouldn't be sufficient as the ssh key agent is a unix domain socket, and it can have multiple simultaneous connections.
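To make the multiplexing problem concrete: the agent protocol allows many simultaneous clients, so any forwarder has to accept and relay each connection separately rather than splice one byte stream end to end. A rough sketch of that shape using socat (an illustration only, not a proposed implementation; port 56789 is arbitrary):

# Expose the local agent socket as a TCP listener.
# "fork" services each incoming connection with its own
# connection back to the agent -- the same per-connection
# handling the remote API would need to do in-band.
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:"$SSH_AUTH_SOCK"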

@SvenDowideit (Contributor) commented Jun 17, 2014

I wonder if #6075 will give you what you need

@phemmer (Contributor, Author) commented Jun 17, 2014

A secret container might make it a little bit safer, but all the points mentioned still stand.

@slmingol commented Jul 31, 2014

+1 I would find this capability useful as well. In particular when building containers that require software from private git repos, for example. I'd rather not have to share a repo key into the container, and instead would like to be able to have the "docker build ..." use some other method for gaining access to the unlocked SSH keys, perhaps through a running ssh-agent.

@jbiel (Contributor) commented Aug 2, 2014

+1. I'm just starting to get my feet wet with Docker and this was the first barrier that I hit. I spent a while trying to use VOLUME to mount the auth sock before I realized that docker can't/won't mount a host volume during a build.

I don't want copies of a password-less SSH key lying around and the mechanics of copying one into a container then deleting it during the build feels wrong. I do work within EC2 and don't even feel good about copying my private keys up there (password-less or not.)

My use case is building an erlang project with rebar. Sure enough, I could clone the first repo and ADD it to the image with a Dockerfile, but that doesn't work with private dependencies that the project has. I guess I could just build the project on the host machine and ADD the result to the new Docker image, but I'd like to build it in the sandbox that is Docker.

Here are some other folks that have the same use-case: https://twitter.com/damncabbage/status/453347012184784896

Please embrace SSH_AUTH_SOCK; it is very useful.

Thanks

Edit: Now that I know more about how Docker works (FS layers), it's impossible to do what I described with regard to ADDing an SSH key during a build and deleting it later. The key will still exist in some of the FS layers.

@arunthampi commented Aug 20, 2014

+1, being able to use SSH_AUTH_SOCK will be super useful!

@razic commented Sep 3, 2014

I use SSH keys to authenticate with Github, whether it's a private repository or a public one.

This means my git clone commands looks like: git clone git@github.com:razic/my-repo.git.

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good. I cannot however mount my ~/.ssh during a docker build.

@bruce commented Sep 13, 2014

👍 for ssh forwarding during builds.

@puliaiev commented Sep 28, 2014

As I understand it, this is the wrong way. The right way is to create the Docker image on a dev machine and then copy it to the Docker server.

@slmingol commented Sep 28, 2014

@SevaUA - no, that's not correct. This request is due to a limitation when doing docker build.... You cannot export a variable into this stage like you can when doing a docker run .... The run command allows variables to be exported into the docker container while running, whereas build does not allow this. This limitation is partially intentional, based on how dockerd works when building containers. But there are ways around this, and the use case described is a valid one. So this request is attempting to get this capability implemented in build, in some fashion.

@kanzure commented Oct 17, 2014

I like the idea of #6697 (secret store/vault), and that might work for this once it's merged in. But if that doesn't work out, an alternative is to do man-in-the-middle transparent proxying ssh stuff outside of the docker daemon, intercepting docker daemon traffic (not internally). Alternatively, all git+ssh requests could be to some locally-defined host that transparently proxies to github or whatever you ultimately need to end up at.
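A rough sketch of that last idea, assuming socat is available and runs somewhere the build's network can reach (the port and repo path are placeholders):

# Transparently proxy a local port to GitHub's ssh port
socat TCP-LISTEN:2222,reuseaddr,fork TCP:github.com:22 &

# Point git at the proxy instead of at github.com directly
git clone ssh://git@localhost:2222/myorg/myrepo.git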

@phemmer (Contributor, Author) commented Oct 17, 2014

That idea has already been raised (see comment 2). It does not solve the issue.

@nodefourtytwo commented Oct 29, 2014

+1 for ssh forwarding during builds.

@goloroden commented Nov 12, 2014

+1 on SSH agent forwarding on docker build

@sabind commented Nov 19, 2014

+1 for ssh forwarding during build for the likes of npm install or similar.

@paulodeon commented Nov 23, 2014

Has anyone got ssh forwarding working during run on OSX? I've put a question up here: http://stackoverflow.com/questions/27036936/using-ssh-agent-with-docker/27044586?noredirect=1#comment42633776_27044586. It looks like it's not possible with OSX...

@thomasdavis commented Dec 5, 2014

+1 =(

@kevzettler commented Dec 26, 2014

Just hit this roadblock as well, trying to run npm install pointed at a private repo. The setup looks like: host -> vagrant -> docker. ssh-agent forwarding works from host -> vagrant, but not from vagrant into docker.

@patra04 commented Jan 2, 2015

+1
Just hit this while trying to figure out how to get ssh agent working during 'docker build'.

@igreg commented Jan 16, 2015

+1, same as the previous commenters. This seems like the best solution when you need to access one or more private git repositories (think bundle install and npm install, for instance) while building a Docker image.

@tonivdv commented Jan 29, 2015

I can volume mount my host ~/.ssh directory into my containers during a docker run and ssh is all good.

@razic Can you share how you got that working? Because when I tried that before, it complained about "Bad owner or permissions".

Unless you make sure that all containers run with a specific user or permissions which allow you to do that?

@jfarrell commented Jan 29, 2015

+1 to SSH_AUTH_SOCK

@md5 (Contributor) commented Jan 29, 2015

@tonivdv have a look at the docker run command in the initial comment on this issue. It bind mounts the path referred to by SSH_AUTH_SOCK to /tmp/ssh_auth_sock inside the container, then sets the SSH_AUTH_SOCK in the container to that path.

@KyleJamesWalker commented Jan 29, 2015

@md5 I assume @razic and @tonivdv are talking about mounting like this: -v ~/.ssh:/root/.ssh:ro, but when you do this the .ssh files aren't owned by root and therefore fail the security checks.

@tonivdv commented Jan 29, 2015

@KyleJamesWalker yup, that's what I understood from @razic, and it was one of my own attempts some time ago too. So when I read that @razic was able to make it work, I was wondering how :)

@KyleJamesWalker commented Jan 29, 2015

@tonivdv I'd also love to know if it's possible; I couldn't find anything when I last tried, though.

@yordis commented Jul 30, 2017

I am not that involved with Docker anymore, but how is it possible that this issue has existed for such a long time? I am not trying to call anyone out, but rather to understand what effort is needed to fix this, because back when I was dealing with it, it seemed like a really common issue for any company that pulls private packages, like Ruby gems from a private repo.

Does the Moby team care about this issue? Why does it have to be so hard for something that seems like not that big a deal?

It has been almost 3 years 😢

@villlem commented Jul 31, 2017

@yordis the docker builder was frozen for a year or two. The Docker team stated that the builder was good enough and that they would focus their efforts elsewhere. But that freeze is over, and there have been two changes to the builder since: squash and multi-stage builds. So build-time secrets may be on their way.

@mtibben commented Jul 31, 2017

For runtime forwarding of ssh-agent, I would recommend https://github.com/uber-common/docker-ssh-agent-forward

@thaJeztah (Member) commented Jul 31, 2017

Why does it have to be so hard for something that seems like not that big a deal?

@yordis reading the top description of this issue, implementing this is far from trivial; having said that, if someone has a technical design proposal for this, feel free to open an issue or PR for discussion. Also note that for the build part, a buildkit project was started for future enhancements to the builder; https://github.com/moby/buildkit

@yordis commented Jul 31, 2017

@thaJeztah I wish I could have the skills required but I don't.

@villlem do you know any roadmap from the Docker team?

@thaJeztah (Member) commented Jul 31, 2017

Weekly reports for the builder can be found here; https://github.com/moby/moby/tree/master/reports/builder. Build-time secrets are still listed in the latest report, but could use help.

@alowde-catapult commented Aug 4, 2017

We're using @diegocsandrim's solution but with an intermediate encryption step to avoid leaving an unencrypted SSH key in S3.

This extra step means that the key can't be recovered from the Docker image (the URL to download it expires after five minutes) and can't be recovered from AWS (as it's encrypted with a rotating password known only to the docker image).

In build.sh:

BUCKET_NAME=my_bucket
KEY_FILE=my_unencrypted_key
openssl rand -base64 -out passfile 64
openssl enc -aes-256-cbc -salt -in $KEY_FILE -kfile passfile | aws s3 cp - s3://$BUCKET_NAME/$(hostname).enc_key
aws s3 presign s3://$BUCKET_NAME/$(hostname).enc_key --expires-in 300 > ./pre_sign_url
docker build -t my_service .

And in the Dockerfile:

COPY . .

RUN eval "$(ssh-agent -s)" && \
    wget -i ./pre_sign_url -q -O - | openssl enc -aes-256-cbc -d -kfile passfile > ./my_key && \
    chmod 700 ./my_key && \
    ssh-add ./my_key && \
    mkdir /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts && \
    [commands that require SSH access to Github] && \
    rm ./my_key && \
    rm ./passfile && \
    rm -rf /root/.ssh/

@dragon788 (Contributor) commented Oct 13, 2017

If you are using docker run, you should mount your .ssh with --mount type=bind,source="${HOME}/.ssh/",target="/root/.ssh/",readonly. The readonly flag is the magic: it masks the normal permissions, and ssh basically sees 0600 permissions, which it is happy with. You can also play with -u root:$(id -u $USER) to have the root user in the container write any files it creates with the same group as your user, so hopefully you can at least read them, if not fully write them, without having to chmod/chown.
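For example (an untested sketch; the image and the final command are placeholders):

docker run -it --rm \
  --mount type=bind,source="${HOME}/.ssh/",target="/root/.ssh/",readonly \
  ubuntu:latest \
  ssh -T git@github.com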

@benton commented Oct 13, 2017

Finally.

I believe this problem can now be solved using just docker build, by using multi-stage builds.
Just COPY or ADD the SSH key or other secret wherever you need it, and use it in RUN statements however you like.

Then, use a second FROM statement to start a new filesystem, and COPY --from=builder to import some subset of directories that don't include the secret.

(I have not actually tried this yet, but if the feature works as described...)

@Sodki commented Oct 13, 2017

@benton multi-stage builds work as described; we use them. They're by far the best option for many different problems, including this one.

@benton commented Oct 13, 2017

I have verified the following technique:

  1. Pass the location of a private key as a Build Argument, such as GITHUB_SSH_KEY, to the first stage of a multi-stage build
  2. Use ADD or COPY to write the key to wherever it's needed for authentication. Note that if the key location is a local filesystem path (and not a URL), it must not be in the .dockerignore file, or the COPY directive will not work. This has implications for the final image, as you'll see in step 4...
  3. Use the key as needed. In the example below, the key is used to authenticate to GitHub. This also works for Ruby's bundler and private Gem repositories. Depending on how much of the codebase you need to include at this point, you may end up adding the key again as a side-effect of using COPY . or ADD ..
  4. REMOVE THE KEY IF NECESSARY. If the key location is a local filesystem path (and not a URL), then it is likely that it was added alongside the codebase when you did ADD . or COPY . This is probably precisely the directory that's going to be copied into the final runtime image, so you probably also want to include a RUN rm -vf ${GITHUB_SSH_KEY} statement once you're done using the key.
  5. Once your app is completely built into its WORKDIR, start the second build stage with a new FROM statement, indicating your desired runtime image. Install any necessary runtime dependencies, and then COPY --from=builder against the WORKDIR from the first stage.

Here's an example Dockerfile that demonstrates the above technique. Providing a GITHUB_SSH_KEY Build Argument will test GitHub authentication when building, but the key data will not be included in the final runtime image. The GITHUB_SSH_KEY can be a filesystem path (within the Docker build dir) or a URL that serves the key data, but the key itself must not be encrypted in this example.

########################################################################
# BUILD STAGE 1 - Start with the same image that will be used at runtime
FROM ubuntu:latest as builder

# ssh is used to test GitHub access
RUN apt-get update && apt-get -y install ssh

# The GITHUB_SSH_KEY Build Argument must be a path or URL
# If it's a path, it MUST be in the docker build dir, and NOT in .dockerignore!
ARG GITHUB_SSH_KEY=/path/to/.ssh/key

# Set up root user SSH access for GitHub
ADD ${GITHUB_SSH_KEY} /root/.ssh/id_rsa

# Add the full application codebase dir, minus the .dockerignore contents...
# WARNING! - if the GITHUB_SSH_KEY is a file and not a URL, it will be added!
COPY . /app
WORKDIR /app

# Build app dependencies that require SSH access here (bundle install, etc.)
# Test SSH access (this returns false even when successful, but prints results)
RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1 | grep -i auth

# Finally, remove the $GITHUB_SSH_KEY if it was a file, so it's not in /app!
# It can also be removed from /root/.ssh/id_rsa, but you're probably not going
# to COPY that directory into the runtime image.
RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*

########################################################################
# BUILD STAGE 2 - copy the compiled app dir into a fresh runtime image
FROM ubuntu:latest as runtime
COPY --from=builder /app /app

It might be safer to pass the key data itself in the GITHUB_SSH_KEY Build Argument, rather than the location of the key data. This would prevent accidental inclusion of the key data if it's stored in a local file and then added with COPY .. However, this would require using echo and shell redirection to write the data to the filesystem, which might not work in all base images. Use whichever technique is safest and most feasible for your set of base images.
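A sketch of that variant (untested; GITHUB_SSH_KEY_DATA is a hypothetical argument name, and as noted later in this thread, the key data would still land in the build history unless the stage is discarded by a multi-stage build):

# First build stage only; discard this stage's filesystem later
ARG GITHUB_SSH_KEY_DATA
RUN mkdir -p /root/.ssh && chmod 0700 /root/.ssh && \
    echo "${GITHUB_SSH_KEY_DATA}" > /root/.ssh/id_rsa && \
    chmod 0600 /root/.ssh/id_rsa

Built with something like:

docker build --build-arg GITHUB_SSH_KEY_DATA="$(cat ~/.ssh/id_rsa)" .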

@omarabid commented Oct 15, 2017

@jbiel Another year, and the solution I found is to use something like Vault.

@z-vr commented Oct 24, 2017

Here's a link with 2 methods (squash and intermediate container described earlier by @benton)

@kommunicate commented Nov 10, 2017

I'm just adding a note to say that neither of the current approaches will work if you have a passphrase on the ssh key you're using, since the agent will prompt you for the passphrase whenever you perform an action that requires access. I don't think there's a way around this without passing around the passphrase (which is undesirable for a number of reasons).

@kinnalru commented Nov 30, 2017

A solution: create a bash script (~/bin/docker-compose or similar):

#!/bin/bash

trap 'kill $(jobs -p)' EXIT
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &

/usr/bin/docker-compose "$@"

And in the Dockerfile, using socat:

...
ENV SSH_AUTH_SOCK /tmp/auth.sock
...
  && apk add --no-cache socat openssh \
  && /bin/sh -c "socat -v UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:172.22.1.11:56789 &> /dev/null &" \
  && bundle install \
...
Any other commands that use ssh will work as well.

Then run docker-compose build

@tnguyen14 commented Dec 1, 2017

@benton why do you use RUN rm -vf ${GITHUB_SSH_KEY} /root/.ssh/id*? Shouldn't it just be RUN rm -vf /root/.ssh/id*? Or maybe I misunderstood the intent here.

@Jokero commented Dec 1, 2017

@benton It's also not safe to do:

RUN ssh -o StrictHostKeyChecking=no -vT git@github.com 2>&1

You have to check the fingerprint.
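For example, the scanned host key can be checked against GitHub's published fingerprint before anything trusts it (the SHA256 value below is GitHub's RSA fingerprint as published at the time of writing; verify it against GitHub's documentation before copying):

RUN mkdir -p /root/.ssh && \
    ssh-keyscan -t rsa github.com > /root/.ssh/known_hosts && \
    ssh-keygen -lf /root/.ssh/known_hosts | \
    grep -q 'SHA256:uNiVztksCsDhcc0u9e8BujQXVUpKZIDTMczCvj3tD2s'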

@zeayes commented Aug 13, 2018

I solved this problem this way:

ARG USERNAME
ARG PASSWORD
RUN git config --global url."https://${USERNAME}:${PASSWORD}@github.com".insteadOf "ssh://git@github.com"

then build with

docker build --build-arg USERNAME=user --build-arg PASSWORD=pwd -t service .

But first, your private git server must support cloning repos with username:password.

@kinnalru commented Aug 13, 2018

@zeayes the RUN command is stored in the container history, so your password is visible to others.

@thaJeztah (Member) commented Aug 13, 2018

Correct; when using --build-arg / ARG, those values will show up in the build history. It is possible to use this technique if you use multi-stage builds and trust the host on which images are built (i.e., no untrusted user has access to the local build history), and intermediate build-stages are not pushed to a registry.

For example, in the following example, USERNAME and PASSWORD will only occur in the history for the first stage ("builder"), but won't be in the history for the final stage;

FROM something AS builder
ARG USERNAME
ARG PASSWORD
RUN something that uses $USERNAME and $PASSWORD

FROM something AS finalstage
COPY --from=builder /the/build-artefacts /usr/bin/something

If only the final image (produced by "finalstage") is pushed to a registry, then USERNAME and PASSWORD won't be in that image.

However, in the local build cache history, those variables will still be there (and stored on disk in plain text).

The next generation builder (using BuildKit) will have more features, also related to passing build-time secrets; it's available in Docker 18.06 as an experimental feature, but will come out of experimental in a future release, and more features will be added (I'd have to check if secrets/credentials are already possible in the current version)
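For reference, the experimental BuildKit secret syntax looks like this (Docker 18.06+, experimental; check the BuildKit docs for the current form):

# syntax=docker/dockerfile:experimental
FROM alpine
# The secret is mounted at /run/secrets/<id> for this RUN step only;
# it is never written into an image layer or the build history.
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret

Built with:

DOCKER_BUILDKIT=1 docker build --secret id=mysecret,src=./mysecret.txt .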

@thaJeztah thaJeztah closed this Aug 13, 2018

@thaJeztah thaJeztah reopened this Aug 13, 2018

@zeayes commented Aug 13, 2018

@kinnalru @thaJeztah thx, I use multi-stage builds, but the password can still be seen in the cached container's history. thx!

@thaJeztah (Member) commented Aug 13, 2018

@zeayes Oh! I see I did a copy/paste error; last stage must not use FROM builder ... Here's a full example; https://gist.github.com/thaJeztah/af1c1e3da76d7ad6ce2abab891506e50

@cowlicks commented Oct 21, 2018

This comment by @kinnalru is the right way to do this #6396 (comment)

With this method, docker never handles your private keys. And it also works today, without any new features being added.

It took me a while to figure it out, so here is a clearer, improved explanation. I changed @kinnalru's code to use --network=host and localhost, so you don't need to know your IP address. (gist here)

This is docker_with_host_ssh.sh; it wraps docker and forwards SSH_AUTH_SOCK to a port on localhost:

#!/usr/bin/env bash

# ensure the processes get killed when we're done
trap 'kill $(jobs -p)' EXIT

# create a connection from port 56789 to the unix socket SSH_AUTH_SOCK (which is used by ssh-agent)
socat TCP-LISTEN:56789,reuseaddr,fork UNIX-CLIENT:${SSH_AUTH_SOCK} &
# Run docker
# Pass it all the command line args ($@)
# set the network to "host" so docker can talk to localhost
docker "$@" --network='host'

In the Dockerfile we connect over localhost to the host's ssh-agent:

FROM python:3-stretch

COPY . /app
WORKDIR /app

RUN mkdir -p /tmp

# install socat and ssh to talk to the host ssh-agent
RUN apt-get update && apt-get install -y git socat openssh-client \
  # create a variable called SSH_AUTH_SOCK; ssh will use this automatically
  && export SSH_AUTH_SOCK=/tmp/auth.sock \
  # make SSH_AUTH_SOCK useful by connecting it to the host's ssh-agent over localhost:56789
  && /bin/sh -c "socat UNIX-LISTEN:${SSH_AUTH_SOCK},unlink-early,mode=777,fork TCP:localhost:56789 &" \
  # stuff I needed my ssh keys for
  && mkdir -p ~/.ssh \
  && ssh-keyscan gitlab.com > ~/.ssh/known_hosts \
  && pip install -r requirements.txt

Then you can build your image by invoking the script:

$ docker_with_host_ssh.sh build -f ../docker/Dockerfile .

@thaJeztah (Member) commented Oct 22, 2018

@cowlicks you may be interested in this pull request, which adds support for docker build --ssh to forward the SSH agent during build; docker/cli#1419. The Dockerfile syntax is still not in the official specs, but you can use a syntax=.. directive in your Dockerfile to use a frontend that supports it (see the example/instructions in the pull request).

That pull request will be part of the upcoming 18.09 release.
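A minimal sketch of the new flow, based on the examples in that pull request (the repo URL is a placeholder):

# syntax=docker/dockerfile:experimental
FROM alpine
RUN apk add --no-cache openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
# The forwarded agent socket is only available during this RUN step
RUN --mount=type=ssh git clone git@github.com:myorg/private-repo.git

and on the host, with an agent running:

DOCKER_BUILDKIT=1 docker build --ssh default .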

@kalenp commented Feb 15, 2019

It looks like this is now available in the 18.09 release. Since this thread comes up before the release notes and medium post, I'll cross-post here.

Release Notes:
https://docs.docker.com/develop/develop-images/build_enhancements/#using-ssh-to-access-private-data-in-builds

Medium Post:
https://medium.com/@tonistiigi/build-secrets-and-ssh-forwarding-in-docker-18-09-ae8161d066

Very exciting.
