Secrets: write-up best practices, do's and don'ts, roadmap #13490

Open
thaJeztah opened this Issue May 26, 2015 · 197 comments

@thaJeztah
Member

thaJeztah commented May 26, 2015

Handling secrets (passwords, keys and related) in Docker is a recurring topic. Many pull-requests have been 'hijacked' by people wanting to (mis)use a specific feature for handling secrets.

So far, we have only discouraged people from using those features, because they're either provably insecure, or not designed for handling secrets and hence "possibly" insecure. We don't offer them real alternatives, at least not for all situations, and where we do, not with a practical example.

I think "secrets" is something that has been left lingering for too long. This results in users (mis)using features that were not designed for this (with the side effect that discussions get polluted with feature requests in this area), and forces them to jump through hoops just to be able to work with secrets.

Features / hacks that are (mis)used for secrets

This list is probably incomplete, but worth a mention

  • Environment Variables. Probably the most used, because they're part of the "12 factor app". Environment variables are discouraged, because they are:
    • Accessible by any process in the container, thus easily "leaked"
    • Preserved in intermediate layers of an image, and visible in docker inspect
    • Shared with any container linked to the container
  • Build-time environment variables (#9176, #15182). The build-time environment variables were not designed to handle secrets. By lack of other options, people are planning to use them for this. To prevent giving the impression that they are suitable for secrets, it's been decided to deliberately not encrypt those variables in the process.
  • Squash / Flatten layers. (#332, #12198, #4232, #9591). Squashing layers will remove the intermediate layers from the final image; however, secrets used in those intermediate layers will still end up in the build cache.
  • Volumes. IIRC some people were able to use the fact that volumes are re-created for each build-step, allowing them to store secrets. I'm not sure this actually works, and can't find the reference to how that's done.
  • Manually building containers. Skip using a Dockerfile and manually build a container, committing the results to an image.
  • Custom Hacks. For example, hosting secrets on a server, curl-ing the secrets and removing them afterwards, all in a single layer. (also see https://github.com/dockito/vault)
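To make the environment-variable leak from the first bullet concrete, here is a minimal demonstration (requires a running Docker daemon; image name and variable are hypothetical):

```shell
# Anti-pattern: baking a secret into an image via ENV.
cat > Dockerfile <<'EOF'
FROM busybox
ENV DB_PASSWORD=hunter2
EOF
docker build -t env-secret-demo .

# Anyone who can pull the image can read the "secret":
docker inspect --format '{{.Config.Env}}' env-secret-demo
docker history --no-trunc env-secret-demo   # the ENV instruction shows up here too

# ...and so can every process inside the container:
docker run --rm env-secret-demo env
```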

So, what's needed?

  • Add documentation on "do's" and "don'ts" when dealing with secrets; @diogomonica made some excellent points in #9176 (comment)
  • Describe the officially "endorsed" / approved way to handle secrets, if possible, using the current features
  • Provide a roadmap / design for officially handling secrets. We may want to make this pluggable, so that we don't have to re-invent the wheel and can use existing offerings in this area, for example, Vault, Keywhiz, Sneaker

The above should be written / designed with both build-time and run-time secrets in mind

@calavera created a quick-and-dirty proof-of-concept on how the new Volume-Drivers (#13161) could be used for this; https://github.com/calavera/docker-volume-keywhiz-fs

Note: Environment variables are used as the de-facto standard to pass configuration/settings, including secrets to containers. This includes official images on Docker Hub (e.g. MySQL, WordPress, PostgreSQL). These images should adopt the new 'best practices' when written/implemented.

In good tradition, here are some older proposals for handling secrets;

  • "Add private files support" #5836
  • "Add secret store" #6075
  • "Continuation of the docker secret storage feature" #6697
  • "Proposal: The Docker Vault" #10310
@thaJeztah

Member

thaJeztah commented May 26, 2015

ping @ewindisch @diogomonica @NathanMcCauley This is just a quick write-up. Feel free to modify/update the description if you think that's necessary :)

@dreamcat4

This is useful info:

hashicorp/vault#165

As is this:

hashicorp/vault#164

@thaJeztah

Member

thaJeztah commented May 26, 2015

@dreamcat4 there are some plans to implement a generic "secrets API", which would allow you to use either Vault, or Keywhiz, or you-name-it with Docker, but all in the same way. It's just an early thought, so it will require additional research.

@dreamcat4

dreamcat4 commented May 27, 2015

@thaJeztah Yep, sorry, I don't want to detract from those efforts / discussion in any way. I'm more thinking it may be a useful exercise (as part of that longer process, and while we are waiting) to see how far we can get right now. Then the limits and deficiencies in the current process show up more clearly to others: what underlying pieces are missing and most needed to improve secrets handling.

Also it's worth considering the different situations of run-time secrets vs build-time secrets, for which there is also an area of overlap.

And perhaps (for docker) it may also be worth considering the limitations (pros/cons) of solutions that handle secrets "in-memory", as opposed to more heavily file-based methods or network-based ones, e.g. a local secrets server; these are the current hacks on the table (until a proper secrets API exists). This can help us understand some of the unique value (for example, stronger security) added by a docker secrets API that could not otherwise be achieved with hacks on top of the current docker feature set. However, I am not a security expert, so I cannot really comment on those things with great certainty.

@thaJeztah

Member

thaJeztah commented May 27, 2015

@dreamcat4 yes, you're right; for the short term, those links are indeed useful.

Also it's worth considering the different situations of run-time secrets vs build-time secrets, for which there is also an area of overlap.

Thanks! I think I had that in my original description; it must have gotten lost in the process. I will add a bullet.

However I am not a security expert.

Neither am I, that's why I "pinged" the security maintainers; IMO, this should be something written by them 😇

@diogomonica

Contributor

diogomonica commented May 27, 2015

@thaJeztah great summary. I'll try to poke at this whenever I find some time.

@thaJeztah

Member

thaJeztah commented Jun 7, 2015

@diogomonica although not directly related, there's a long-open feature request for forwarding the SSH key agent during build; #6396. Given the number of comments, it would be good to give that some thought too (if only to decide whether or not it can/should be implemented).

@ebuchman

ebuchman commented Jun 13, 2015

Assuming you could mount volumes as a user other than root (I know it's impossible, but humour me), would that be a favourable approach to getting secrets into containers?

If so, I'd advocate for an alternative to -v host_dir:image_dir that expects the use of a data-only container and might look like -vc host_dir:image_dir (i.e. volume-copy), wherein the contents of host_dir are copied into the image_dir volume on the data-only container.

We could then emphasize a secure-data-only containers paradigm and allow those volumes to be encrypted.

@kepkin

kepkin commented Nov 13, 2015

I've recently read a good article from @jrslv where he proposes building a special docker image with secrets just to build your app, and then building another image for distribution using the results from running the build image.

So you have two Dockerfiles:

  • Dockerfile.build (here you simply copy all your secrets)
  • Dockerfile.dist (this one you will push to the registry)

Now we can build our distribution like this:

#!/bin/sh
docker build -t hello-world-build -f Dockerfile.build .
docker run hello-world-build > build.tar.gz
docker build -t hello-world -f Dockerfile.dist .

Your secrets are safe, as you never push the hello-world-build image.

I recommend reading @jrslv's article for more details: http://resources.codeship.com/ebooks/continuous-integration-continuous-delivery-with-docker
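The two-Dockerfile flow described above can be sketched roughly as follows (contents are illustrative, not from the article; the key point is that Dockerfile.build is never pushed, and whether this is safe depends entirely on the tarball containing only artifacts, not the secrets themselves):

```dockerfile
# --- Dockerfile.build: has the secrets, never pushed ---
FROM golang:1.5
COPY id_rsa /root/.ssh/id_rsa       # build-time secret, e.g. to clone private deps
COPY . /src
RUN cd /src && make hello-world
# running this image streams the build artifacts to stdout
CMD tar cz -C /src hello-world

# --- Dockerfile.dist: only artifacts, safe to push ---
FROM busybox
ADD build.tar.gz /app/              # ADD unpacks the tarball into /app
CMD ["/app/hello-world"]
```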

@lamroger

lamroger commented Nov 13, 2015

Thanks for sharing @kepkin !
Just finished reading the article. Really concise!

I like the idea of exporting the files and loading them in through a separate Dockerfile. It feels like squashing without the "intermediate layers being in the build cache" issue.

However, I'm nervous that it'll complicate development and might require a third Dockerfile for simplicity.

@TomasTomecek

Contributor

TomasTomecek commented Nov 24, 2015

@kepkin no offense but that doesn't make any sense. Secrets are definitely not safe, since they are in the tarball and the tarball is being ADDed to the production image -- even if you remove the tarball, without squashing, it will leak in some layer.

@thaJeztah

Member

thaJeztah commented Nov 24, 2015

@TomasTomecek if I understand the example correctly, the tarball is not the image-layers, but just the binary that was built inside the build container. See for example; https://github.com/docker-library/hello-world/blob/master/update.sh (no secrets involved here, but just a simple example of a build container)

@kepkin

kepkin commented Nov 25, 2015

@TomasTomecek I'm talking about secrets for building the Docker image. For instance, you need to pass an ssh key to check out source code from your private GitHub repository. The tarball contains only build artifacts; it doesn't contain the GitHub key.

@TomasTomecek

Contributor

TomasTomecek commented Nov 25, 2015

@kepkin right, now that I've read your post again I can see it. Sorry about that. Unfortunately it doesn't solve the issue when you need secrets while building the distribution image (e.g. fetching artifacts and authenticating with the artifact service). But it's definitely a good solution for separating the build process from the release process.

@kepkin

kepkin commented Nov 25, 2015

@TomasTomecek that's exactly how I fetch artifacts, actually.

In the Dockerfile.build image I download some binary dependencies from Amazon S3, which requires the AWS key & secret. After retrieving and building, I create a tarball with everything I need.

@jacobdr

jacobdr commented Nov 27, 2015

Is there a canonical "best practices" article -- the "Do"s as opposed to the "Don'ts" -- that y'all would recommend reading?

@afeld

Contributor

afeld commented Nov 27, 2015

Worth noting (for anyone else like me that is stumbling upon this) that Docker Compose has support for an env_file option.

https://docs.docker.com/compose/compose-file/#env-file

@thaJeztah

Member

thaJeztah commented Nov 27, 2015

@afeld docker itself has this feature as well, see http://docs.docker.com/engine/reference/commandline/run/#set-environment-variables-e-env-env-file but those env-vars will still show up in the same places, so they don't make a difference w.r.t. "leaking"
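To illustrate the point (file and container names hypothetical): the env-file keeps the secret out of the Dockerfile and shell history, but not out of the Docker API:

```shell
echo 'DB_PASSWORD=hunter2' > secrets.env
docker run -d --name app --env-file secrets.env busybox sleep 1000

# The values are still visible to anyone who can talk to the daemon:
docker inspect --format '{{.Config.Env}}' app
```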

@hmalphettes

hmalphettes commented Dec 5, 2015

@kepkin this is how I pass an ssh-key to docker build:

# serve the ssh private key once over http on a private port.
which ncat
if [ "$?" = "0" ]; then
  ncat -lp 8000 < $HOME/.ssh/id_rsa &
else
  nc -lp 8000 < $HOME/.ssh/id_rsa &
fi
nc_pid=$!
docker build --no-cache -t bob/app .
kill $nc_pid || true

and inside the Dockerfile, where 172.17.0.1 is the docker gateway IP:

RUN \
  mkdir -p /root/.ssh && \
  curl -s http://172.17.0.1:8000 > /root/.ssh/id_rsa && \
  chmod 600 /root/.ssh/id_rsa && chmod 700 /root/.ssh && \
  ssh-keyscan -t rsa,dsa github.com > /root/.ssh/known_hosts && \
  git clone --depth 1 --single-branch --branch prod git@github.com:bob/app.git . && \
  npm i --production && \
  ... && \
  rm -rf /root/.npm /root/.node-gyp /root/.ssh

If someone has something simpler let us know.

@jdmarshall

jdmarshall commented Jan 8, 2016

So what's the current status of this?

All summer there were long conversational chains, indicating just how widespread this concern is. This was filed in May, and it's still open. For instance, how would I set the password for Postgres?

@blaggacao

blaggacao commented Jan 10, 2016

@thaJeztah What can be done to move this forward? I guess many eyes throughout different downstream projects are on this issue... e.g. rancher/rancher#1269

@demarant

demarant commented Jan 10, 2016

I guess what is being done here is kept secret :D

@pvanderlinden

pvanderlinden commented Dec 15, 2017

Python projects pull in their requirements at install time, not at build time. In our case, from a private pypi/conda repository (password protected).

@Vanuan

Vanuan commented Dec 15, 2017

So? Make installation a part of your build process and then copy installed packages to a fresh image.

You just need to make sure that your build image and your production image are based on the same Python base image.

@pvanderlinden

pvanderlinden commented Dec 15, 2017

You can indeed just copy everything into a new image. That removes the whole point of a Dockerfile though. Why have a Dockerfile if the only thing you can use it for is to copy a set of directories?

@OJezu

OJezu commented Dec 15, 2017

So, I can't have a simple flow in which I just run docker build . wherever - either on dev machine or CI - but I have to depend on CI to build packages. Why even bother with docker then? I can write a travis file, or configure the flow in bamboo.

@oppianmatt

oppianmatt commented Dec 15, 2017

Can't you just pip install requirements.txt in your first stage build, with secrets available to pull from your private repositories? Then the next stage build just copies the site-packages from the first stage.

Why have a Dockerfile if the only thing you can use it for is just to copy a set of directories?

Why not? Using a Dockerfile to build is consistent.
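Since Docker 17.05, this two-stage flow fits in a single multi-stage Dockerfile. A sketch (index URL, paths and Python version are hypothetical):

```dockerfile
# Stage 1: has credentials for the private index; these layers are
# discarded from the final image and should never be tagged or pushed.
FROM python:3.6 AS build
ARG PIP_INDEX_URL                   # e.g. https://user:pass@pypi.internal/simple
COPY requirements.txt .
RUN pip install -r requirements.txt

# Stage 2: the image that ships; only the installed packages are
# copied over, so the credentials never enter these layers.
FROM python:3.6
COPY --from=build /usr/local/lib/python3.6/site-packages /usr/local/lib/python3.6/site-packages
COPY . /app
CMD ["python", "/app/main.py"]
```

Note that the ARG value still appears in the history of the intermediate build stage on the machine that built it, so this reduces the exposure rather than eliminating it.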

@Vanuan

Vanuan commented Dec 15, 2017

An image specification is more than just a bunch of zipped files. There are environment variables, command line arguments, volumes, etc.

Read the Dockerfile reference:
https://docs.docker.com/engine/reference/builder/

It looks like you've been primarily focusing on the RUN instruction, thinking that a Dockerfile is a replacement for your Makefile. It is not. A Dockerfile is meant for one thing only: building an image out of some source material. What that source material is - a binary downloaded over http or a git repository - doesn't matter. Docker doesn't need to be your CI system, even though you can use it as one under certain conditions.

@Vanuan

Vanuan commented Dec 15, 2017

I can write a travis file, or configure the flow in bamboo.

If you can get the result of your build process and run it in another environment, without images and containers, then for sure you do not need to bother with docker. Why would you?

@OJezu

OJezu commented Dec 15, 2017

A separate, strictly controlled environment that gets guaranteed resets between builds, but only if the build steps have changed. The ability to run it anywhere, not only on CI servers (like with Travis). Tying build instructions to code, which I think is good if the build changes between code branches (e.g. changing the version of the run environment on only one branch). The possibility to run the build container on developer machines, allowing shipping the entire environment to developers who otherwise have no idea how to upgrade their own system, but who will be able to build the application with their changes locally, with the same environment as everyone else.

If I didn't want all of that, I would stick to lxc + ansible; no need for docker then.

@Vanuan

Vanuan commented Dec 15, 2017

You don't need docker build for that.

@NikolausDemmel

NikolausDemmel commented Dec 15, 2017

You don't need docker build for that.

Of course you can also provide a custom Makefile or a build_image.sh script for every project instead of a single self-sufficient Dockerfile, but that has multiple disadvantages:

  • Cross-platform compatibility: By providing a Dockerfile, I know that any system that can run docker build will be able to build the image. By providing a custom Makefile or build_image.sh, I have to manually ensure that those work on all platforms that I want to support.
  • Known interface for users: If you know docker, you know some of the behavior of docker build for any project, even without looking at the Dockerfile (e.g. with respect to caching, etc.). If I have a custom Makefile or build_image.sh for each project, I first need to find out what the commands are to build and to clean, where and in what form the result is, and whether there is some caching and in what form.
@Vanuan

This comment has been minimized.

Show comment
Hide comment
@Vanuan

Vanuan Dec 15, 2017

Oh, Dockerfile is far from self-sufficient. Especially for development environment.
Consider this:

  • most developers don't know all the different options of docker build, but almost everybody knows how to run bash scripts
  • docker build depends on the context directory. So unless you're willing to wait for gigabytes of data (your source code with dependencies) to travel from one location to another for every single source line change, you won't use it for development.
  • unless you build EVERYTHING from scratch, you have a dependency on the docker registry
  • it's likely that you will depend on OS repositories (whether you use Debian or Alpine-based images), unless you boot up a container straight to the statically-built binary
  • unless you commit everything to git, you will have some project-level dependencies, be it npm, python package index, rubygems or anything else. So you'll depend on some external package registry or its mirror
  • as most people noticed here you'll depend on some secret package location for your private dependencies which you can't publish to public repository, so you'll depend on that
  • secrets provisioning is required to access that secure location, so you'll depend on some system that will distribute secrets to developers
  • in addition to the Dockerfile, you'll need docker-compose.yml, and it's not cross-platform: you still depend on forward-/backslash differences.

Cross platform compatibility: With providing a Dockerfile, I know that any system that can run docker build will be able to build the image.

Dockerfile doesn't ensure cross-platform compatibility. You still have to provide multiple Dockerfiles for multiple platforms. "Can run docker build" doesn't mean "Uses Linux" anymore. Docker also supports Windows native images. You still have to use Cygwin + Linux VM if you want to run something specifically targeted for Linux machines on a Windows host.

Oh, and I didn't even mention x86 vs ARM...

Known interface for users: If you know docker, you know some of the behavior of docker build for any project, even without looking at the Dockerfile

Unless you don't. Everybody knows how to run a bash script without parameters or a single make command. Few people know how to correctly specify all the different command line options for docker build, docker run or docker-compose. It's inevitable that you'll have some wrapper bash or cmd script.


With all due respect to what the Docker folks did, I think you're asking too much. I'm afraid the Mobyproject doesn't have such a broad scope as to support all the development workflows imaginable.


NikolausDemmel commented Dec 15, 2017

I'm not going to refute all your points individually. Firstly, you can of course always find situations where the "single Dockerfile" approach does not work at all. However, I would argue, that for almost all of your points that you raised (which all are valid and relevant), the "custom script or makefile" approach is either just as bad or worse. Just as an example one point:

most developers don't know all the different options of docker build, but almost everybody knows how to run bash scripts

If I am involved in 10 projects, and they all use a Dockerfile, I need to learn about docker only once, but with your suggestion I need to learn 10 totally different build scripts. How do I wipe the cache and start from scratch for project Foo's build_image.sh again? It's not clear. If building the image is done with docker build, it is clear (ofc I need to know how docker works, but I also need to do that for using the image that comes out of build_image.sh).

Overall, I guess the point that others and I are trying to make is that for /many/ scenarios the "single Dockerfile" approach works really nicely for folks (which is one reason docker is so popular), in particular in the open source world where usually all resources are accessible without secrets. But if you try to apply the same pattern you have come to love in a context where part of your resources need credentials to access, the approach breaks down. There have been a number of suggestions (and implementations) of technologically not-too-complex ways to make it work, but nothing has become of them over a long time (this has been laid out many times above). Hence the frustration.

I appreciate that people are putting effort into this, for example with the linked proposal in #33343. My post is about motivating what some people want and why they keep coming back asking for it here.


NikolausDemmel commented Dec 15, 2017

With all due respect to what the Docker folks did, I think you're asking too much. I'm afraid the Mobyproject doesn't have such a broad scope as to support all the development workflows imaginable.

It seems to me that what most people are asking for here is nothing of the sort, but only for a simple way to use secrets in docker build in a way that is not less secure than using them in your custom build_image.sh. One way that would satisfy this need seems to be build time mounts. They have downsides, there are probably better ways, but what is being asked is not about covering every possible corner case.


Vanuan commented Dec 15, 2017

I'm sorry, but each person in this ticket has a slightly different use case. Those are corner cases and require different solutions.

  1. I want to run production images on development machines. Use docker registry
  2. I want a distributed CI system, so that each developer has a reproducible build. Use docker run to build your project, use docker prune to clean up
  3. I want to build docker images so that I can distribute them. Use a dedicated CI server where you can run multistage builds.

OJezu commented Dec 18, 2017

@Vanuan, so I guess your approach is basically: don't use docker build for anything more than a basic environment. This is an issue created to change that. "You have to do it differently" IS the problem, not the solution.

People who push the issue want to have simpler and more straightforward approaches with docker images, not having to hack around docker limitations.


mumoshu commented Mar 13, 2018

For anyone interested: I tried to exploit "masked-by-default" build-args like FTP_PROXY to pass secrets to build contexts. It is safe in the sense that docker build doesn't expose those masked args to image metadata or image layers.

#36443 was an attempt to expand it to a build-arg named like SECRET so that we can encourage users to use it as a simple work-around to the secret management problem.

However, the work has been rejected reasonably, as the masked nature of those build-args aren't guaranteed in the future.

My best bet after that is to follow @AkihiroSuda's advice: use docker build --network or a tool like habitus to store/pass secrets via a temporary TCP server visible only to build contexts living within a single docker daemon, at broadest.
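The temporary-server pattern can be sketched without docker at all. Below is a minimal illustration (the port, paths and the "hunter2" value are made up for this sketch; in a real build, a RUN step would do the fetch from an address reachable by the build container):

```shell
#!/bin/sh
# Sketch of the temporary-server pattern: the secret lives only in a
# short-lived loopback HTTP server, never in an image layer or its metadata.
mkdir -p /tmp/secret-srv
printf 'hunter2' > /tmp/secret-srv/build-secret

# Throwaway server bound to loopback only, for the duration of the build.
( cd /tmp/secret-srv && exec python3 -m http.server 8737 --bind 127.0.0.1 ) >/dev/null 2>&1 &
SRV_PID=$!
sleep 1

# In a Dockerfile RUN step you would fetch it the same way, e.g.:
#   RUN curl -s http://<builder-host>:8737/build-secret | ./consume-secret
FETCHED=$(python3 -c "import urllib.request as u; print(u.urlopen('http://127.0.0.1:8737/build-secret').read().decode())")
echo "$FETCHED"

kill "$SRV_PID"
```

Since nothing is written into the build context or passed as a build-arg, the secret never lands in image metadata; it only has to be fetched, used and discarded within a single RUN step.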


darmbrust commented Jul 9, 2018

Commenting partially, so I get notification 5 years from now, when Docker finally decides to give us a tiny step in the direction of proper credential management.... and also, to give an outline of the hack I'm using at the moment, to help others, or to get holes poked in it that I'm unaware of.

In following @mumoshu issue, I finally got the hint of using the predefined-args for build secrets.

So, essentially, I can use docker-compose, with a mapping like this:

  myProject:
    build:
      context: ../myProject/
      args: 
        - HTTPS_PROXY=${NEXUS_USERNAME}
        - NO_PROXY=${NEXUS_PASSWORD}

And then, in folder with the docker-compose.yml file, create a file named ".env" with key-value pairs of NEXUS_USERNAME and NEXUS_PASSWORD - and the proper values there.

Finally, in the Dockerfile itself, we specify our run command like so:
RUN wget --user $HTTPS_PROXY --password $NO_PROXY

And do NOT declare those as ARGs in the DockerFile.

I haven't found my credentials floating in the resulting build anywhere yet... but I don't know if I'm looking everywhere..... And for the rest of the developers on my project, they just each have to create the .env file with the proper values for them.


sameer-kumar commented Jul 25, 2018

@darmbrust I tried your solution but couldn't make it to work.
Here is my compose yml:
version: "3.3"
services:

  buildToolsImage:
    image: vsbuildtools2017:web-v6
    
    build:
      context: .
      dockerfile: ./vsbuild-web-v6-optimized.dockerfile
      args:
        - CONTAINER_USER_PWD=${CONTAINER_USER_CREDS}

Here is .env file sitting next to yml file:

CONTAINER_USER_CREDS=secretpassword

And, here is my dockerfile:

# escape=`
FROM microsoft/dotnet-framework:4.7.2-sdk
# Add non-root user
CMD ["sh", "-c", "echo ${CONTAINER_USER_PWD}"] 
RUN net user userone ${CONTAINER_USER_PWD} /add /Y

And finally the command to kick this off is like this:

docker-compose -f docker-compose.buildImage.yml build

It builds the image but without using the password stored in .env file.

[Warning] One or more build-args [CONTAINER_USER_PWD] were not consumed

What am I missing here?
Thanks!


darmbrust commented Jul 25, 2018

You have to use one of the https://docs.docker.com/engine/reference/builder/#predefined-args in the docker file. You can't use your own argument names like CONTAINER_USER_PWD.

That's how the trick works, cause docker has special behavior for the predefined-args, in that you can use them without declaring them. And by using them without declaring them, they don't appear to be logged anywhere.

With the docker-compose file, you can map those predefined args to something more reasonably named.
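Applied to the example above, the mapping would look roughly like this (a sketch; the file and service names come from the earlier comment, and a sibling .env file defines CONTAINER_USER_CREDS). The Dockerfile then references $HTTPS_PROXY in its RUN commands without any ARG declaration:

```yaml
# docker-compose.yml fragment
services:
  buildToolsImage:
    build:
      context: .
      dockerfile: ./vsbuild-web-v6-optimized.dockerfile
      args:
        # map the real secret onto a predefined build-arg name
        - HTTPS_PROXY=${CONTAINER_USER_CREDS}
```

The trade-off of this hack is readability: anyone reading the Dockerfile sees a proxy variable being used as a password.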


sameer-kumar commented Jul 26, 2018

@darmbrust Yes, that did the trick.
However, don't you think it's smelly? Any better recommendations?
Thanks!



darmbrust commented Jul 26, 2018

This entire bug is smelly. I haven't found a better way... there are several other approaches above, but I think all of the other secure ones require standing up a little http server to feed the information into the image. maybe less smelly, but more complexity, more tools, more moving parts.

Not sure that anyone has found a "good" solution... we are all stuck waiting on the docker people to do something about it... don't hold your breath, since this bug was written in 2015, and they haven't even proposed a roadmap yet, much less a solution.


cpuguy83 (Contributor) commented Jul 26, 2018

@binarytemple Everyone who has ever worked on Docker/moby (as in the engineers behind it) knows exactly what the problem is and has even run up against it.

Volumes is a solution that is itself incredibly leaky.
There is a proposal, mentioned up the comment stream a bit, that attempts to solve this in a reasonable manner (#33343)

The main thing here is providing the "right" abstraction rather than "any abstraction that happens to work"... we of course know this is painful for many in more than just this case.

Lots of work has been done on the builder lately which isn't necessarily visible yet, but the fruits of this effort will begin to show up in coming months.
To begin with, Docker 18.06 ships with an alternative builder implementation backed by https://github.com/moby/buildkit.
You may think "how does this help me?". Buildkit provides a lot of low-level primitives that enables us to be much more flexible in the Docker builder. Even so much as to be able to provide your own build parser (which can be anything from an enhanced Dockerfile parser to something completely different). Parsers are specified at the top of the "Dockerfile" and are just any image you want to use to parse the file.

If you really want to see something right now, you can take buildkit itself and run with it today, it sits on top of containerd, you can build a custom integration pretty quickly.


tonistiigi (Member) commented Jul 26, 2018

Secret mounts support was added to buildkit in moby/buildkit#522 . They appear strictly on tmpfs, are excluded from build cache and can use a configurable data source. No PR yet that exposes it in a dockerfile syntax but should be a simple addition.
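For the curious, a sketch of what the Dockerfile-facing syntax for that feature looks like in BuildKit (the secret id and path here are illustrative, and the syntax requires building with BuildKit enabled):

```dockerfile
# syntax=docker/dockerfile:experimental
FROM alpine
# The secret is mounted on tmpfs for this one RUN step only;
# it never enters an image layer or the build cache.
RUN --mount=type=secret,id=mysecret \
    cat /run/secrets/mysecret > /dev/null
```

The secret is supplied at build time, e.g. with docker build --secret id=mysecret,src=./mysecret.txt .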


BenoitNorrin commented Jul 27, 2018

There are 2 solutions to build images with secrets.

Multi-stage build :

FROM ubuntu as intermediate
ARG USERNAME
ARG PASSWORD
RUN git clone https://${USERNAME}:${PASSWORD}@github.com/username/repository.git

FROM ubuntu
# copy the repository from the previous image
COPY --from=intermediate /your-repo /srv/your-repo

Then : docker build --build-arg USERNAME=username --build-arg PASSWORD=password my-image .

Using an image builder: docker-build-with-secrets


binarytemple commented Jul 27, 2018

@BenoitNorrin sorry, but you've exposed that password to every process on the host system. Unix security 101 - don't put secrets as command arguments.
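The point is easy to demonstrate on any Linux box (the "hunter2" value below is made up): every argument of a running process is world-readable through /proc, so a secret passed on the command line is visible to any other user or process on the host.

```shell
#!/bin/sh
# A process's full command line is world-readable via /proc on Linux,
# which is why secrets must never be passed as command arguments.
# The '; :' keeps the shell from exec-replacing itself with sleep.
sh -c 'sleep 5; :' dummy 'hunter2-not-a-secret' &
PID=$!
sleep 1

# Any unrelated, unprivileged process can read the "secret" straight out:
CMDLINE=$(tr '\0' ' ' < /proc/$PID/cmdline)
echo "$CMDLINE"

kill "$PID"
```

The same applies to docker build --build-arg PASSWORD=...: the value sits in the argv of the docker client process for as long as it runs, in addition to ending up in shell history.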


BenoitNorrin commented Jul 27, 2018

Yes, but there are some usages where security matters a little less:

  • you want to build on your own computer
  • you build on your enterprise CI server (like Jenkins). Most of the time it's about having access to a private repository (nexus, git, npm, etc), so your CI may have its own credentials for that.
  • you can use a VM created from docker-machine and remove it afterwards.

Yajo commented Jul 27, 2018

If that's the only problem, @binarytemple, then simply adding a flag like docker image build --args-file ./my-secret-file should be a pretty easy fix for this whole problem, shouldn't it? 🤔


binarytemple commented Jul 27, 2018

@Yajo could be, yes it's at least a workaround until buildkit ships with secrets mount. Good suggestion. Thanks. B


pvanderlinden commented Jul 27, 2018

Unfortunately most of the workarounds mentioned in these and the many other tickets still expose the secrets to the resulting image, or only works with specific languages where you only need dependencies during compile time and not during installation.

@binarytemple that will never happen; the docker maintainers have already killed at least one fully documented and fully implemented PR for a safe secrets feature. Given the rest of the history (this 3-year-old ticket isn't the oldest, and definitely not the only ticket/PR on this topic), I think it's safe to say the docker maintainers don't understand the need for security, which is a big problem.


caub commented Aug 14, 2018

The biggest pain point is secret rotations for me

you have to maintain a graph of secret-to-service dependencies, and update each service twice (to get back to the original secret name)

listing secrets from services doesn't seem to be easy (I gave up after some attempts around docker service inspect --format='{{.Spec.TaskTemplate.ContainerSpec.Secrets}}' <some_service>), listing services dependencies from docker secret inspect <secret_name> would be useful too. So I just maintain that (approximate) graph manually for now.

You also have to specify the secret destination, when it's not the default /run/secrets/<secret_name> in the docker service update command

I just hope for a simpler way to rotate secrets
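For reference, one rotation flow in swarm mode looks roughly like this (a sketch; the service and secret names are illustrative). Because secrets are immutable, rotating means creating a new one, swapping, and removing the old; keeping the target path fixed means the application never notices the rename:

```shell
# Create the new version of the secret from stdin.
printf '%s' "$NEW_DB_PASSWORD" | docker secret create db_password_v2 -

# Swap it in atomically; the container still sees /run/secrets/db_password.
docker service update \
  --secret-rm  db_password_v1 \
  --secret-add source=db_password_v2,target=/run/secrets/db_password \
  my_service

# Drop the old version once no service references it.
docker secret rm db_password_v1
```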


BretFisher commented Aug 14, 2018

@caub here's some CLI help:

The Docker docs on formatting will help you come up with the rest of your inspect format:

docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{println .SecretName}}{{end}}'

That'll list all secret names in a service. If you wanted both name and ID, you could:

docker service inspect --format='{{range .Spec.TaskTemplate.ContainerSpec.Secrets}}{{println .SecretName .SecretID}}{{end}}' nginx

I always have my CI/CD (service update commands) or stack files hardcode the path so you don't have that issue on rotation.

With labels you can have CI/CD automation identify the right secret if you're not using stack files (without needing the secret name, which would be different each time).

