Secrets: write-up best practices, do's and don'ts, roadmap #13490

thaJeztah opened this Issue May 26, 2015 · 139 comments



Handling secrets (passwords, keys and related) in Docker is a recurring topic. Many pull-requests have been 'hijacked' by people wanting to (mis)use a specific feature for handling secrets.

So far, we only discourage people from using those features, because they're either provably insecure, or not designed for handling secrets (hence "possibly" insecure). We don't offer them real alternatives; at least, not for all situations, and where we do, not with a practical example.

I just think "secrets" is something that has been left lingering for too long. This results in users (mis)using features that are not designed for this (with the side effect that discussions get polluted with feature requests in this area) and making them jump through hoops just to be able to work with secrets.

Features / hacks that are (mis)used for secrets

This list is probably incomplete, but worth a mention

  • Environment Variables. Probably the most used, because they're part of the "12 factor app" methodology. Environment variables are discouraged because they are:
    • Accessible by any process in the container, thus easily "leaked"
    • Preserved in intermediate layers of an image, and visible in docker inspect
    • Shared with any container linked to the container
  • Build-time environment variables (#9176, #15182). The build-time environment variables were not designed to handle secrets. For lack of other options, people are planning to use them for this. To prevent giving the impression that they are suitable for secrets, it's been deliberately decided not to encrypt those variables in the process.
  • Squash / Flatten layers. (#332, #12198, #4232, #9591). Squashing layers will remove the intermediate layers from the final image; however, secrets used in those intermediate layers will still end up in the build cache.
  • Volumes. IIRC some people were able to use the fact that volumes are re-created for each build-step, allowing them to store secrets. I'm not sure this actually works, and can't find the reference to how that's done.
  • Manually building containers. Skip using a Dockerfile and manually build a container, committing the results to an image.
  • Custom Hacks. For example, hosting secrets on a server, curl-ing the secrets and removing them afterwards, all in a single layer. (also see

So, what's needed?

  • Add documentation on "do's" and "don'ts" when dealing with secrets; @diogomonica made some excellent points in #9176 (comment)
  • Describe the officially "endorsed" / approved way to handle secrets, if possible, using the current features
  • Provide a roadmap / design for officially handling secrets; we may want to make this pluggable, so that we don't have to re-invent the wheel and can use existing offerings in this area, for example, Vault, Keywiz, Sneaker

The above should be written / designed with both build-time and run-time secrets in mind

@calavera created a quick-and-dirty proof-of-concept on how the new Volume-Drivers (#13161) could be used for this;

Note: Environment variables are used as the de-facto standard to pass configuration/settings, including secrets to containers. This includes official images on Docker Hub (e.g. MySQL, WordPress, PostgreSQL). These images should adopt the new 'best practices' when written/implemented.

In good tradition, here are some older proposals for handling secrets;

  • "Add private files support" #5836
  • "Add secret store" #6075
  • "Continuation of the docker secret storage feature" #6697
  • "Proposal: The Docker Vault" #10310

ping @ewindisch @diogomonica @NathanMcCauley This is just a quick write-up. Feel free to modify/update the description if you think that's necessary :)


This is useful info:


As is this:



@dreamcat4 there are some plans to implement a generic "secrets API", which would allow you to use either Vault, or Keywiz or you-name-it with Docker, but all in the same way. It's just an early thought, so it will require additional research.


@thaJeztah Yep, sorry, I don't want to detract from those efforts / discussion in any way. I'm more thinking maybe it's a useful exercise (as part of that longer process, and while we are waiting) to see how far we can get right now. Then the limits and deficiencies of the current process show up more clearly to others: what underlying pieces are missing and most need to be added to improve secrets handling.

Also it's worth considering the different situations of run-time secrets vs build-time secrets, for which there is also an area of overlap.

And perhaps (for docker) it may also be worth considering the limitations (pros/cons) of solutions that handle secrets "in-memory", as opposed to more heavily file-based methods or network-based ones, e.g. a local secrets server. Those are the current hacks on the table (until a proper secrets API exists). This can help us understand some of the unique value (for example, stronger security) added by a docker secrets API that could not otherwise be achieved with hacks on top of the current docker feature set. However, I am not a security expert, so I cannot really comment on those things with great certainty.


@dreamcat4 yes, you're right; for the short term, those links are indeed useful.

Also it's worth considering the different situations of run-time secrets vs build-time secrets, for which there is also an area of overlap.

Thanks! I think I had that in my original description, must have gotten lost in the process. I will add a bullet

However I am not a security expert.

Neither am I, that's why I "pinged" the security maintainers; IMO, this should be something written by them 😇


@thaJeztah great summary. I'll try to poke at this whenever I find some time.


@diogomonica although not directly related, there's a long-open feature request for forwarding the SSH key agent during build; #6396. Given the number of comments, it would be good to give that some thought too. (If only to take a decision on whether or not it can/should be implemented.)


Assuming you could mount volumes as a user other than root (I know it's impossible, but humour me), would that be a favourable approach to getting secrets into containers?

If so, I'd advocate for an alternative to -v host_dir:image_dir that expects the use of a data-only container and might look like -vc host_dir:image_dir (ie. volume-copy) wherein the contents of host_dir are copied into the image_dir volume on the data-only container.

We could then emphasize a secure-data-only-containers paradigm and allow those volumes to be encrypted.

@hhorak hhorak referenced this issue in sclorg/mysql-container Sep 9, 2015

RFE: Better way to pass secret data into container #91

kepkin commented Nov 13, 2015

I've recently read a good article about that from @jrslv where he proposes building a special docker image with secrets just to build your app, and then building another image for distribution using the results from running the build image.

So you have two Dockerfiles:

  • The build Dockerfile (here you simply copy all your secrets)
  • Dockerfile.dist (this one you will push to registry)

Now we can build our distribution like that:

#!/bin/sh
# (build Dockerfile name assumed)
docker build -t hello-world-build -f .
docker run hello-world-build > build.tar.gz
docker build -t hello-world -f Dockerfile.dist .

Your secrets are safe, as you never push hello-world-build image.

I recommend reading @jrslv's article for more details


Thanks for sharing @kepkin !
Just finished reading the article. Really concise!

I like the idea of exporting the files and loading them in through a separate Dockerfile. It feels like squashing without the "intermediate layers being in the build cache" issue.

However, I'm nervous that it'll complicate development and might require a third Dockerfile for simplicity.


@kepkin no offense but that doesn't make any sense. Secrets are definitely not safe, since they are in the tarball and the tarball is being ADDed to production image -- even if you remove the tarball, without squashing, it will leak in some layer.


@TomasTomecek if I understand the example correctly, the tarball is not the image-layers, but just the binary that was built inside the build container. See for example; (no secrets involved here, but just a simple example of a build container)

kepkin commented Nov 25, 2015

@TomasTomecek I'm talking about secrets for building a Docker image. For instance, you need to pass an ssh key to check out source code from your private GitHub repository. And the tarball contains only build artifacts, but doesn't contain the GitHub key.


@kepkin right, now I read your post again and can see it. Sorry about that. Unfortunately it doesn't solve the issue when you need secrets during deployment/building the distribution image (e.g. fetching artifacts and authenticating with artifact service). But it's definitely a good solution for separation between build process and release process.

kepkin commented Nov 25, 2015

@TomasTomecek that's exactly how I fetch artifacts actually.

In the build image I download some binary dependencies from Amazon S3, which requires an AWS key & secret. After retrieving and building, I create a tarball with everything I need.

jacobdr commented Nov 27, 2015

Is there a canonical "best practices" article -- the "Do"s as opposed to the "Don't"s -- that y'all would recommend reading?

afeld commented Nov 27, 2015

Worth noting (for anyone else like me that is stumbling upon this) that Docker Compose has support for an env_file option.


@afeld docker itself has this feature as well, but those env-vars will still show up in the same places, so it doesn't make a difference w.r.t. "leaking"
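For reference, a minimal sketch of the file-based variant (file name and values are made up); it keeps the secret out of shell history and compose files, but not out of the Docker API:

```shell
# put the secrets in a file instead of on the command line
cat > app.env <<'EOF'
DB_PASSWORD=s3cr3t
EOF

# then (hypothetical image name):
#   docker run --env-file app.env myimage
# note: the values still show up under Config.Env in `docker inspect`,
# so this protects shell history, not the Docker API.
```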


I've stumbled across this cheat sheet:


@kepkin this is how I pass an ssh-key to docker build:

# serve the ssh private key once over http on a private port.
if which ncat >/dev/null 2>&1; then
  ncat -lp 8000 < "$HOME/.ssh/id_rsa" &
else
  nc -lp 8000 < "$HOME/.ssh/id_rsa" &
fi
nc_pid=$!
docker build --no-cache -t bob/app .
kill $nc_pid || true

and inside the Dockerfile where is the docker gateway IP:

  mkdir -p /root/.ssh && \
  curl -s > /root/.ssh/id_rsa && \
  chmod 600 /root/.ssh/id_rsa && chmod 700 /root/.ssh && \
  ssh-keyscan -t rsa,dsa > ~/.ssh/known_hosts && \
  git clone --depth 1 --single-branch --branch prod git@github.bob/app.git . && \
  npm i --production && \
  ... && \
  rm -rf /root/.npm /root/.node-gyp /root/.ssh

If someone has something simpler let us know.


So what's the current status of this?

All summer there were long conversational chains, indicating just how widespread this concern is. This was filed in May, and it's still open. For instance, how would I set the password for Postgres?


@thaJeztah What can be done to move this forward? I guess many eyes throughout different downstream projects are on this issue... e.g. rancher/rancher#1269


I guess what is being done here is kept secret :D


This the biggest pain point for us for integrating Docker into our production stack. Is there a roadmap or another doc somewhere that points to any progress towards this ?


Some relevant content on this topic from k8s.


What do you think of this as a potential way of addressing run-time secrets?


I feel like this issue would be best addressed by concentrating on a few scenarios that need to be supported, and making sure there's a set of instructions for each one. How they get implemented is less important than whether at the end of the process there's a coherent set of features that can be combined to fill the need.

A few that I've seen referred to that seem to be pretty legitimate concerns include:

Run-time Credentials

  • User/password information coordinated between two containers that share a link
  • Information is easy to keep out of your git repository
  • Information is easy to keep out of your pushed images (what about local containers?)
  • Information is easy to keep out of .bash_history (possibly a bridge too far?)
  • Some applications expect secrets as part of a configuration file that contains other information
  • Some applications expect secrets as an environment variable
  • Some applications allow both

When I say 'easy' I mean that there is an ergonomically sane approach to handling these variables that protects the user from accidentally doing the wrong thing and triggering a security bulletin. The stress of the experience often becomes associated with (read: blamed on) the tools involved in the mistake.

Build-time Credentials

  • Project is built from one or more private repositories (ex: package.json allows git urls)
  • Builder may be behind a password protected proxy
  • Builder may be using a password protected cache
  • End user only cares about a working image (ie, they will use pull or FROM, never docker build)
  • Information is easy to keep out of your pushed images

1st Edit:

Documentation of what is and is not 'leaked' into a typical image, container

  • What files wind up in the image (Just the COPY and ADD? anything else?)
  • What docker-machine retains after an image is built (especially boot2docker, but what about others?)
  • How environment and command line variables are captured in the image, and where they are captured
  • Expectations on PR issuers regarding changing these behaviors

I feel like I'm missing a couple of big ones here. Anybody got something I forgot?


API Keys for whatever json services.

For example (and this is my real use-case), Docker build compiles a program, the API Key is necessary to authenticate me and upload the build product(s) to


@dreamcat4 I could be way off from what you're saying, but here goes:

Are you talking about using docker images for Continuous Deployment builds, and pushing the build artifacts to an archive at the end of a successful build? Personally I prefer doing this farther upstream (e.g., a post-build script in Jenkins), but if you're cross-compiling that might be a bit trickier.

In my world the build agent just builds binaries/archives and retains them as 'artifacts' of the build process, and something else pushes those out to infrastructure, tags the git repository, etc. That gives me an emergency backup of the artifacts if I have a production issue and, say, my npm, docker, or Artifactory repository is down for upgrades, or the network is glitching on me.


The point I was trying to make was about usage of API keys in general. There are many different and varied online JSON / REST services which a container may need to interact with (either at build time or run time)... which require API keys. It does not have to be specifically build-related.


@dreamcat oh, so auth tokens for REST endpoints? Do you think those are handled substantially differently than, say, your postgres password in a conf file, or would you handle them similarly?


Yeah, I think those two types should be considered differently in terms of evaluating their base minimum level of security.

API auth tokens tend to:

  • Not be passwords
  • Be revocable
  • Sometimes (a much smaller subset) be single-use / throw-away, essentially invalidating themselves
  • Be limited in scope to just a subset of functionality (i.e. read-only, or only able to trigger a specific action)

Passwords tend to / often:

  • Grant fuller account access / control
  • Once compromised, may be changed by the attacker to something else (lock-out), or another backdoor may be inserted (such as modification of other accounts held in the database, in the case of SQL)
  • Carry a substantially higher risk of "same password reuse" across other accounts, whereas API keys tend to be unique and not usable for anything else

So that does not necessarily mean the secrets solution must be different for those 2 types. Just that the acceptable minimum baseline level of security may be a little bit lower for API keys.

This minimum level matters if having strong security is more complex / problematic to set up (which may be true here in the case of docker secrets, or not, depending on how feasible / elegant the solution is).

And occasionally API keys or passwords can have stronger / weaker security. It's just that one-size-fits-all is not possible.

For example - my bintray API key: it is held in the same .git repo as my Dockerfile. To keep it secure, it is held in a PRIVATE git repo (accessed via SSH), so access to the API key is relatively well protected there. However, without docker having any built-in secrets functionality / protections of its own, the built docker image always includes the API key in plain text. Therefore the resulting docker image must be kept private, like the git repository... which has a knock-on (undesirable) effect: nobody else can publicly view / see the build logs / build status there.

Now that is not ideal in many ways. But the overall solution is simple enough and actually works (as in: yesterday). If a better mechanism were made in future, I would consider switching to it. But not if that mechanism were significantly more costly / complex to set up than the current solution I have already made. So extra-strong security (although welcome) might be overkill in the case of just one API key, which merely needs to be kept out of docker's image-layer cache with some kind of a new NOCACHE option / Dockerfile command.

Whereas a password needs something like vault or ansible-vault, and to be encrypted with yet another password or other strongly secure authentication mechanism. (Which, we would hope not, but may be a complex thing to set up.)


I think a client/server model (like in vault) for managing and streamlining (read: auditing, break-glass) all the secrets-related stuff would be good practice and would cover most of the use cases, if the implementation were done thoughtfully. I, personally, am not a fan of adopting a non-holistic approach, because this is an opportunity to raise the bar in best practices.

This implies a long running client (responsibility of the person that deploys an image) and/or a build-time client (responsibility of the builder). Maybe the former one could be transferred to the docker daemon somehow which provides authorized secrets at run time.


Indeed - I wholeheartedly agree with the previous comment. Not that I don't admire the creative ways in which people are solving the problem, but I don't think this is how it needs to be - let's try and think of a solution that could be used both during CI/D and runtime, as well as taking into account that containers might be orchestrated by Mesos/Kubernetes, etc.


Well, I think a bit of documentation would still be useful here, since Docker presents a few extra kinks in the problem space.

It looks like maybe the Vault guys are also looking at this from their end. I think this ticket is the one to watch:


Maybe this is something that could be collaborated upon.



Maybe this is something that could be collaborated upon.



+1 Docker + Hashi Corp Vault

gittycat commented Feb 8, 2016

Sorry, but I don't like how the solutions are getting more complex as more people pitch in. HashiCorp Vault, for instance, is a full client-server solution with encrypted back-end storage. That adds considerably more moving parts. I'm sure some use cases demand this level of complexity, but I doubt most would. If the competing solution is to use host environment variables, I'm fairly sure which one will end up being used by the majority of developers.

I'm looking at a solution that covers development (eg: github keys) and deployment (eg: nginx cert keys, db credentials). I don't want to pollute the host with env vars or build tools and of course no secrets should end up in github (unencrypted) or a docker image directory, even a private one.


@gittycat I agree with you in the sense that there are probably several distinct use-cases. Whereby some of the solutions should be simpler than other ones.

We certainly should want to avoid resorting to ENV vars though.

My own preference is leaning towards the idea that simple key storage could be achieved with something akin to ansible's "vault" mechanism, where you have an encrypted text file held within the build context (or sourced from outside / alongside the build context). Then an unlock key can unlock whatever plaintext passwords or API keys etc. from that file.

I'm just saying that after using ansible's own "vault" solution, it is relatively painless / simple. Hashicorp's vault is more secure, but it is also harder to set up and just generally more complex. Although I don't know of any technical reason why you couldn't still ultimately use it underneath as the backend (hiding / simplifying it behind a docker-oriented command-line tool).

I would suggest local file storage because it avoids needing to set up some complex and potentially unreliable HTTP key storage server. Secrets storage is very much a security matter, so it should be available to all users, not just enterprises. Just my 2 cents opinion.


+1 to a local file storage backend, for more advanced use cases, I would however prefer the full power of a Hashicorp Vault - like solution. When we are talking deployment, in an organisation, the argument is, that those people who provide and control secrets are other persons than those who use them. This is a common security measure to keep the circle of persons with controlling power limited to very trusted security engineers...

@david-a-wheeler david-a-wheeler referenced this issue in linuxfoundation/cii-best-practices-badge Feb 12, 2016

Add to criteria: No private credential leaks in the repo #159

gtmtech commented Feb 18, 2016

Don't know if this is any use or would work, but here's a bit of a leftfield suggestion for solving the case where I want to inject a secret into a container at runtime (e.g. a postgres password):

If I could override the entrypoint at docker run time and set it to a script of my choosing, e.g. /sbin/get_secrets, it could, after getting secrets from a mechanism of my choosing (e.g. KMS), exec the original entrypoint (thus becoming a mere wrapper whose sole purpose is to set up environment variables with secrets in them INSIDE the container). Such a script could be supplied at runtime via a volume mount. Such a mechanism would not involve secrets ever being written to disk (one of my pet hates), or being leaked by docker (not part of docker inspect), but would ensure they only exist inside the environment of process 1 inside the container, which keeps the 12-factor-ness.

You can already do this (I believe) if entrypoint is not used in the image metadata, but only cmd is, as entrypoint then wraps the command. As mentioned the wrapper could then be mounted at runtime via a volmount. If entrypoint is already used in the image metadata, then I think you cannot accomplish this at present unless it is possible to see what the original entrypoint was from inside the container (not the cmdline override) - not sure whether you can do that or not.

Finally, it would, I think, even be possible to supply an encrypted one-time key via traditional env var injection, which the external /sbin/get_secrets could use to request the actual secrets (e.g. the postgres password), thus adding an extra safeguard against docker leaking the one-time key.

I can't work out if this is just layers on layers, or whether it potentially solves the issue.. apologies if it's just the first.
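A rough sketch of that wrapper idea in plain shell (all names here are hypothetical, and the KMS lookup is stubbed out with a hard-coded value):

```shell
# create a stand-in for the volume-mounted /sbin/get_secrets wrapper
cat > get_secrets <<'EOF'
#!/bin/sh
DB_PASSWORD="s3cr3t"   # stand-in for a real KMS/Vault lookup
export DB_PASSWORD
exec "$@"              # hand control to the image's original entrypoint
EOF
chmod +x get_secrets

# simulate `docker run --entrypoint /sbin/get_secrets image original-cmd`:
# the wrapped process sees the secret, but it was never part of the
# container's metadata, so `docker inspect` would not show it.
./get_secrets sh -c 'echo "got: $DB_PASSWORD"'
```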

gtmtech commented Feb 19, 2016

@thaJeztah - I can confirm the solution I pose above works, secrets are manifested w/out being leaked by docker, they exist only in-memory for process 1 via environment variables which is perfectly 12-factor compliant, but they DO NOT show up in docker api under docker inspect, or anywhere else because they are specific to process 1. Zero work is required in the image for this to work. In my case I compiled a golang static binary to do the fetching of the secrets, so it could be volume mounted and overrode the entrypoint with this, the binary issues a sys exec to transfer control to the image-defined entrypoint when finished.

kaos commented Feb 19, 2016

@gtmtech Interesting. Would be interested in how you found out what the original entrypoint was from your get secrets binary..


Maybe an example code folder would make the approach a bit easier to demonstrate / understand.

gtmtech commented Feb 19, 2016

Example code and working scenarios here @dreamcat4 @kaos >

asokani commented Feb 19, 2016

I may be wrong, but why these complicated methods? I rely on standard unix file permissions. Hand over all secrets to docker with -v /etc/secrets/docker1:/etc/secrets readable only by root and then there's a script running at container startup as root, which passes the secrets to appropriate places for relevant programs (for example apache config). These programs drop root permissions at startup so if hacked, they cannot read the root-owned secret later. Is this method I use somehow flawed?

kaos commented Feb 19, 2016

Thanks @gtmtech :)
Unfortunately, we have no standard entrypoint, nor am I able to run docker inspect prior to the docker run in a controlled manner.. But I like your approach.


I may be wrong, but why these complicated methods? I rely on standard unix file permissions. Hand over all secrets to docker with -v /etc/secrets/docker1:/etc/secrets readable only by root and then there's a script running at container startup as root, which passes the secrets to appropriate places for relevant programs (for example apache config). These programs drop root permissions at startup so if hacked, they cannot read the root-owned secret later. Is this method I use somehow flawed?

I agree and think this approach ^^ should be generally recommended as the best way for RUNTIME secrets, unless anybody else here has a strong objection to that. After which we can then subsequently also list any remaining corner cases (at RUNTIME) which are not covered by it ^^.

Unfortunately I can't see the secret squirrel taking off, because it's simply too complicated for most regular non-technical people to learn and adopt as a popular strategy.

So then that leaves (you've probably guessed it already)...
Build-time secrets!

But I think that's progress! After a long time of not really getting anywhere, maybe this cuts things in half and solves approx. 45-50% of the total problem.

And if there are still remaining problems around secrets, at least they will be more specific / focussed ones, and we can keep progressing / tackle them afterwards.

gtmtech commented Feb 19, 2016

Yep, I won't go into too much detail, but these approaches would never work for a situation I am currently working with, because I need a higher level of security than they provide. E.g. no secrets unencrypted on disk, no valid decryption keys once they've been decrypted in the target process, regular encryption rotation, and a single repository for encrypted secrets (not spread across servers). So it's more for people who have to meet that level of security that I've suggested a possible approach.

secret_squirrel is anyway a hack in a space where I can't see any viable solutions yet, since docker doesn't yet provide a secrets API or a pluggable secrets driver (which hopefully it will at some point). But perhaps it serves to illustrate that setting ENV vars inside the container before process exec, rather than as part of the docker create process (or metadata), is a secure way of being 12-factor compliant with secrets. Maybe the docker development community can use that idea when they start to build out a secrets API/driver, if they think it's a good one!

Happy dockering!

mdub commented Feb 20, 2016

We've been using the kind of approach that @gtmtech describes, with great success. We inject KMS-encrypted secrets via environment variables, then let code inside the container decrypt them as required.

Typically that involves a simple shim entrypoint in front of the application. We currently implement that shim with a combination of shell and a small Golang binary, but I like the sound of the pure-Go approach.


@gtmtech @mdub I definitely would be pleased to see more of this.
@dreamcat4 I think the definition of "complicated" might be path dependent, which obviously is quite ok. Yet, it probably cannot be an abstractable judgment. Therefore, however, a security wrapper within the docker container doesn't seem something overly complicated to me at the design level. Another aspect is best practices: Those need to be looked at not from a developer-only perspective but from an operation perspective.
my 2 cents


Vault +1


Vault -1. Vault has some operational characteristics (unsealing) that make it really undesirable for a lot of people.

Having a pluggable API would make the most sense.


There's also ansible's vault. That is rather a different beast.


@gtmtech thanks for the suggestion, it inspired me to write this entrypoint:


#!/bin/sh
tmpfile=$(mktemp)
if [ -d "/var/secrets" ]; then
  for file in /var/secrets/*; do
    if [ -f "$file" ]; then
      file_contents=$(cat "$file")
      filename=$(basename "$file")
      # e.g. db_password -> DB_PASSWORD
      capitalized_filename=$(echo "$filename" | tr '[:lower:]' '[:upper:]')
      echo "export $capitalized_filename=$file_contents" >> "$tmpfile"
    fi
  done

  . "$tmpfile"
  rm -f "$tmpfile"
fi

exec "$@"

I just add it into the Dockerfile like this (don't forget to chmod +x it):

ENTRYPOINT ["/app/"]

And voila. ENV vars available at runtime. Good enough :)

blackjid commented Mar 3, 2016

If I understand correctly, the /var/secrets dir should be mounted through volumes, right?
Also, when there are comments about secrets not being written to disk, how bad is it to write them to disk and then delete them?


Nice one! You should use shred to safely delete the file though.


mdub commented Mar 3, 2016

Inspired by @gtmtech's "secret-squirrel", I've extended my secret-management tool "shush" to make it usable as an image entry-point:

ADD shush_linux_amd64 /usr/local/bin/shush
ENTRYPOINT ["/usr/local/bin/shush", "exec", "--"]

This decrypts any KMS_ENCRYPTED_xxx environment variables, and injects the results back into the environment.
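The KMS_ENCRYPTED_xxx convention can be sketched in plain shell (base64 here is only a stand-in for real KMS decryption, and all names are made up; shush's actual behaviour may differ):

```shell
cat > decrypt_env <<'EOF'
#!/bin/sh
# for every KMS_ENCRYPTED_xxx variable, "decrypt" it (base64 as a stand-in
# for KMS) and re-export the result as xxx, then exec the real command
for pair in $(env | grep '^KMS_ENCRYPTED_'); do
  name=${pair%%=*}
  value=${pair#*=}
  export "${name#KMS_ENCRYPTED_}=$(echo "$value" | base64 -d)"
done
exec "$@"
EOF
chmod +x decrypt_env

KMS_ENCRYPTED_DB_PASSWORD=$(printf '%s' s3cr3t | base64) \
  ./decrypt_env sh -c 'echo "$DB_PASSWORD"'
```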


So the thread begins with DO NOT DO ANY OF THESE THINGS.....

... but I don't see any PLEASE DO THESE THINGS INSTEAD...only various proposals/hacks that have mostly been rejected/closed.

What IS the official best-practice for now? As a docker user it's somewhat frustrating to see a long list of things we shouldn't do but then have no official alternatives offered up. Am I missing something? Does one not exist? I'm sure things are happening behind-the-scenes and that this is something that the docker team is working on, but as of right now, how do we best handle secret management until a canonical solution is presented?

Vanuan commented Mar 21, 2016

As far as I understood, if you need secrets in runtime, you should either use volumes (filesystem secrets) or some services like HashiCorp Vault (network secrets).

For build-time secrets, it's more complicated.
Volumes are not supported at build time, so you have to use containers to execute commands that modify the filesystem, and then use docker commit.

So what's missing is the ability to manage secrets at build time using nothing except a Dockerfile, without the need to use docker commit.

Some people even say that using the filesystem for secrets is not secure, and that the docker daemon should provide some API to provide secrets securely (using network/firewall/automounted volume?). But nobody even has an idea of what this API would look like and how one would use it.
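A hedged sketch of that commit-based workaround (requires a running docker daemon; every name below is an assumption, not a real image or command):

```shell
# Secrets are mounted read-only while the build commands run; since
# `docker commit` captures only the container filesystem (volume contents
# are not included), they never land in the committed image.
docker run --name buildstep \
  -v "$PWD/secrets:/secrets:ro" \
  build-image sh -c 'do-the-build'   # build-image / do-the-build are made up
docker commit buildstep app-image
docker rm buildstep
```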


When I think of short comings of env vars, I think of non-docker specific issues such as:

  1. Aggregating logs catching all the env vars, or a forgotten phpinfo left on a production web server - so handle secrets carefully and configure logging correctly.
  2. Maybe a trojan that exposes env vars - so don't run untrusted software.
  3. Attacks that exploit a weakness such as sql injection - so validate input and use a web app firewall.

The weaknesses presented at the top this thread:

Accessible by any process in the container, thus easily "leaked"

Cross apply 1 & 2 from above. Legit, but addressed by being careful, right? Plus, your docker container runs far fewer processes than a full stack web server.

What about config in env vars, but secret env vars have encrypted values and the app has the key in code? This is just obfuscation, because the key is in code, but it would require exploits to gain access to both the key and the env vars. Maybe use configuration management to manage the key on the docker host rather than in the app code. May help with rogue processes and accidental leaks but obviously not injection attacks from someone who has the key.

Preserved in intermediate layers of an image, and visible in docker inspect

Are people baking env vars into docker images rather than setting them at run time, or am I misunderstanding this one? Never bake secrets into artifacts, right? Yes, sudo docker inspect container_name gives the env vars, but if you're on my production server then I've already lost. sudo docker inspect image_name does not have access to my env vars set at run time.

Shared with any container linked to the container

How about don't use links and the new networking instead?

The only issue that seems like a docker issue and not universal is links...


Put me in the camp of folk who need a good way to handle secrets during docker build. We use composer for some php projects and reference some private github repos for dependencies. This means if we want to build everything inside of containers then it needs ssh keys to access these private repos.

I've not found a good and sensible way to handle this predicament without defeating some of the other things that I find beneficial about docker (see: docker squash).

I've now had to regress to building parts of the application outside of the container and using COPY to bring the final product into the container. Meh.

I think docker build needs some functionality to handle ephemeral data like secrets so that they don't find their way into the final shipping container.

Vanuan commented Mar 23, 2016

I think docker build needs some functionality to handle ephemeral data like secrets

This is a philosophical rather than a technical problem. Such ephemeral data would defeat docker's essential benefit: reproducibility.

Docker's philosophy is that your Dockerfile along with a context is enough to build an image.
If you need context to stay out of the resulting image, you should fetch it from the network and skip writing it to the filesystem, because every Dockerfile line results in a filesystem snapshot.

If secrets should not be part of an image, you could run an ephemeral container, which would mirror/proxy all your secret-protected resources and provide secret-less access. Mirroring, btw has another rationale:

You can share ssh key itself as well, but you wouldn't be able to control its usage.


@bhamilton-idexx if you make sure that the authentication to your private repositories works with a short-lived token, you don't have to worry about the secret being persisted in the docker image.
You have the build system generate a token with a TTL of 1 hour and make it available as an environment variable to the docker build.
Your build can fetch what it requires, but the secret times out shortly after your build completes, closing that attack vector.
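A sketch of that flow, assuming a Vault-style CLI for minting the token (command names are illustrative). Note that a build arg is still visible in docker history, which is exactly why the short TTL matters:

```shell
# Mint a token that expires in one hour and hand it to the build.
REPO_TOKEN="$(vault token create -ttl=1h -field=token)"
docker build --build-arg REPO_TOKEN="$REPO_TOKEN" -t myapp .
# An hour later, the token recorded in the image metadata is useless.
```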

Tebro commented Mar 24, 2016

Been reading a bunch of these threads now and one feature that would solve some usecases here and would have usecases outside of secrets is a --add flag for docker run that copies a file into the container, just like the ADD statement in Dockerfiles

Vanuan commented May 6, 2016

What's not clear is what should be kept secret, app-id, user-id or both.

Vanuan commented May 7, 2016 edited

Ok, the answer is both
But it's still not clear why it's any more secure than just plain firewalled access.
Maybe it's that each host secret should be tied with application (policy) secret?
I.e. if you have an access to host's secret you'd be able to access certain applications if you know their secret names?

Now we need to store 2 tokens somewhere?

jaredm4 commented May 7, 2016 edited

@Vanuan They should both be kept as secret as possible, yes.

The app-id's main purpose is to restrict access to certain secrets inside Vault via Policies. Anyone with access to the app-id gains access to that app-id's policies' secrets. The app-id should be provided by your deployment strategy. For example, if using Chef, you could set it in the parameter bags (or CustomJSON for OpsWorks). However, on its own, it won't allow anyone access to Vault. So someone who gained access to Chef wouldn't then be able to go access Vault.

The user-id is NOT provided by Chef, and should be tied to specific machines. If your app is redundantly scaled across instances, each instance should have its own user-id. It doesn't really matter where this user-id originates from (though they give suggestions), but it should not come from the same place that deployed the app-id (ie, Chef). As they said, it can be scripted, just through other means. Whatever software you use to scale instances could supply user-ids to the instances/docker containers and authorize the user-id to the app-id. It can also be done by hand if you don't dynamically scale your instances. Every time a human adds a new instance, they create a new user-id, authorize it to the app-id, and supply it to the instance via whatever means best suits them.

Is this better than firewalling instances? Guess that depends. Firewalling doesn't restrict access to secrets in Vault (afaik), and if someone gained access to your instances, they could easily enter your Vault.

This way, it's hard for them to get all the pieces of the puzzle. To take it one step further, app-id also allows for CIDR blocks which you should use. If someone somehow got the app-id and user-id, they still couldn't access Vault without being on that network.

(Again, this is my interpretation after grokking the documentation the best I could)

weemen commented May 7, 2016

@Vanuan @mcmatthew great questions! @jaredm4 really, thanks for this clarification, this will certainly help me. This is very useful for everyone who is looking for a more practical implementation!! If I have time somewhere in the upcoming two weeks then I'll try again!

sanmai-NL commented May 18, 2016 edited


Accessible by any process in the container, thus easily "leaked"

Can you support this claim? Non-privileged processes cannot access the environment variables of non-parent processes. See


Environment variables set for the container (via --env or --env-file) are accessible by any process in the container.


Of course, since they are children of the entry point process. It's the job of that process, or you in case it's e.g. a shell, to unset the secret environment variables as soon as possible.

What is more relevant is whether processes with a user ID other than 0 can access these environment variables inside and/or outside the container. This shouldn't be the case either, when the software you use inside the container properly drops privileges.
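The inheritance behaviour is easy to demonstrate with plain shell: children of the entrypoint see the variable until it is unset:

```shell
# Every child of this shell inherits SECRET until it is unset.
export SECRET=hunter2
sh -c 'echo "child sees: $SECRET"'    # prints: child sees: hunter2
unset SECRET
sh -c 'echo "child sees: $SECRET"'    # prints: child sees:
```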


I know it's off topic but has anyone else noticed that this issue has been active for almost a full year now! Tomorrow is its anniversary. 👍

davibe commented Jun 2, 2016 edited

Would it be possible for a container process to read env variables into process memory and then unset them (in the environment)? Does this fix most of the run-time security concerns?


@davibe the problem with that is that if the container or its process(es) restarts, those env vars are then gone, with no way to recover them.

davibe commented Jun 28, 2016 edited

I tried but it looks like env vars are still there after relaunch.

dade@choo:~/work/grocerest(master)$ cat test.js
console.log("FOO value: " + process.env.FOO);
delete process.env.FOO;
console.log("FOO value after delete: " + process.env.FOO);

dade@choo:~/work/grocerest(master)$ docker run --name test -it -e FOO=BAR -v $(pwd):/data/ node node /data/test.js
FOO value: BAR
FOO value after delete: undefined

dade@choo:~/work/grocerest(master)$ docker restart test

dade@choo:~/work/grocerest(master)$ docker logs test
FOO value: BAR
FOO value after delete: undefined
FOO value: BAR
FOO value after delete: undefined

maybe docker-run is executing my thing as a child of bash ? I think it should not..

Yajo commented Jul 12, 2016

I think the main problem/feature in all this is that you log into Docker as root, thus anything you put inside a container can be inspected, be it a token, a volume, a variable, an encryption key... anything.

So one idea would be to remove sudo and su from your container and add a USER command before any ENTRYPOINT or CMD. Anybody running your container should now get no chance to run as root (if I'm not wrong) and thus you could now actually hide something from him.

Another idea (best IMHO) would be to add the notion of users and groups to the Docker socket and to the containers, so that you could tell GROUP-A has access to containers with TAG-B, and USER-C belongs to GROUP-A so it has access to those containers. It could even be a permission per operation (GROUP-A has access to start/stop for TAG-B, GROUP-B has access to exec, GROUP-C has access to rm/inspect, and so on).
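The first idea is a small change in most Dockerfiles; a sketch (package manager, paths, and user name are illustrative):

```dockerfile
# Remove privilege-escalation tools and switch to an unprivileged user
# before the entrypoint, so a shell inside the container can't become root.
RUN apt-get purge -y sudo || true \
 && rm -f /bin/su /usr/bin/sudo \
 && useradd --create-home app
USER app
ENTRYPOINT ["/usr/local/bin/myapp"]
```

This does not stop anyone who can reach the Docker socket (docker exec -u 0 still works), which is the gap the second idea about socket-level users/groups would address.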


After researching this for a few hours, I cannot believe that there seems to be no officially recommended solution or workaround for build-time secrets, and something like seems to be the only viable option for build-time secrets (short of squashing the whole resulting image or building it manually in the first place). Unfortunately is quite specific to ssh keys, so off I go to try to adapt it for hosting git https credential store files as well...

bacoboy commented Jul 30, 2016

After what seems like forever (originally I heard it was slated for Q4 2015 release), AWS ECS seems to have finally come through on their promise to bring IAM roles to docker apps. Here is the blog post as well.

Seems like this combined with some KMS goodness is a viable near term solution. In theory you just have to make the secrets bound to certain principals/IAM roles to keep non-auth roles from asking for something they shouldn't and leave safe storage to KMS.

Haven't tried it yet, but it's on my short list...

Kubernetes also seems to have some secrets handling that reminds me a lot of Chef encrypted databags.

I understand this isn't the platform-independent OSS way that is the whole point of this thread,
but wanted to throw those two options out there for people playing in those infrastructure spaces who need something NOW


I just ran across something that might help in this regard: #13587

This looks like it is available starting with docker v1.10.0, but I hadn't noticed it till now. I think the solution I'm leaning toward at this point is using to store and retrieve the secrets, storing them inside the container in a tmpfs file system mounted to /secrets or something of that nature. With the new ECS feature enabling IAM roles on containers, I believe I should be able to use vault's AWS EC2 auth to secure the authorization to the secrets themselves. (For platform independent I might be inclined to go with their App ID auth.)

In any case, the missing piece for me was where to securely put the secrets once they were retrieved. The tmpfs option seems like a good one to me. The only thing missing is that ECS doesn't seem to support this parameter yet, which is why I submitted this today: aws/amazon-ecs-agent#469

All together that seems like a pretty comprehensive solution IMHO.


@CameronGo, thanks for the pointer. If I understand correctly this can't be used at build time though, or can it?


@NikolausDemmel sorry yes, you are correct. This is only a solution for run time secrets, not build time. In our environment, build time secrets are only used to retrieve code from Git. Jenkins handles this for us and stores the credentials for Git access. I'm not sure the same solution addresses the needs of everyone here, but I'm unclear on other use cases for build time secrets.


Jenkins handles this for us and stores the credentials for Git access.

How does that work with docker? Or do you not git clone inside the container itself?

wpalmer commented Aug 18, 2016

After reading through this issue in full, I believe it would benefit immensely from being split into separate issues for "build-time" and "run-time" secrets, which have very different requirements

kozikow commented Aug 23, 2016 edited

If you are like me and you come here trying to decide what to do right now, then FWIW I'll describe the solution I settled on, until something better comes around.

For run-time secrets I decided to use This only works if you use kubernetes. Otherwise vault looks ok. Anything secret either in generated image or temporary layer is a bad idea.

Regarding build-time secrets - I can't think of other build-time secrets use case other than distributing private code. At this point, I don't see better solution than relying on performing anything "secret" on the host side, and ADD the generated package/jar/wheel/repo/etc. to the image. Saving one LOC generating the package on the host side is not worth risking exposing ssh keys or complexity of running proxy server as suggested in some comments.

Maybe adding a "-v" flag to the docker build, similar to docker run flag could work well? It would temporarily share a directory between host and image, but also ensure it would appear empty in cache or in the generated image.

sagikazarmark commented Aug 23, 2016 edited

I am currently working on a solution using Vault:

  1. Builder machine has Vault installed and has a token saved locally
  2. When build starts, the builder machine requests a new temporary token only valid for minutes (based on the build, 1h would even be acceptable)
  3. Injects the token as build arg
  4. Docker image also has Vault installed (or installs and removes it during the build) and using this token it can fetch the real secrets

It is important that the secrets are removed within the same command, so that when docker caches the given layer there are no leftovers. (This of course only applies to build-time secrets)

I haven't built this yet, but am working on it.
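Step 4 might look like this inside the Dockerfile (vault path, field, and script names are illustrative); the fetch, use, and cleanup all happen in one RUN so nothing persists into the cached layer:

```dockerfile
ARG VAULT_TOKEN
# Fetch the real secret with the short-lived token, use it, and make sure
# nothing is left on the filesystem before this RUN (and its layer) ends.
RUN SECRET="$(VAULT_TOKEN=$VAULT_TOKEN vault read -field=value secret/build/deploy-key)" \
 && printf '%s' "$SECRET" > /tmp/key \
 && ./scripts/use-key.sh /tmp/key \
 && rm -f /tmp/key
```

Caveat: the ARG value itself is still recorded in docker history, so the token's short TTL is doing the real security work here.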


Somewhat related to @kozikow 's comment: "Regarding build-time secrets - I can't think of other build-time secrets use case other than distributing private code."

Maybe not a build time secret specifically, but I have a use-case need for (securing) a password during build-time in a Dockerfile in order to allow for an already-built artifact to be downloaded via a RUN curl command. The build-time download requires user credentials to authenticate in order to grab the artifact - so we pass the password as an environment variable in the Dockerfile right now (we're still in Dev). Builds are happening behind the scenes automatically, as we use OpenShift, and environment variables in the Dockerfile are output to logs during the build, like any docker build command. This makes the password visible to anyone that has access to the logs, including our developers. I've been desperately trying to figure out a way to send the password so that it can be used during the docker build, but then not have the password output to logs or end up being in any layers.

I also second what @wpalmer said about breaking this thread into run-time and build-time.

gtmtech commented Aug 24, 2016

I think it might be worthwhile defining some tests for whatever (runtime) secret mechanism anyone comes up with, because there are a lot of people on this thread who are advocating for very weak security.

As a start I suggest:

  • The secret does not show up in docker inspect
  • After process 1 has been started, the secret is not available within any file accessible from the container (including volume mounted files)
  • The secret is not available in /proc/1/cmdline
  • The secret is transmitted to the container in an encrypted form

Any solution suggested above that violates one of these is problematic.

If we can agree on a definition of what behaviour a secret should follow, then at least that will weed out endless solutions that are not fit for purpose.


@gtmtech great suggestions :)

After process 1 has been started, the secret is not available within any file accessible from the container (including volume mounted files)

I'm not sure I agree with this. While I do agree it should only be accessible from the container (in memory ideally) there are several cases where an application needs time to "start" and not have the files removed out from under it. I think something in memory for the duration of the container run (removed upon stop) is a bit better approach.


I'd add to the list of run-time requirements:

  • Container authentication/authorization when bootstrapping the first secret.

For instance, Vault provides for authorization with the AppRole Backend but is open-ended regarding how containers identify themselves.

Nick Sullivan presented on Cloudflare's PAL project a few weeks ago, promising to open source it soon, which should provide one potential answer to the authentication question using docker notary.

mixja commented Sep 12, 2016 edited

From an application's perspective there are three ways of dealing with this:

  1. Get a secret from an environment variable.
  2. Get a secret from a file.
  3. Get a secret from another system.

#1 and #2 above are generally the most common because the majority of applications support this mechanism. #3 is probably more ideal as it leaves the least "crumbs", but the application has to be developed specifically for this and often still has to have a credential to get the secret.

Docker is all about versatility and supporting a wide variety of use cases. On this basis 1. and 2. are most appealing from an application's view, despite the fact they both leave "crumbs" on the system.

One common approach I certainly use is to inject secrets via an entrypoint script (e.g. use a tool like credstash or plain KMS in AWS and combine with IAM roles). In this regard you actually do #3 above in the entrypoint script, and either do #1 (set an environment variable) or #2 (write to a file). This approach is dynamic and for #1 (environment variables), doesn't expose credentials in docker logs or docker inspect.

The nice thing about the entrypoint approach is you are separating the concerns of secrets management from the application.
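A minimal entrypoint of that shape (with `fetch-secret` standing in for credstash, Vault, or a KMS decrypt call; it is not a real tool):

```shell
#!/bin/sh
# Resolve secrets at container start, then exec the real command so it
# becomes PID 1 and this shell (and its argv) disappears.
export DB_PASSWORD="$(fetch-secret db_password)"
exec "$@"
```

Because the secret is injected after start, it shows up in neither docker inspect nor docker logs.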

This is an area where Docker could add functionality to avoid having to roll your own entrypoint scripts. Docker loves plugins and could provide a hook into the container lifecycle where it could support "secrets" provider plugins, which essentially perform the function of a manual entrypoint script and inject secrets into the container (via internal environment variable or file). So you could have a Hashicorp Vault secrets provider, an AWS KMS secrets provider etc. Docker perhaps could have its own provider using RSA encryption (via digital certs). This whole concept is loosely similar to Kubernetes' concept of secrets, which presents secrets on the container file system.

Of course there's the complexity of how do you authorize access to the secrets provider, which is a problem you face today regardless. With Hashicorp you might issue and pass a one-time/time-limited token for auth, with AWS it's IAM roles, with the Docker RSA encryption approach I mentioned, it might just be passing secrets encrypted using the Docker Engine public certificate.

o6uoq commented Sep 12, 2016

This thread is great. I hope we see more threads like this one where people from the community and all walks of profession are able to share their experiences, thoughts and solutions.

The "secret zero" issue is a tricky one. Build-time or run-time? Both have their pros and cons, and obvious security measures and flaws (and hacks and workarounds!).

That said, I've been thinking a lot about how the management of a pass/key comes down to the application and/or service.

Something we will be working on in the coming months is to build a shared, global configuration backing service via key/value pairs, distributed by Consul and made available as an environment variable (or injected if using environment variables is not supported). This only supports your insecure values. For secure, we will move to Vault and treat it like a backing service - much like a database or any other dependency.

Code, config and secrets will be provided via backing service(s). In this case, we use Stash, Consul and Vault. As long as the dependency is up, so is the ability to pull config and secrets as needed.

I've not seen this as a solid solution anywhere, hence I'm posting about it here. But to bring it back to the purpose of this thread, it's one approach we are going to experiment with to get around the Docker/secret issue. We will build applications which support this natively, rather than relying on the frameworks and platforms around them in which they run.

agilgur5 commented Sep 21, 2016 edited

With regard to build-time secrets, Rocker's MOUNT directive has proven to be useful for creating transient directories and files that only exist at build-time. Some of their templating features may also help in this situation but I haven't thoroughly used those yet.

I'd love to see this functionality implemented as a Builder plugin in Docker core (as well as some of the other useful features Rockerfiles have)!


barrystaes commented Oct 18, 2016 edited

I see all 4 proposals currently in OP are about secret storage 🙁

I'd say Docker should facilitate passing a secret/password to a docker instance, but storing/managing those secrets is (and should be) out of scope for Docker.

When passing a secret, I'd say a run parameter is almost perfect, except that it is usually logged. So I'd narrow this down to a non-plaintext parameter feature. One approach would be encryption with keys generated per docker instance.

As for how to manage secrets, I'd say anything the user wants, from a homebrew bash script to integration with software like Kubernetes.

binarytemple commented Oct 18, 2016 edited

What's wrong with just implementing MOUNT like rocker mounts as @agilgur5 remarked earlier? I can't believe this debate has gone so long that a team has had to effectively fork the docker build command in order to satisfy this really easy use case. Do we need another HTTP server in the mix? KISS.


I spent so many hours on this subject ...

For now, the best way I found to manage secrets during the build phase is building in two steps, so two Dockerfiles. Here's a good example.

Habitus seems to be another option, but in my case I do not want to add another tool, mainly because I would like the build process on the CI server AND on the user's computer to stay simple / the same.
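The two-step approach can be sketched like this (image names and paths are illustrative): a throwaway builder run does the secret-requiring work, and the release Dockerfile only COPYs the finished artifact:

```shell
# Step 1: build the artifact with the secret mounted at run time only.
docker run --rm \
  -v "$HOME/.ssh/id_rsa:/root/.ssh/id_rsa:ro" \
  -v "$PWD/dist:/dist" \
  builder-image \
  sh -c 'git clone git@github.com:example/app.git /src && make -C /src && cp /src/app.tar /dist/'
# Step 2: the release image never sees the key.
docker build -f Dockerfile.release -t app:latest .
```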

And what about docker-in-docker (dind) way ?


Here's an example of a two-step build with dind, as I mentioned above:

Feel free to comment ...


Interesting. Kind of reminds me of how OpenShift does builds.

It looks to me like you're passing the password at the command line. Is there any way around that?


Note that there's a work-in-progress PR for build-time secrets here; #28079 (runtime secrets for services will be in docker 1.13, see #27794)


@thaJeztah :
About #28079, I'm a little bit pessimistic, having seen so many PRs around this subject fail during the last two years ...
I don't want to have swarm as a dependency. Some of my customers use another cluster orchestrator.

I don't understand what you mean?
1/ Passwords were passed to the "container builder", which is not the final image. This builder does a docker build and produces an image based on the Dockerfile.release. There are no secrets stored in this final image's history.
2/ Feel free to use docker-compose (example) if you don't want to pass the password on the command line


@BenoitNorrin I think it may be expanded to non-swarm in future, but @diogomonica may know more on that


Sounds like it:

This is currently for Swarm mode only as the backing store is Swarm and as such is only for Linux. This is the foundation for future secret support in Docker with potential improvements such as Windows support, different backing stores, etc.

hisapy commented Jan 3, 2017

I think a solution would be to encrypt some parts of the information passed from a docker-compose file.

For example, run docker inspect and the encrypted information should be displayed/marked as encrypted. Then docker inspect --encryption-key some_key_file would show all the encrypted info, unencrypted.

On the other hand, inside the containers apps should be able to implement different mechanism to access and decrypt this encrypted info for their use.

I think encryption is the key :)


Since I didn't see it mentioned, here's another good article about handling secrets in AWS ECS:



There's a new "docker secret" command in Docker 1.13. This issue should be able to be closed when the documentation for that feature is adequate to the use cases mentioned here.

mixja commented Jan 22, 2017

The docker secret command looks to only apply currently to Docker Swarm (i.e. docker services) so is not currently viable for generic Docker containers.

shane-axiom commented Jan 22, 2017 edited

Also docker secret only manages run time secrets, not build time secrets.


@binarytemple Everyone wants all the features right now. If stuff isn't ready then it's just not ready. Limiting the scope of a new feature is definitely not a bad thing as even with a limited scope there's always room for improvement.

If someone is really gung-ho about getting a feature in then they should talk to a maintainer(s) on how they can contribute the work for that.

bacoboy commented Jan 23, 2017

I thought the same thing as @mixja: the secret command only helps swarm users and is not a more general solution (like they did with attaching persistent volumes). How you manage your secrets (what they are and who has access to them) is very system dependent and depends on which bits of paid and/or OSS you cobble together to make your "platform". With Docker the company moving into providing a platform, I'm not surprised that their first implementation is swarm based, just as Hashicorp is integrating Vault into Atlas -- it makes sense.

Really how the secrets are passed falls outside the space of docker run. AWS does this kind of thing with roles and policies to grant/deny permissions plus an SDK. Chef does it using encrypted databags and crypto "bootstrapping" to auth. K8S has their own version of what just got released in 1.13. I'm sure mesos will add a similar implementation in time.

These implementations seem to fall into 2 camps.

  1. pass the secret via a volume mount that the "platform" provides (chef / docker secret / k8s)
  2. pass credentials to talk to an external service to get things at boot (iam / credstash / etc)

I think I was hoping to see something more along the lines of the second option. In the first option, I don't think there is enough separation of concerns (the thing doing the launching also has access to all the keys), but this is preference, and like everything else in system building, everybody likes to do it different.

I'm encouraged that this first step has been taken by docker and hope that a more general mechanism for docker run comes out of this (to support camp #2) -- which sadly means I don't think this thread's initial mission has been met and shouldn't be closed yet.


really simple yet very effective design

@bacoboy , @mixja - a single-node swarm and a single-container service is not so bad:
docker swarm init, docker service create --replicas 1

to me it is logical that docker swarm will be the default for running containers/services from now on.

wpalmer commented Jan 30, 2017

Am I correct in thinking that this new swarm-based proposal only impacts run-time secrets? I really don't see any need for special handling of run-time secrets, as there are already so many ways to get secrets into a running container.

build-time secrets are important, and as far as I know, this proposal does not address them.


To inject build-time secrets, we can now use docker build --squash to do the following safely:

COPY ssh_private_key_rsa /root/.ssh/id_rsa
RUN git pull ...
RUN rm -rf /root/.ssh/id_rsa

The --squash flag will produce a single layer for the entire Dockerfile: there will be no trace of the secret.

--squash is available in docker-1.13 as an experimental flag.


@hmalphettes This means you miss out on the benefits of shared lower layers between builds.

cpuguy83 commented Feb 1, 2017

This is definitely not the intention of squash. I'd still be very careful about adding secrets like this.

@zoidbergwill lower layers are still shared.

ehazlett commented Feb 1, 2017 edited

I agree 100% with @cpuguy83. Relying on a build-time flag to keep out secrets would be pretty risky. There was a proposal PR for build time (#30637); I'll work on a rebase to get more feedback.

timka commented Feb 16, 2017

@wpalmer If you have automated image builds, your tooling should know how to get build-time secrets.

For instance, you may want to keep your build-time secrets in an Ansible encrypted vault baked into an image and grant containers running from that image access to the run-time secret that keeps your vault password.



Why do we keep confusing build-time secrets with runtime secrets? There are already many good ways for docker (or related tools like kubernetes) to provide runtime secrets. The only thing really missing is build-time secrets. These secrets are not used at run time; they are used at install time, e.g. for internal repositories. The only working way I have seen in this and related topics (though also advised against) is exposing an http server to the container during build time. The http server approach makes it quite complicated to actually get to those secrets.


@pvanderlinden You can also do that with a two-step build.
Here's an example:


@timka as mentioned it's not desirable to bake credentials into the image as that poses a security risk. Here is a proposal for build time secrets: #30637


@BenoitNorrin Not sure how that would work in my (and others') use case.
The packages which need to be installed are already built when I start the docker build process. But the docker build will need to install these packages, so it will need access to an internal anaconda repository and/or pypi server (python). The locations and passwords are private of course.
Looks like #30637 is another attempt, hopefully this will end up in docker!

wpalmer commented Feb 16, 2017

@timka the first half of your message seems to mention build-time secrets, but the second half explicitly talks about run-time secrets. Run-time secrets are simple. My current "solution" for build-time secrets is to pre-run, as a completely separate step, a container which fetches private data using a run-time secret. Another step then merges this into the tree, before running a regular docker build command.

The alternative, if build-time secrets were a standard feature, would be to run these steps within the Dockerfile.

My tooling does know how to run these steps automatically, but I needed to bake this myself, which is somewhat absurd for such a common desire.
