Proposal: The Docker Vault #10310

Closed
cyphar opened this Issue Jan 23, 2015 · 34 comments

@cyphar
Contributor

cyphar commented Jan 23, 2015

The Docker Vault

(aka The Art of Injecting Ethereal Data into Containers)

This proposal is a docs-based follow-up to #6075 and #6697, with some
clarifications and improvements.

Purpose

The purpose of the "Docker vault" is to allow containers (whether created in an
intermediate build step or created explicitly) to access data that is considered
"secret", while not allowing said data to be stored in images.

In essence, the Docker vault allows users to inject arbitrary files into a
container context while being assured that said files will not be leaked into
image layers if that container is committed.

This allows for certain use-cases to be accomplished with ease, such as:

  • Injecting keys or information required to generate a final image (which is
    allowed to be publicly available), while not storing said keys (which are
    not allowed to be publicly available) in the intermediate layers.
  • Injecting keys or other information required for the running of a Docker
    container (such as SSL private keys) when said keys are not allowed to be
    publicly available.

Overview

Data injected into a container using the Docker vault will NOT (under any
circumstances) be saved when an image is created from the container. The data
is (to all intents and purposes) ethereal: you can only see it in a container
that has access to it; it cannot be saved into an image.

Containers can be given access to this data, and will be able to access it
throughout their lifetime (i.e. purging files from the Docker vault will not
remove them from running containers).

The data in the vault is conceptually stored in "boxes", where a "box" can be
considered a collection of files that are related in some fashion. Each box is
given a name, and containers are given access to specific boxes. An image can
hint what boxes it requires (just as it can hint the volumes it requires) and
boxes can be "aliased" in a container (you can tell a container that a box named
a is actually a box named b which the image expects).

Box names can consist of any characters except "/" and ":".
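A name rule this simple is easy to pin down in code. As an illustrative sketch
(a hypothetical helper, not part of the proposal itself):

```python
def valid_box_name(name):
    # Box names may be any non-empty string containing neither "/" nor ":"
    # (":" is reserved for the box:alias and box:file syntaxes, "/" for
    # paths inside a box).
    return bool(name) and "/" not in name and ":" not in name
```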

It is important to note that vault operations will NEVER affect a running
container, because a running container must remain consistent throughout its
lifetime.

Files in the Docker vault persist across restarts of the Docker daemon.

Containers access files they have been given access to by accessing the files
in /run/vault/<box>/.... Boxes can store directories too. All files and
directories in /run/vault/ are owned by UID and GID 0.

Modifications of any kind to a box inside a container will not be reflected
in the vault's stored boxes. To this end, boxes are mounted ro inside a
container (and remounting them rw will still not allow you to modify
the vault's stored boxes inside a container).

Usage

There are several ways to access the Docker vault functionality:

  • docker vault ...: add, query, and delete data from the Docker vault.
  • docker create --inject: inject files from the Docker vault into a container.
  • docker run --inject: inject files from the Docker vault into a container.
  • docker build --inject: inject files from the Docker vault into the
    intermediate containers during the build.
  • The Dockerfile syntax explained below.

docker vault

This subcommand has three classes of subcommands, with 6 subcommands in all:

Management (creates and destroys boxes):

  • docker vault create [options] <box>...
  • docker vault destroy [options] <box>...

Modification (adds files to and removes files from a box):

  • docker vault add [options] <box> <file>...
  • docker vault remove [options] <box> <file>...

Querying (lists information or accesses data inside boxes or the boxes
themselves):

  • docker vault list [options] [<box>...]
  • docker vault read [options] <box>[:<file>]...

It also has the following aliases (for the purposes of Unix-like simplicity):

  • docker vault ls => docker vault list
  • docker vault rm => docker vault remove
  • docker vault cat => docker vault read

NOTE: These could just be used as the proper commands and the long-form
ones ignored...

It is very important to note that NONE of the docker vault subcommands
will affect running containers (whether or not they are using a box affected by
the vault operation). Containers use copies of vault data, not references to the
vault data itself -- in order to maintain consistency of a single container's
lifetime.

docker vault create [options] <box>...

This subcommand creates a new box called "<box>" with no files inside it. If
a box with the given name already exists, this command will emit an error and do
nothing.

Options:

  • None.

docker vault destroy [options] <box>...

This subcommand obliterates the box called "<box>" and any files stored
within it.

Options:

  • -f, --force: ignore errors if the given box name does not exist.

docker vault add [options] <box> <file>...

This subcommand adds the given list of files to the given box. The file names
are preserved when adding the files to the box.

If the box does not exist, the command will emit an error and do nothing.

If one of the paths in the given list does not exist, the command will emit an
error and continue execution.

If the given path points to a symlink, the symlink itself is copied verbatim.

If one of the path components in the path is a symlink, the symlink is followed
as though the box root was the root filesystem (it is scoped to the box).

If the path has some directory components, these will be reflected when the box
is injected into a container. In other words, boxes can store directories.
However, the path will be sanitised: leading relative components (../a/b/c)
and absolute prefixes (/a/b/c) will not be reflected. In both cases the stored
path would be precisely a/b/c.
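The sanitisation rule above can be sketched in Python (an illustrative helper,
not the actual implementation):

```python
import posixpath

def sanitise(path):
    # Normalise the path, then drop empty, "." and ".." components so the
    # result is always a relative path scoped to the box root.
    parts = posixpath.normpath(path).split("/")
    return "/".join(p for p in parts if p not in ("", ".", ".."))
```

Under this rule, `../a/b/c`, `/a/b/c` and `a/b/c` all map to the same stored
path `a/b/c`.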

If the given path points to a directory, the command will emit an error and
continue execution unless the -r flag is set. If the -r flag is set, then all
of the files and directories in that directory are also added to the box (as
if their full paths were also included in the command). If a directory with
the given name already exists in the box, the directories are merged.

Options:

  • -r, --recursive: recurses directories given, adding all of the contents of
    the directory to the box in addition to the directory itself.

docker vault remove [options] <box> <file>...

This subcommand removes the specified files from the given box.

If the path doesn't exist inside the box, then the command will emit an error
and continue execution.

If one of the path components in the path is a symlink, the symlink is followed
as though the box root was the root filesystem (it is scoped to the box).

If the path is a directory, then the command will emit an error and continue
execution unless the -r flag is set. If the -r flag is set, then the
directory and its contents are recursively removed.

Options:

  • -r, --recursive: recurses directories given, removing all of the contents
    of the directories from the box in addition to the directory itself.

docker vault list [options] [<box>...]

This subcommand lists all files and directories stored inside the given boxes.

If no boxes are given, docker vault will list the files in every box stored in
the Docker vault.

If a given box does not exist, the command will emit an error and continue
execution.

Options:

  • -b, --boxes: only print the name of each box, not their contents.
  • -f FORMAT, --format=FORMAT: formats each line with the given format
    string.
  • -r PATTERN, --pattern=PATTERN: only print entries where the file
    paths match the given regular expression.

docker vault read [options] <box>[:<file>]...

This subcommand reads the contents of each file specified in the command line
and prints them to stdout. No information is printed about which box or file
the data came from.

If a path or box doesn't exist, the command emits an error and continues
execution.

If one of the path components in the path is a symlink, the symlink is followed
as though the box root was the root filesystem (it is scoped to the box).
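The box-scoped symlink resolution described here can be sketched as follows
(a simplified illustration: it ignores symlink chains and assumes an
already-sanitised relative path):

```python
import os

def resolve_scoped(box_root, rel_path):
    # Resolve rel_path under box_root, treating the box root as "/" for
    # absolute symlink targets, so resolution cannot escape the box.
    cur = box_root
    for comp in rel_path.split("/"):
        cur = os.path.join(cur, comp)
        if os.path.islink(cur):
            target = os.readlink(cur)
            if target.startswith("/"):
                # An absolute target is re-rooted at the box root.
                cur = os.path.join(box_root, target.lstrip("/"))
            else:
                cur = os.path.join(os.path.dirname(cur), target)
    return cur
```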

If a path is a directory, the command emits an error and continues execution
unless the -r flag is set. If the -r flag is set, then the directory is
recursed and the contents of every file within it are printed.

Options:

  • -r, --recursive: recurses directories given, printing the contents of
    every file inside them.

docker create [--inject <box>[:<alias>]]...

This option to docker create allows you to inject boxes into a container on
its creation.

If an alias is specified, then the box is injected into /run/vault/<alias>.
Otherwise, the box is injected into /run/vault/<box>.
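The mount-point rule is mechanical enough to express directly (a hypothetical
helper, for illustration only):

```python
def mount_point(box, alias=None):
    # A box is always mounted under /run/vault/; an alias only changes the
    # directory name inside the container, not the box stored in the vault.
    return "/run/vault/" + (alias or box)
```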

docker run [--inject <box>[:<alias>]]...

This option to docker run allows you to inject boxes into a container on its
creation.

If an alias is specified, then the box is injected into /run/vault/<alias>.
Otherwise, the box is injected into /run/vault/<box>.

docker build [--inject <box>[:<alias>]]...

This option to docker build allows you to inject boxes into each of the
intermediate build containers during image creation. These boxes will (of
course) not be stored in the resultant image.

If an alias is specified, then the box is injected into /run/vault/<alias>.
Otherwise, the box is injected into /run/vault/<box>.

Dockerfile

Images can hint what boxes they expect in order to run (much like how volume
hinting works). If a hint is not fulfilled, then an empty box is mounted
instead.

Essentially the syntax has two forms (to mirror the VOLUME instruction):

BOX <box>
BOX ["<box>"...]

The first format is a legacy format, only allowing for one box name to be
specified. The second is the newer format, and it accepts a JSON array of
boxes.
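A minimal parser for the two forms might look like the following (illustrative
only; the real Dockerfile parser would handle this differently):

```python
import json

def parse_box(line):
    # "BOX secrets"    -> ["secrets"]   (legacy single-name form)
    # 'BOX ["a", "b"]' -> ["a", "b"]    (JSON-array form)
    _, _, args = line.partition(" ")
    args = args.strip()
    if args.startswith("["):
        return json.loads(args)
    return [args]
```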

Internals

This section documents the following internals:

  • Changes to the RESTful API.
  • Changes to the container and image information.

RESTful API Changes

Several new endpoints will be added as a result of this functionality:

  • PUT /vault/<box>/
  • DELETE /vault/<box>/
  • PUT /vault/<box>/<path>
  • DELETE /vault/<box>/<path>
  • GET /vault/
  • GET /vault/<box>/
  • GET /vault/<box>/<path>
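Read alongside the CLI above, the endpoints map roughly one-to-one onto the
subcommands. A hypothetical summary (my reading, not part of the proposal
text):

```python
# (method, path template) -> corresponding `docker vault` operation.
VAULT_ENDPOINTS = {
    ("PUT",    "/vault/{box}/"):       "vault create",
    ("DELETE", "/vault/{box}/"):       "vault destroy",
    ("PUT",    "/vault/{box}/{path}"): "vault add",
    ("DELETE", "/vault/{box}/{path}"): "vault remove",
    ("GET",    "/vault/"):             "vault list (all boxes)",
    ("GET",    "/vault/{box}/"):       "vault list <box>",
    ("GET",    "/vault/{box}/{path}"): "vault read",
}
```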

And several modified by this functionality:

  • POST /containers/<name>/create
  • POST /build

All of these are readily apparent from the command-line documentation above.

Container and Image Changes

Basically, both the container and image structures need to be updated to store:

  • Hinted boxes (images).
  • Injected boxes (containers).

Both are readily apparent from the command-line documentation above.

/cc @shykes (this is a long one)

@cyphar cyphar changed the title from Proposal: Docker Vault (The Art of Injecting Ethereal Data into Containers) to Proposal: The Docker Vault (The Art of Injecting Ethereal Data into Containers) Jan 23, 2015

@smarterclayton
Contributor

smarterclayton commented Jan 23, 2015

I like the idea of this being separate from the volumes, and using the direct mount points into the container. Forcing all secrets to be files, and to have a well known /run/ path also fits in with several other proposals and simplifies the interaction with the processes. It does mean the person providing the box and the image have to be in agreement on directory structure - so you have to change your images to take advantage of boxes. Did you consider letting the container specify where the boxes are mounted?

@cyphar
Contributor

cyphar commented Jan 24, 2015

@smarterclayton IMO it's much cleaner that people make their own symlinks to /etc/vault/* because then it's possible to nicely implement the ethereal magic (by mounting a tmpfs onto /etc/vault, copying the data and then unmounting from the host mountpoint leaving the container mountpoint). You could probably also do it per-box and then allow arbitrary injection paths but that just sounds ... dangerous to me. Also, it seems to me that allowing people to randomly mount boxes in random locations is a bad idea... But if people want that feature, it's totally doable.

@cyphar cyphar changed the title from Proposal: The Docker Vault (The Art of Injecting Ethereal Data into Containers) to Proposal: The Docker Vault Jan 24, 2015

@cyphar
Contributor

cyphar commented Jan 25, 2015

/ping @alexlarsson (since you were the main proponent of docker secrets last time)

@thaJeztah
Member

thaJeztah commented Jan 25, 2015

I like the idea; not sure what the implications are of allowing a BOX to be specified in a Dockerfile. If someone was able to guess the name of one of my boxes and submit an image using that box to the registry, would it obtain access to my "secrets" if that image was ever run on my host?

Portability could be an issue (thinking of deploying images to swarm; I need to have all boxes in place before deploying, right?)

Just some initial things that came up while reading. Will read again and give it more thought.

@cyphar
Contributor

cyphar commented Jan 25, 2015

@thaJeztah No, BOX is just saying "I expect to have a box with this name". Just like how VOLUME doesn't automatically give you access to the host. I might expand on this a bit in the proposal.

@smarterclayton Also, boxes are directories that can have complete directory trees inside them. They're not just single files.

@ncoghlan

ncoghlan commented Jan 27, 2015

From a user perspective, this sounds quite attractive - no separate daemon to worry about, and access to secrets on the host is controlled using familiar file system permission mechanisms, and via volume access from inside containers.

It also decouples transfer of secrets from the orchestration layer - if a container being spun up needs a particular secret, then the orchestration layer will need to make sure the secret is in place on the container host first, but the container itself doesn't need to care how the secret got there.


@TomasTomecek
Contributor

TomasTomecek commented Jan 29, 2015

A couple of questions: how (and where) is the vault stored on disk? Is the box directly mounted in the container? If so, what happens if I inject box a into a container and destroy it right away? If it's mounted, it seems that containers are capable of changing a box (judging from the 755 & 644 perms).
Will I be able to figure out if my image x was built with some box? What will be written in the build log?

Nitpick: having two commands, destroy and remove, is really confusing, IMO it should be remove-file.

Anyway, I really like this idea! I hope this will make it to 1.6 (together with squashing #9591).

@gdm85
Contributor

gdm85 commented Jan 29, 2015

@cyphar how would you manage attempted operations at build time on paths provided by the BOX? Or would they all fail by design?

@ncdc
Contributor

ncdc commented Jan 29, 2015

How would you feel about being able to flag certain box files as containing a set of key=value environment variables, and having Docker add them to the runtime environment of the container when creating and running it? Also, we couldn't have these env vars persisted when committing the container - they'd need to be separate from the container config.

@gdm85
Contributor

gdm85 commented Jan 29, 2015

@ncdc using environment variables is a reiteration of a bad pattern. They eventually go off-sync after a restart and pollute the environment for all processes

@ncdc
Contributor

ncdc commented Jan 29, 2015

@gdm85 that may be true, but I'm thinking about images whose executable processes expect to use environment variables and trying to minimize necessary image modifications.

To play devil's advocate, it's probably trivial to create a script to be the image's Command that could read the files from the box and turn them into env vars before invoking the actual executable.

@cyphar
Contributor

cyphar commented Jan 30, 2015

@gdm85 If a BOX requirement isn't fulfilled, then an empty box is mounted (from the proposal). If you're talking about a build process modifying a box inside a container, I'm still unsure if we should mount boxes ro (which IMO is the most logical way to mount it) or if we should just make changes to a mounted box not modify the box itself (which is what would happen with the implementation I'm working on right now).

@ncdc That is a completely separate proposal to this. The Docker vault is only concerned with storage of ethereal data on a container filesystem. Also, I personally don't like using environment variables from a Dockerfile (it's always seemed unsafe IMO). But in either case, you can easily do what you want with the current Docker vault proposal -- store the environment variables in a box and then source the file or read it as a config (however you want to do it).

@cyphar
Contributor

cyphar commented Jan 30, 2015

@TomasTomecek

[...] how (and where) is vault stored on disk? [...]

The storage of the vault is not part of this proposal (it's an implementation detail which doesn't actually affect the API -- you could conceivably even store the vault data in memory and serialise it to disk, and it still wouldn't change the API). However, my current implementation will create a vault at /var/lib/docker/vault; the boxes will be stored in /var/lib/docker/vault/boxes/<boxname>, with some vault information stored in /var/lib/docker/vault/vault.json.

[...] Is the box directly mounted in the container?
If so, what happens if I inject box a into a container and destroy it right away? [...]

When you inject a box, a copy is mounted into the container, because that's the only way to effectively uphold the rule that changing a box will not change the state of containers which already have it mounted. And this answers your next question too: if you have a running container with a box mounted, no operations on the box will affect the container's state.

[...] If it's mounted it seem that containers are capable of changing a box (judging from the 755 & 644 perms). [...]

Two things:

  1. I forgot to update the proposal, actually the permissions on a file in a box are the same permissions set on the file when you add it (whoops).
  2. I'm still unsure if we should mount boxes ro or if containers should be able to write to a box (but obviously writes to a box won't change the vault's stored boxes -- see above about mounting copies). In either case, I should probably add that to the proposal.

[...] Will I able to figure out if my image x was built with some box? What will be written in a build log? [...]

Given that you have to specify which boxes will fulfil the image requirements (or just which ones you'll add), I can probably add some information in docker inspect to let you find out which boxes the container has mounted.

[...] having two commands, destroy and remove, is really confusing, IMO it should be remove-file. [...]

create and destroy IMO sound like related commands. Same with add and remove. If you can come up with a nicer one-word term for destroy that maps to create, we could use that. IMO remove-file looks ... ugly (and I'd have to have add-file too).

@proppy
Contributor

proppy commented Jan 30, 2015

I think it would be nice for the proposal to sum up previous discussions and detail why having a new type of object (vault) is preferable to introducing a new type/attribute of volume (:vault) with similar properties (allowed on build, constrained mount path, instantiated per container).

Esp. if #8484 lands before this, one could imagine having additional commands to do CRUD operations on volume content through the remote API, similar to the nice REST API surface you are proposing.

@TomasTomecek
Contributor

TomasTomecek commented Jan 30, 2015

@cyphar

the actual storage implementation

was just curious

mounting a box in a container

Copying the content of the box makes total sense to me. Therefore I don't think it's worth discussing whether it should be mounted ro -- it's up to the container what it does with its own box instance.

destroy vs. remove

it was just a nitpick (how about remove-from and add-to)

@cyphar
Contributor

cyphar commented Jan 30, 2015

@proppy I don't think that the Docker vault should be a special case of Docker volumes, just because the Docker vault allows you to store the data you want inside the Docker daemon and redistribute it to containers without having to worry about copies or consistency or whatever. In my opinion, it should be kept separate to volumes just because of the different purposes of both (given that vault boxes are copy-on-mount while volumes are just bind-mounts of host directories). But that's just my $0.02.

@proppy
Contributor

proppy commented Jan 30, 2015

@cyphar yes, I think that could be nice to sum up that point in the proposal description, esp. since I think this was also raised on the previous secrets proposal.

In my opinion, it should be kept separate to volumes just because of the different purposes of both (given that vault boxes are copy-on-mount while vaults are just bind-mounts of host directories)

s/while vaults/while volume/
If we think that use cases other than secret management could find different types of volumes useful (:ro, :rw, :copy, :cow?), maybe that's worth putting together into a separate proposal.

If such a feature existed (copy on mount and copy on write volumes), would vault rely on it?

Contributor

cyphar commented Jan 30, 2015

@proppy If there were a helper in the execdrivers that let you copy-on-mount (which I'm fairly sure is not a legit term) a box, by bind-mounting a tmpfs, copying the box's contents into it, and then unmounting the host mountpoint, then I would probably use it. The benefit of keeping the Docker vault separate from volumes is that it allows Docker vault-specific features to be implemented in the future (such as ACLs, or encryption of data stored inside the Docker vault).

As an aside, I think that such options to volumes would be a pretty cool feature (and I'd love to write :cow in a docker subcommand 😉) and might be useful.

Member

thaJeztah commented Jan 30, 2015

Brainstorming here (so it may not make sense);

What if boxes were implemented as images, kept in a separate image-store?

  • Using a box would create a container of the image (a "box-instance")
  • COW would be possible (containers use COW)
  • If COW is not wanted, run the container with -v /vault/boxname and use --volumes-from <box-instance> (the way volumes are implemented in docker; declaring a volume on run will copy the content of the container to the host)
  • Read-only would do --volumes-from <box-instance:ro>

Obviously, it should not be possible to commit the containers (box instances) and/or push the boxes to the registry. However it would open the possibility to have a dedicated (private) registry for storing boxes. That (private) registry could be used to automatically deploy boxes to the docker hosts, without having to (manually) copy the files.

Again, just brainstorming here..

Member

thaJeztah commented Jan 30, 2015

Had a short chat with @cyphar on IRC about my previous blurb. Images probably involve too many moving parts just for managing a few files. A registry to store boxes is something that could be considered in the future (including ACLs?), but definitely not part of the initial implementation; keep it simple for now.

Giving it some more thought, I wonder if COW is really important for boxes; I expect secrets to be just a few files, so would COW give any advantages? (I might be overlooking things, so I'm interested to hear if there are specific use cases for COW) 🐮

dreamcat4 commented May 7, 2015

Hello! Sorry, this proposal has confused me a bit. Were you guys aware that 'Vault' is actually the name of a standalone precompiled Go binary from HashiCorp?

https://www.vaultproject.io/docs/install/index.html

It seems like this proposal duplicates some of that functionality. From their software model, it seems that Docker support ought to be implemented as some kind of 'backend' plugin written for the main vault program, to make it work in Docker containers. Sorry, I'm just a bit confused about whether or not this proposal was meant to integrate with that one. And if it is not, why do we need to duplicate a lot of the generic aspects of the secrets-management functionality?


Contributor

cpuguy83 commented May 7, 2015

@dreamcat4 This proposal far predates vaultproject, which is brand new.
Also agreed: Docker should support a common API for secret-storage backends, but should not itself implement one.

dreamcat4 commented May 7, 2015

@cpuguy83 OK! On that note, [EDIT] I have created a new issue on hashicorp's vault about Docker:

hashicorp/vault#165


Contributor

duglin commented Jun 1, 2015

I wonder if it wouldn't be easier to solve this by simply allowing people to specify --vault <dir>|<file> on certain docker commands (like run and build). Then the specified dir, or file, is r/o mounted into the container (after the specified files are copied from the CLI machine over to the daemon).

The reason I'm suggesting this is that I'm worried that by having a new entity called "vault", with a lifecycle that can be quite long, people might forget about it, which could pose a security issue. Requiring the client to specify the exact files/dir they want to expose into each container, on the other hand, means we can delete those secret files with the container.

e.g. docker run -ti --vault mySecretDir ubuntu bash

Contributor

calavera commented Jun 1, 2015

I'm very 👎 about this idea. Managing secrets should not be Docker's concern.

External volumes can solve this problem in a nice and non-intrusive way. Take for example Keywhiz, a system that gives you strong guarantees for managing secrets. You'll be able to mount a FUSE volume inside your containers and see the secrets by using this external volume:

https://github.com/calavera/docker-volume-keywhiz-fs

This is still experimental, but more improvements are coming that will make it land in a release soon.

Contributor

cpuguy83 commented Jun 1, 2015

Agreed, this should not be docker's concern, and volume plugins enable some specific functionality around secrets.

Member

thaJeztah commented Jun 1, 2015

I agree. At the time this proposal was written, there was no clear strategy for handling secrets in Docker. For lack of alternatives at the time, this was an attempt to get things moving.

Having said that, we still don't have an official roadmap for handling secrets (apart from some rough ideas / concepts), so #13490 is still needed badly.

Contributor

duglin commented Jun 1, 2015

I'm not that familiar with those, but will non-admin users be able to perform these actions? I.e., will it require certain capabilities that might be disabled in certain environments?

pirelenito commented Jun 2, 2015

Hi everyone!

We've come up with a simple solution to this problem: a bash script that, once executed through a single RUN command, downloads private keys from a local HTTP server, executes a given command, and deletes the keys afterwards.

Since we do all of this in a single RUN, nothing gets cached in the image. Here is how it looks in the Dockerfile:

RUN ONVAULT npm install --unsafe-perm

Our first implementation around this concept is available at https://github.com/dockito/vault.

To develop images locally we use a custom development box that runs the Dockito Vault as a service.

The only drawback is that it requires the HTTP server to be running, so no Docker Hub builds.

Let me know what you think :)

I had previously mentioned this project at #6396

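To illustrate why this works, the key property is that fetch, use, and delete all happen inside one RUN step, so the layer that gets committed never contains the key. A minimal shell sketch of that lifecycle (all names hypothetical; a temp file stands in for the download from the local HTTP server):

```shell
#!/bin/sh
# Sketch of the single-RUN secret pattern (names hypothetical).
set -e
SECRET=$(mktemp)                      # 1. "download" the private key
echo "fake-private-key" > "$SECRET"
chmod 600 "$SECRET"
wc -c < "$SECRET" > /dev/null         # 2. run the wrapped command (stand-in)
rm -f "$SECRET"                       # 3. delete the key before the step ends
[ ! -e "$SECRET" ] && echo "secret removed"
```

Because all three steps run in the same layer, committing the resulting container captures only the state after `rm -f`, with no trace of the key.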

Contributor

cyphar commented Jun 3, 2015

This proposal was meant to get the ball rolling on the fact that (because of the layered nature of Docker images) there's no reasonable way for us to store data ethereally during the build process such that it isn't saved in the image layers. Docker simply must provide the ability to tag a directory as "ethereal". Sure, the whole secrets thing may have been a bit overboard. A much better proposal IMO would be for us to be able to specify in a Dockerfile that certain directories are ethereal. This instruction would be a directive to the builder, and would not be added as a layer in the image graph. I'll write up a better proposal in a few weeks (if someone doesn't beat me to it :P).

IMHO, in the sober light of day this proposal doesn't strike me as being particularly practical.
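For concreteness, such a builder directive might look like this in a Dockerfile. This is purely hypothetical syntax: no `EPHEMERAL` instruction exists in Docker; the name and semantics are invented here only to illustrate the idea.

```dockerfile
FROM ubuntu

# Hypothetical directive: everything under /secrets would be visible to
# subsequent RUN steps but excluded from every committed layer.
EPHEMERAL /secrets

COPY id_rsa /secrets/id_rsa
RUN ssh-agent sh -c 'ssh-add /secrets/id_rsa && npm install'

# The resulting image would contain the installed packages,
# but no layer would contain /secrets/id_rsa.
```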

Contributor

duglin commented Jun 3, 2015

@cyphar you may be able to leverage #12594 in your new idea. Consistency with that might be good.

Contributor

duglin commented Jun 3, 2015

@cyphar I have some old code that I was playing with that would mount the build context into the target container as a r/o volume. This allowed the executables running (via RUN cmd) access to files that were not COPY/ADD'd into the container/image. Could this be used to solve this vault/secret issue? Assuming people put those files into the build context.

Contributor

calavera commented Jun 4, 2015

IMHO, in the sober light of day this proposal doesn't strike me as being particularly practical.

Awesome to hear that. I'm going to close this issue so we can move a future conversation somewhere else.

🤘

@calavera calavera closed this Jun 4, 2015

Member

thaJeztah commented Jun 4, 2015

Thanks so much for trying to get the ball rolling @cyphar ❤️

I suggest continuing the discussion in #13490, which can act as a starting point for determining the way forward.

@arrawatia arrawatia referenced this issue in confluentinc/cp-docker-images Jul 1, 2016

Closed

SSL keystore/truststore dirs and configuration values #3

@sergeyklay sergeyklay referenced this issue in hashicorp/vault Jul 13, 2016

Closed

Docker for Vault #1612
