Signed images #2700

Closed · shykes opened this issue Nov 14, 2013 · 45 comments
Labels: area/builder, area/distribution, kind/feature

Comments

@shykes (Contributor) commented Nov 14, 2013

Docker should support signing images after building them. This allows for a "chain of trust" where the content and origin of an image can be verified cryptographically regardless of how the image was distributed.

@patcito commented Nov 14, 2013

👍

@shykes (Contributor, Author) commented Nov 14, 2013

A few notes and a micro-spec:

  • Docker images carry with them an optional signature. The signature is embedded in the image format, not in the registry download protocol.
  • Docker can be provided a private key to use when building
  • Docker can be provided multiple public keys to trust
  • Docker can be configured to either issue a warning when downloading or running untrusted images, or flat-out refuse to run them.

My current idea is to use regular GPG keys, and upgrade the image format to carry a GPG signature of its content. I see 2 options to upgrade the format:

  1. An incremental upgrade to the current layer format. We add a signature file at a standard path in the layer (see the sketch below). This doesn't break backward compatibility: old implementations see an extra regular file; new implementations interpret it as a signature.

  2. A switch to an existing format which already supports signatures. The most obvious candidate here is git packfiles, which support GPG signatures. Using git is already being considered for other reasons, so we could peg image signing to that format upgrade. The downside is introducing fragmentation with 2 different image formats - existing Docker installations could no longer use images in the new format (and as of the current version, would not be able to gracefully inform the user of the situation).
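
As a rough illustration of option 1, the layer tarball could carry a detached signature at a well-known path. The path used below is purely hypothetical; an old client would simply unpack it as a regular file:

# Hypothetical layer listing for option 1; the path .signature/layer.asc is illustrative only
$ tar tf layer.tar
./
./etc/
./usr/bin/myapp
./.signature/layer.asc

# An old client extracts .signature/layer.asc as a normal file;
# a signature-aware client verifies it before trusting the layer.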

@shykes (Contributor, Author) commented Jan 6, 2014

A few more notes: the GPG signing offered by git is insufficient, because it relies on SHA-1, which is cryptographically weak. The git authors have emphasized several times that git hashes should not be relied on for strong cryptography.

So whatever we use, it should probably be based on SHA256.

Since we cannot piggyback on git's signature facility (unless it supports a SHA256 extension, which doesn't seem to be the case), we are free to choose our preferred signature mechanism: namely 1) PGP or 2) X.509 certs.

@jamtur01 (Contributor) commented Jan 6, 2014

Can't we just use GPG standalone? Why would we use PGP? Or is that a typo? Also, let's steer clear of X.509 certificates. They can generate a lot of confusion for people, cf. Puppet.

@aidenbell:

+1 for this. GPG would seem like the way to go, something like how yum in Fedora handles it (asking if you want to import the key, etc.). Seems sketchy to not have a signing mechanism when you're essentially trusting something to be your OS/userland layer.

@jdef (Contributor) commented Jan 28, 2014

+1 for GPG signing

@wking commented Jan 28, 2014

On Mon, Jan 06, 2014 at 11:36:20AM -0800, James Turnbull wrote:

Can't we just use GPG standalone?

This makes the most sense to me. Signing an image, and then stuffing
the signature into the image itself sounds like a headache. Having
hashed data (for the signature) and unhashed data (the signature
itself) in the same space (like RFC 4880's signature packets [1]) is
annoying. Also:

Earlier, Solomon Hykes:

  1. An incremental upgrade to the current layer format. We add a
    signature file at a standard path in the layer. This doesn't break
    backward compatibility: old implementations see an extra regular file;
    new implementations interpret it as a signature.

This is going to make it hard to have several signatures on the same
image (e.g., my old key expired and I want to re-sign that important
image). How about distributing detached signatures on request with
each image? Then you can shell out to GPG (or use GPGME) for signing
and verification without mucking about with the image format. You'll
have to extend the registry API to distribute the sigs, but that
sounds saner than extending the image. How about:

GET /v1/images/(image_id)/signatures
Returns:
[(signature-hash-1), (signature-hash-2), …]
PUT /v1/images/(image_id)/signature
GET /v1/images/(image_id)/signature/(signature-hash)
DELETE /v1/images/(image_id)/signature/(signature-hash)

The signature hashing is just collision-avoidance, so you don't need
cryptographic security. Use whichever hash you like. Signing and
verification happens in GPG (or another OpenPGP implementation), so the
signer and verifier can work out the cryptographic parts of the
exchange between themselves. The registry just acts as a go-between.
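
A minimal sketch of how a client might exercise those proposed endpoints (the registry host, image ID, and signature hash below are placeholders; the endpoints themselves exist only in the proposal above):

$ gpg --detach-sign --armor --output image.sig image.tar
$ curl -X PUT --data-binary @image.sig \
    https://registry.example.com/v1/images/$IMAGE_ID/signature

$ curl https://registry.example.com/v1/images/$IMAGE_ID/signatures
$ curl -o image.sig \
    https://registry.example.com/v1/images/$IMAGE_ID/signature/$SIG_HASH
$ gpg --verify image.sig image.tar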

@ewindisch (Contributor):

Earlier, Solomon Hykes:

  1. An incremental upgrade to the current layer format. We add a
    signature file at a standard path in the layer. This doesn't break
    backward compatibility: old implementations see an extra regular file;
    new implementations interpret it as a signature.

This is going to make it hard to have several signatures on the same
image (e.g., my old key expired and I want to re-sign that important
image).

While it is possible to use "detached" signature packets, an OpenPGP
message may technically be constructed having multiple signature packets.
Although to be honest, I'm not sure how well implementations deal with it
as section 11.3 of RFC 4880 provides an example of a Signed Message which
includes only a single signature packet. Still, there is nothing that
prevents an implementation from supporting the inclusion and verification
of multiple signatures.

Note in such a scenario, the signed data itself must be included in the
OpenPGP message. That might be acceptable if we signed hashes, for instance.

Regards,
Eric Windisch

@wking commented Jan 28, 2014

On Tue, Jan 28, 2014 at 10:45:16AM -0800, Eric Windisch wrote:

Note in such a scenario, the signed data itself must be included in
the OpenPGP message. That might be acceptable if we signed hashes,
for instance.

So you can have a single OpenPGP message with several signature
packets if you get everyone to agree on which hash to use? I don't
see why you'd want to make that decision in Docker itself, when you
could offload it to the clients and their local OpenPGP implementation
/ configuration. Even if you can reach a consensus today, offloading
hash selection future-proofs your API as the “safe hash” goalposts
shift in the future.

@ewindisch (Contributor):

On Tue, Jan 28, 2014 at 2:03 PM, W. Trevor King notifications@github.com wrote:

On Tue, Jan 28, 2014 at 10:45:16AM -0800, Eric Windisch wrote:

Note in such a scenario, the signed data itself must be included in
the OpenPGP message. That might be acceptable if we signed hashes,
for instance.

So you can have a single OpenPGP message with several signature
packets if you get everyone to agree on which hash to use?

Not necessarily. I was just saying it isn't so black and white. Everyone
could agree on the hash algorithm for that individual image (and if they
don't agree, they don't sign), or have several hashes per image. However,
I'd like to avoid complexity and this path might be too far away from that
goal. I agree detached signatures are preferable.

I'm thinking not just of signatures for verifying downloads/uploads to the
registry, but of local images and those that have been exported.

What I'd like to avoid is making the transport of and glue around verifying
those signatures overly complex. Back to Solomon's "#1" suggestion, I agree
that you can't sign the image and stuff data back into it. That simply
doesn't work. You can wrap everything (à la a full OpenPGP message) or you
can ship the data alongside it.

Technically, you could sign a tar and then extend it with the signature at
the end, then infer the originally signed tar by truncating the file... but
again, we're getting into complexity. That WOULD satisfy the backwards
compatibility requirement, but would make verification and signing a bit
messy.

Even if you can reach a consensus today, offloading
hash selection future-proofs your API as the "safe hash" goalposts
shift in the future.

As long as OpenPGP supports those goalposts ;-)

Regards,
Eric Windisch

@wking commented Jan 28, 2014

On Tue, Jan 28, 2014 at 01:41:59PM -0800, Eric Windisch wrote:

I'm thinking not just of signatures for verifying downloads/uploads
to the registry, but of local images and those that have been
exported.

You store local signatures (downloaded and locally-created) in
/var/lib/docker/signatures/(signature-hash) and keep an image-side
index (which signatures go with this image) in JSON in an image's
directory (/var/lib/docker/graph/(image-hash)/signatures?).

What I'd like to avoid is making the transport of and glue around
verifying those signatures overly complex.

I think the API I floated above for communicating with the registry is
fairly simple, as is local storage. Are there other kinds of
transport besides local ↔ registry?

@ewindisch (Contributor):

On Tue, Jan 28, 2014 at 4:56 PM, W. Trevor King notifications@github.com wrote:

On Tue, Jan 28, 2014 at 01:41:59PM -0800, Eric Windisch wrote:

I'm thinking not just of signatures for verifying downloads/uploads
to the registry, but of local images and those that have been
exported.

You store local signatures (downloaded and locally-created) in
/var/lib/docker/signatures/(signature-hash) and keep an image-side
index (which signatures go with this image) in JSON in an image's
directory (/var/lib/docker/graph/(image-hash)/signatures?).

Will there be so many signatures per image that we couldn't just have a
single JSON or other file that contained the image's signatures directly?

The index/signature-file split seems excessive. The only advantage I see is
that the gpg command-line tool could access the files directly, something
of limited use since the signature hashes couldn't be easily mapped to the
images by end-users.

What I'd like to avoid is making the transport of and glue around
verifying those signatures overly complex.

I think the API I floated above for communicating with the registry is
fairly simple, as is local storage. Are there other kinds of
transport besides local ↔ registry?

Image "save" will create a file export for a repository which could be
transported by SSH, USB stick, whatever; then imported.

Regards,
Eric Windisch

@wking commented Jan 28, 2014

On Tue, Jan 28, 2014 at 02:11:56PM -0800, Eric Windisch wrote:

The index/signature-file split seems excessive. The only advantage I
see is that the gpg command-line tool could access the files
directly, something of limited use since the signature hashes
couldn't be easily mapped to the images by end-users.

Good point. Packing into JSON sounds better.

Image "save" will create a file export for a repository which could
be transported by SSH, USB stick, whatever; then imported.

So attach the JSON signature file to a known location in the saved
file, and then pull it back out during load's unpacking. Maybe that's
what @shykes was suggesting originally, and I just got confused
between layers and saved bundles?
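
A rough sketch of what that could look like with today's tooling (the signatures.json name and its location inside the saved tarball are hypothetical; docker save does not currently emit such a file):

$ docker save -o myimage.tar myrepo/myimage:latest
$ tar -xOf myimage.tar signatures.json
{
  "<image-id>": [
    "-----BEGIN PGP SIGNATURE-----\n...\n-----END PGP SIGNATURE-----"
  ]
}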

@skull-squadron (Contributor):

GPG cryptographic signatures >= SHA256 hashes

GPG WoT done right (similar to how Linux devs work) really requires in-person signing / exchanging of keys. Since that's too much trouble for most people, curating a "ca bundle" of trusted keys or key IDs is the next best thing. Also, you'll probably want a separate --homedir so it doesn't muck with the user's keychain. Underlying detached signatures are the way to go. To do that, a meta container (say a .dock - a plain tar containing metadata.json, container.txz, container.txz.asc, etc.) would help, at the expense of breakage and with the benefit of "future proof" extensibility. This is similar to how both .debs and .rpms are organized. I've BTDTBTTS (see 👕) on this with another commercial project in the evil empire and this worked well.
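
For illustration, a .dock wrapper along those lines might look like the following (the file names mirror the comment's example and are hypothetical):

$ tar tf myimage.dock
metadata.json
container.txz
container.txz.asc

# Verification shells out to gpg against a curated keyring kept separate from the user's own
$ tar xf myimage.dock
$ gpg --homedir /etc/docker/gnupg --verify container.txz.asc container.txz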

@ewindisch @wking Also, don't get too fancy trying to over-engineer or over-optimize it such that regular *nix tools can't operate on it. The format should be very accessible so third-party devs / companies can build compatible packages easily. Otherwise, such a decision may quickly alienate ops people who have far more experience than you or I do managing many more systems in production, and who might have an idea of what scales and what won't. Accessible = more commercial adoption of Docker in enterprise environments that have money to keep Docker, pun intended, afloat.

* Yes, there are a few sysadmins with more than 40 years of experience who have administered vast swaths of boxes with crunchy business gunk and who see the benefit of supporting 12-factor-like architectures.

In fact, a bold move would be to eventually make it into an RFC... that's not only free PR but shows commitment to open standards. cc @shykes

@unclejack modified the milestones: 1.0, 0.9.0 on Mar 3, 2014
@vbatts (Contributor) commented Mar 17, 2014

A couple of comments.

Just signing the tars is not going to be quite all that is needed. Validation of signed images would also be needed, such that when running images, there is a use case to run only images whose signatures validate. That validation would be akin to rpm -V <name>, which can confirm whether the contents of the rpm have been tampered with since being installed.
Currently this would be a terribly expensive operation, though I'm not sure what other option there is that would provide the same assurance. I am actively working to find a solution for this, for our use case.

Once signed images are available, there is a business case for having a capacity on the public registry to allow rejection of certain key issuers, such that if someone has layered on top of a private-and-signed image, there could be a mechanism to prevent them from pushing private content to the public registry.

On the note of running signed/validated images, it would also provide an operational model that would allow folks to have an internal set of keys, like QA, STAGE, and PROD, where the machines in prod, et al., would only run images signed with the corresponding key, allowing strict promotion of images.

thoughts? cc @shykes

@nikicat commented Mar 23, 2014

👍

@jdef (Contributor) commented Mar 24, 2014

+1

--sent from my phone

@smarterclayton (Contributor):

@vbatts Thinking out loud here: verification is expensive because the tar has to be reconstructed, which is a diff of two filesystem trees. Part of the role of the tar is also storing dates, permissions, uids, and other fsattr data and providing a diff of those. Images are assumed to be immutable once on disk - part of verification is double-checking that assumption. So... maybe the act of creating an image on disk should generate a manifest (basically the tar with file contents replaced with a hash, or a flat file, or *), and then operations that deal with comparing/diffing images should use that. Verification then becomes "compare manifest to disk". Still expensive, but at least you can sign the manifest in both places.
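
A very rough sketch of that kind of manifest, covering file contents only (a real one would also record permissions, ownership, and timestamps; the file names below are illustrative):

# Hypothetical manifest: one line per file with its hash, sorted for determinism
$ find rootfs -type f -exec sha256sum {} \; | sort > manifest
$ gpg --detach-sign --armor manifest

# Verification: re-walk the tree and compare against the signed manifest
$ gpg --verify manifest.asc manifest
$ find rootfs -type f -exec sha256sum {} \; | sort | diff - manifest && echo "image unmodified"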

@vbatts (Contributor) commented Apr 21, 2014

The expense is due to TarSum requiring the tar for a layer to be exported, since the build and runtime are laid-out filesystems. No diff'ing required. A more efficient use of cycles would be to have a 'publish' or 'prepare' step for images, which would allow a manifest of the image to be produced, along with the sum for the image (or layers). That would be the time that the image is signed.
/cc @smarterclayton

@aweiteka commented May 6, 2014

Here's a design proposal I put together based on several discussions. Thoughts? @shykes

@wking commented May 7, 2014

On Tue, May 06, 2014 at 12:25:17PM -0700, Aaron Weitekamp wrote:

Here's a design proposal I put together based on several
discussions.

  • Decouple image name and location +1

  • Deterministically generated image IDs +1

  • Maintainer fingerprint in Dockerfile -1

    For example, this won't let you sign trusted builds, which have an
    arbitrary Dockerfile maintainer but are all built by Docker, Inc.
    I'd like the option to have several sigs on a single image [1].
    Signing the Dockerfile should be independent of building it, and
    your 'docker build --sign …' should just be for convenience.

  • TO defines the layer or application name -1

    By analogy with Git, I like signed tags that can be attached to
    commits (images). I want to sign the deterministic image ID with a
    name I pick (so the signature asserts “Trevor thinks image <image-id> is
    <trevor's-chosen-namespace>/<trevor's-chosen-image-name>:<trevor's-chosen-image-tag>”),
    not sign the already-named image (which would assert “Trevor agrees
    that <image-id> is <namespace>/<image-name>:<tag>”).

  • SOURCE to optionally point to source repository +1

  • DESCRIPTION to automatically supply image description +1

    This would allow us to push descriptions with the existing API,
    which is something docker-registry needs [2].

  • META as an arbitrary list of key:value pairs +1

    Although I'd use META <local-path> to load JSON from the
    Dockerfile context. A list of key/value pairs doesn't make sense to
    me ;).

  • Squashing build artifact layers so a single, logical image layer
    is the result of the build -1

    I think this squashing is orthogonal to signed images. If you can
    sign one squashed image, you can also sign each image in the
    unsquashed stack. Squashing just makes verification cheaper (fewer
    tarsums), which is nice, but should be optional.

  • Storing the build timestamp as metadata -1

    I don't see how this relates to signing at all.

@wking commented May 7, 2014

With stand-alone sigs [1], you'd just need something like:

$ docker fingerprint <image>

Then you could sign any image with:

$ docker fingerprint <image> | gpg --detach-sign --armor > <image>.sig

and verify with:

$ docker fingerprint <image> | gpg --verify <image>.sig -

I think that's simple enough that it's worth separate signature
entries in the registry.

@wking commented May 7, 2014

On Wed, May 07, 2014 at 12:47:15PM -0700, W. Trevor King wrote:

With stand-alone sigs [1], you'd just need something like:

$ docker fingerprint <image>


Actually, I'd want:

$ docker fingerprint <image>
<namespace>/<name>:<tag> <tarsum>

Who cares what the parent tarsum was? Anybody signing the image
should verify to their own satisfaction that the parent (if a parent
exists) is valid before signing images built from that parent. For
example, if I 'docker import' today's Gentoo stage3 tarball and I
trust the “Gentoo Linux Release Engineering (Automated Weekly Release
Key)” used to sign it, I can use my scheme to sign the resulting image
as wking/gentoo:2014-05-07.

@vbatts (Contributor) commented May 7, 2014

A lot to respond to here, but I agree that getting the parent's tarsums
won't be needed as much with a deterministic hash ID and a tarsum of the
current layer that includes a reference to the parent hash.

@creack modified the milestones: 1.1, 1.0 on May 12, 2014
@cyphar (Contributor) commented May 13, 2014

You could use signify, if you only wanted to sign images (it reduces the requirements of chains of trust, etc).

However, I +1 for the use of a full GPG implementation if we want to go that route (chains of trust are very useful when managing packages for deployment, as you can have essentially "root" keys which are used to sign other keys).

@yosifkit (Contributor) commented Jul 8, 2014

Wouldn't the simplest approach be to make the image/layer IDs be a hash based on the content? You may need to ignore timestamp changes to minimize "fake" changes. There would be the caveat of a user using docker commit after only changing the timestamp of a file, and that would need to be addressed.

You then sign that hash with GPG or the agreed-upon signing mechanism and you effectively sign all the content of the layer as well. Verification of any layer (e.g., that it is from a trusted user and has unaltered content) could be configured to be done on each docker pull or through a manual process like docker verify imageid. This would be an expensive process (tarsum) but would ensure that the layer is the correct content.

This would have the added benefit of knowing when a build on the hub actually changes content.
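
A rough sketch of that flow with plain GPG over a content hash (docker verify above is the commenter's proposed command, not an existing one; the file names here are illustrative, and a real implementation would hash via TarSum rather than a plain sha256 of the tarball):

$ sha256sum layer.tar | cut -d' ' -f1 > layer.id
$ gpg --detach-sign --armor --output layer.id.asc layer.id

# Anyone holding the signer's public key can verify the signed hash...
$ gpg --verify layer.id.asc layer.id
# ...and that the layer content still matches it
$ test "$(sha256sum layer.tar | cut -d' ' -f1)" = "$(cat layer.id)" && echo "content matches"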

@vbatts (Contributor) commented Jul 8, 2014

There is already the TarSum that does this. We have moved ahead with a
utility that does exactly this, using TarSum. See
http://github.com/vbatts/docker-utils for the dockertarsum command.

@ewindisch (Contributor):

Depends on hashes as layer ids: #6959

@dstufft commented Oct 17, 2014

Note that GPG is not sufficient for properly handling signatures for Docker. It could be a piece in the chain, but the WoT does not solve a very important problem, namely: does key so-and-so have permission to sign for a container named Y? Simply using GPG would mean that, in order to cryptographically validate images, you have to trust everyone for all images globally.

You might try taking a look at "The Update Framework". We're considering using it for PyPI, which has similar problems, and it also solves problems like "a MITM attacker prevents a user from seeing there is an updated copy of something by blocking the attempts to reach out".

@wking commented Oct 17, 2014

On Thu, Oct 16, 2014 at 07:04:43PM -0700, Donald Stufft wrote:

Note that gpg is not sufficient for properly handling signatures for
docker. It could be a piece in the chain but the WOT does not solve
a very important problem, namely does key so and so have permission
to be signing for a container named Y.

I don't understand this. If I trust you to sign one image/tag
association, then I trust you to sign any image/tag association. You
may not be authorized to push the ‘debian’ image to the registry, but
if I trust you to sign responsibly, I see no reason that I shouldn't
trust your signature for ‘debian:6.0.10’.

Simply using gpg would mean that you can only have the problem where
in order to cryptographically validate images you have to trust
everyone for all images globally.

Yes, you'd have to do this if you wanted to trust all the signed
images folks had pushed to the registry. But why would you want to do
that? I'd only trust images that were signed by people I knew to be
conscientious signers. Or images signed by people those people vouched
for.

@jessfraz added the kind/feature label on Feb 26, 2015

@cyphar (Contributor) commented Jun 30, 2015

GPG would work if we have a set of keys registered to accounts. Then you have to verify that the signature was signed by one of those keys. GPG has solved this problem already; I'm not sure why we don't just leverage it. The purpose of the WoT is not to provide authority for someone to sign something; it is to provide assurance that the key's owner is who the sender says they are. This is not really relevant in the Docker registry model, because we already have a central authority on "who's who". Revocation and other such things are going to be a pain though, but I can't really think of any nice way of pushing revocation lists to clients.
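
A minimal sketch of that model, assuming the registry (or some other authority) publishes the keys registered to an account and the client keeps them in a dedicated keyring (the key file and keyring path are illustrative):

$ gpg --no-default-keyring --keyring ~/.docker/trusted.gpg --import publisher.pub

$ gpg --no-default-keyring --keyring ~/.docker/trusted.gpg \
      --verify image.tar.asc image.tar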

@dreamcat4:

@cyphar The solution was recently announced and demoed at DockerCon 2015: the new 'notary' tool / library. You can see it in action during the DockerCon 2015 keynote video.

https://github.com/docker/notary

@stevvooe (Contributor):

@cyphar

This is not really relevant in the Docker registry model, because we already have a central authority on "who's who". Revocation and other such things are going to be a pain though, but I can't really think of any nice way of pushing revocation lists to clients.

This is a strong assumption. Can you always trust the registry? Should you have to?

@dstufft commented Jun 30, 2015

If you can trust the registry, then just use TLS.

@ewindisch (Contributor):

A proposal to introduce https://github.com/docker/notary integration is in the works and should be linked to this issue once available.

@cyphar (Contributor) commented Jul 1, 2015

@stevvooe Well. Is there another way of giving authority to certain people? You can't have your cake and eat it. If you want to make sure that some random person doesn't sign bad images and send them to people then you need to have a central authority (that you then must trust) which will give you the list of people who own certain projects. The other option is for people to manually import keys they trust into their keyring -- and then the onus is on the user to make sure they actually trust the person whose key they are importing.

I prefer the latter, because it's how GPG is meant to work, but the issue is that you now need to be doubly sure that the person who signed it (and obviously has set MAINTAINER to reference them) is a person you trust to sign that particular image. Having a central authority fixes that problem, but then you need to trust the authority.

I guess you could argue that using WOT to map a key to a project or account might work (you get people to sign statements that say "this project belongs to this key -- this statement will expire on this date"), but that relies on people not randomly signing others as being owners of things. And it also relies on an active community to do so.

@cyphar (Contributor) commented Jul 1, 2015

Ultimately, I don't think that using the WoT model makes sense for Docker. We are a monolithic binary that manages containers and pulls images from a single registry. Nothing about this model seems to be a good fit for a distributed trust system like the WoT.

@stevvooe (Contributor) commented Jul 2, 2015

@cyphar I'd recommend you check out https://github.com/docker/notary. These issues are being considered in detail within that project.

While there will be a central authority, we want to decouple that authority from the registry. This makes sure the registry serves bits while something else takes care of the trust aspect. The two services have very different scaling and security requirements, so decoupling them has many benefits.

@andrewwebber:

As a keen observer of this thread, what makes the following systemd unit approach stupid?
Is this thread mainly about unifying the whole approach behind the docker CLI, or about much more than that?

[Service]
TimeoutSec=0
# Fetch the image tarball and its detached signature from the location stored in etcd
ExecStartPre=-/usr/bin/sh -c "source /etc/profile.d/etcdctl.sh && /usr/bin/wget -nv -N -P /opt $(etcdctl --no-sync get /services/pkg)/couchbase.container.tar.gz"
ExecStartPre=-/usr/bin/sh -c "source /etc/profile.d/etcdctl.sh && /usr/bin/wget -N -P /opt $(etcdctl --no-sync get /services/pkg)/couchbase.container.tar.gz.sig"
# Verify the GPG signature before loading the image; the unit fails here if verification fails
ExecStartPre=/usr/bin/gpg --verify --trusted-key 1FATBOY12345678 /opt/couchbase.container.tar.gz.sig
ExecStartPre=/usr/bin/docker load -i /opt/couchbase.container.tar.gz
# Prepare the data directory and clear out any previous container instance
ExecStartPre=-/usr/bin/mkdir /home/core/couchbase
ExecStartPre=/usr/bin/chown 999:999 /home/core/couchbase
ExecStartPre=-/usr/bin/docker kill couchbase
ExecStartPre=-/usr/bin/docker rm -f couchbase
ExecStart=-/usr/bin/sh -c 'source /etc/profile.d/etcdctl.sh && /usr/bin/docker run --name couchbase --net="host" -v /home/core/couchbase:/opt/couchbase/var -e ETCDCTL_PEERS=http://10.10.2.2:4001 --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 andrewwebber/couchbase'
ExecStop=/usr/bin/docker kill --signal=SIGTERM couchbase
Restart=always
RestartSec=20

@stevvooe (Contributor) commented Jul 2, 2015

@andrewwebber I am not an expert on the attacker model, but the TUF documentation can elaborate on the problems with a simple GPG approach.

The approach in that unit file is not "stupid". It just doesn't handle key distribution, trusted naming, and secure updating.

@endophage (Contributor):

@andrewwebber The biggest problem with using simple GPG signing is that once a piece of content has been signed, that signature is valid for as long as the key is valid. In the context of containers (or any software) it behooves us to be able to revoke the validity of a signature in the case that a major vulnerability is found, thus preventing any further installations and enabling people already holding a copy to detect they should upgrade.

TUF handles this via the signed targets list (removing a target from this list and re-signing indicates the target is no longer valid) and its layered freshness guarantees (the validity period of signatures is independent of key validity).

@NathanMcCauley (Contributor):

We've put together a proposal to have Docker use The Update Framework as implemented by notary. Please take a look at the design document. Comments welcome!

@icecrime removed this from the next milestone on Jul 17, 2015
@bfirsh (Contributor) commented Aug 13, 2015

@bfirsh closed this as completed on Aug 13, 2015
@cyphar (Contributor) commented Aug 13, 2015

👍 :shipit:
