Signed images #2700
👍
A few notes and a micro-spec:

My current idea is to use regular GPG keys, and upgrade the image format to carry a GPG signature of its content. I see two options to upgrade the format:…
A few more notes: the GPG signature offered by git is insufficient, because it relies on SHA1, which is cryptographically weak. The git authors have emphasized several times that git hashes should not be relied on for strong cryptography. So whatever we use should probably be based on SHA256. Since we cannot piggyback on git's signature facility (unless it supports a SHA256 extension, which doesn't seem to be the case), we are free to choose our favorite signature mechanism: namely 1) pgp or 2) x509 certs.
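As a rough illustration of how a detached, SHA256-based GPG signature could work with stock tooling (a sketch only; the layer filename is hypothetical and the exact format is undecided in this thread):

```console
# force a SHA256 digest and emit an ASCII-armored detached signature (layer.tar.asc)
$ gpg --digest-algo SHA256 --detach-sign --armor layer.tar
# verify the detached signature against the original tarball
$ gpg --verify layer.tar.asc layer.tar
```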
Can't we just use GPG standalone? Why would we use PGP? Or is that a typo? Also, let's steer clear of x509 certificates. They can generate a lot of confusion for people, cf. Puppet.
+1 for this. GPG would seem like the way to go, something like how yum in Fedora handles it (asking if you want to import the key, etc.). It seems sketchy to not have a signing mechanism when you're essentially trusting something to be your OS/userland layer.
+1 for GPG signing
On Mon, Jan 06, 2014 at 11:36:20AM -0800, James Turnbull wrote:
This makes the most sense to me. Signing an image, and then stuffing…

Earlier, Solomon Hykes:

This is going to make it hard to have several signatures on the same…

    GET /v1/images/(image_id)/signatures

The signature hashing is just collision-avoidance, so you don't need…
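For concreteness, the signatures endpoint quoted above might be exercised roughly like this (the registry host is hypothetical, the GET endpoint is only a proposal in this thread, and the PUT counterpart is my guess at an upload path, not anything agreed upon):

```console
# list signatures the registry holds for an image (proposed API)
$ curl https://registry.example.com/v1/images/$IMAGE_ID/signatures
# attach a new detached signature to the image (hypothetical counterpart)
$ curl -X PUT --data-binary @layer.tar.asc \
    https://registry.example.com/v1/images/$IMAGE_ID/signatures
```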
While it is possible to use "detached" signature packets, an OpenPGP…

Note in such a scenario, the signed data itself must be included in the…

Regards,
On Tue, Jan 28, 2014 at 10:45:16AM -0800, Eric Windisch wrote:
So you can have a single OpenPGP message with several signature…
On Tue, Jan 28, 2014 at 2:03 PM, W. Trevor King notifications@github.com wrote:

Not necessarily. I was just saying it isn't so black and white. Everyone…

I'm thinking not just of signatures for verifying downloads/uploads to the…

What I'd like to avoid is making the transport of and glue around verifying…

Technically, you could sign a tar and then extend it with the signature at…

Even if you can reach a consensus today, offloading…
As long as OpenPGP supports those goalposts ;-)

Regards,
On Tue, Jan 28, 2014 at 01:41:59PM -0800, Eric Windisch wrote:
You store local signatures (downloaded and locally-created) in…

I think the API I floated above for communicating with the registry is…
On Tue, Jan 28, 2014 at 4:56 PM, W. Trevor King notifications@github.com wrote:

Will there be so many signatures per image that we couldn't just have a…

The index/signature-file split seems excessive. The only advantage I see is…

Image "save" will create a file export for a repository which could be…

Regards,
On Tue, Jan 28, 2014 at 02:11:56PM -0800, Eric Windisch wrote:
Good point. Packing into JSON sounds better.

So attach the JSON signature file to a known location in the saved…
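A minimal sketch of what such a JSON signature file might look like, assuming a map from image id to a list of signature entries (every field name here is hypothetical, not a settled format):

```json
{
  "<image-id>": [
    {
      "keyid": "<signer-key-fingerprint>",
      "signature": "-----BEGIN PGP SIGNATURE-----\n...\n-----END PGP SIGNATURE-----\n"
    }
  ]
}
```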
GPG cryptographic signatures >= SHA256 hashes.

GPG WoT done right (similar to how Linux devs work) really requires in-person signing / exchanging of keys. Since that's too much trouble for most people, curating a "ca bundle" of trusted keys or key ids is the next best thing. Also, we'll probably want a separate…

@ewindisch @wking Also, don't get too fancy trying to over-engineer by over-optimizing it such that regular *nix tools can't operate on it. The format should be very accessible so third-party devs / companies can build compatible packages easily. Otherwise, such a development decision may quickly alienate ops people who have far more experience than you or I do managing many more systems in production, and who might have an idea what scales and what won't. Accessible = more commercial adoption of Docker in enterprise environments that have the money to keep Docker, pun intended, afloat.

* Yes, there are a few sysadmins with more than 40 years of experience who have administered vast swaths of boxes with crunchy business gunk and who see the benefit of supporting 12-factor-like architectures.

In fact, a bold move would be to eventually make it into an RFC... that's not only free PR but shows commitment to open standards.

cc @shykes
A couple of comments. Just signing the tars is not going to be quite all that is needed. Validation of signed images would also be needed, such that when running images there is a use case to only run images whose signatures validate. And that validation would be of the likeness of…

Once signed images are available, there is a business case for having a capability on the public registry to allow rejection of certain key issuers, such that if someone has layered on top of a private-and-signed image, there could be a mechanism to prevent them from pushing private content to the public registry.

On the note of running signed/validated images, it would also provide an operational model that would allow folks to have an internal set of keys, like QA, STAGE, PROD, where the machines in prod, et al., would only run images signed with that key, allowing strictness of promotion of images.

thoughts? cc @shykes
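The QA/STAGE/PROD promotion model could be approximated with environment-specific GPG keyrings: a PROD host verifies against its PROD keyring before loading anything (the keyring path and filenames below are hypothetical):

```console
# refuse to load the image unless its signature verifies against the PROD keyring
$ gpg --no-default-keyring --keyring /etc/docker/keys/prod.gpg \
    --verify app.tar.sig app.tar \
  && docker load -i app.tar
```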
👍
+1
@vbatts Thinking out loud here: verification is expensive because the tar has to be reconstructed, which is a diff of two filesystem trees. Part of the role of the tar is also storing dates, permissions, uids, and other fs-attr data, and providing a diff of those. Images are assumed to be immutable once on disk; part of verification is double-checking that assumption.

So... maybe the act of creating an image on disk should generate a manifest (basically the tar with file contents replaced with a hash, or a flat file, or *), and then operations that deal with comparing/diffing images should use that. Verification then becomes "compare manifest to disk". Still expensive, but at least you can sign the manifest in both places.
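A crude sketch of the manifest idea with standard tools: hash every file in an unpacked layer, then sign the manifest rather than a reconstructed tar (the `rootfs` path is hypothetical, and a real manifest would also have to record permissions, uids, and timestamps, which sha256sum does not):

```console
# build a manifest: one content hash per file, in a stable order
$ find rootfs -type f -print0 | sort -z | xargs -0 sha256sum > manifest
# sign the manifest; verification becomes "compare manifest to disk"
$ gpg --detach-sign --armor manifest
```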
The expense is due to TarSum requiring the tar for a layer to be exported, since at build and run time the layers are laid-out filesystems. No diffing required.

Though the more efficient use of cycles would be to have a 'publish' or 'prepare' step for images, which would allow a manifest of the image to be produced, plus the sum for the image (or its layers). That would be the time the image is signed.
Here's a design proposal I put together based on several discussions. Thoughts? @shykes |
On Tue, May 06, 2014 at 12:25:17PM -0700, Aaron Weitekamp wrote:
With stand-alone sigs [1], you'd just need something like:…

Then you could sign any image with:

    $ docker fingerprint | gpg --detach-sign --armor > .sig

and verify with:

    $ docker fingerprint | gpg --verify .sig -

I think that's simple enough that it's worth separate signature…
On Wed, May 07, 2014 at 12:47:15PM -0700, W. Trevor King wrote:
Actually, I'd want:…

Who cares what the parent tarsum was? Anybody signing the image…
A lot to respond to here, but I agree that getting the parent's tarsums…
You could use signify, if you only want to sign images (it reduces the requirements around chains of trust, etc.). However, I'm +1 for the use of a full GPG implementation if we want to go that route (chains of trust are very useful when managing packages for deployment, as you can have essentially "root" keys which are used to sign other keys).
Wouldn't the simplest approach be to make the image/layer ids be a hash based on the content? You may need to ignore timestamp changes to minimize "fake" changes. There would be the caveat of a user using…

You then sign that hash with GPG or the agreed-upon signing mechanism, and you effectively sign all the content of the layer as well. Verification of any layer (e.g., that it is from a trusted user and its content is unaltered) could be configured to be done on each…

This would have the added benefit of knowing when a build on the hub actually changes content.
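In the spirit of this suggestion, a minimal sketch: derive a content digest from an exported image and sign only the digest (the image name is hypothetical, and a real implementation would want something timestamp-insensitive like TarSum rather than a raw sha256 of the tar stream):

```console
# digest the exported image and sign the digest itself
$ docker save myorg/myimage | sha256sum | cut -d' ' -f1 > image.digest
$ gpg --clearsign image.digest
```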
There is already the TarSum that does this. We have moved ahead with having…
Depends on hashes as layer ids: #6959
Note that gpg is not sufficient for properly handling signatures for Docker. It could be a piece in the chain, but the WoT does not solve a very important problem, namely: does key so-and-so have permission to be signing for a container named Y? Simply using gpg would mean that, in order to cryptographically validate images, you have to trust every signer for all images globally. You might try taking a look at "The Update Framework". We're considering using it for PyPI, which has similar problems, and it also solves problems like "a MITM attacker prevents a user from seeing there is an updated copy of something by blocking the attempts to reach out".
On Thu, Oct 16, 2014 at 07:04:43PM -0700, Donald Stufft wrote:
I don't understand this. If I trust you to sign one image/tag…

Yes, you'd have to do this if you wanted to trust all the signed…
GPG would work if we have a set of keys registered to accounts; then you verify that the signature was signed by one of those keys. GPG has solved this problem already; I'm not sure why we don't just leverage it. The purpose of the WoT is not to provide authority for someone to sign something, it is to provide assurance that the key's owner is who the sender says they are. This is not really relevant in the Docker registry model, because we already have a central authority on "who's who". Revocation and other such things are going to be a pain, though; I can't really think of any nice way of pushing revocation lists to clients.
@cyphar A solution was recently announced and demoed at DockerCon 2015: the new 'notary' tool / library. You can see it in action during the DockerCon 2015 keynote video.
This is a strong assumption. Can you always trust the registry? Should you have to?
If you can trust the registry, then just use TLS.
A proposal to introduce https://github.com/docker/notary integration is in the works and should be linked to this issue once available.
@stevvooe Well, is there another way of giving authority to certain people? You can't have your cake and eat it. If you want to make sure that some random person doesn't sign bad images and send them to people, then you need a central authority (which you then must trust) to give you the list of people who own certain projects. The other option is for people to manually import keys they trust into their keyring -- and then the onus is on the user to make sure they actually trust the person whose key they are importing. I prefer the latter, because it's how GPG is meant to work, but the issue is that you now need to be doubly sure that the person who signed it (and obviously has set…

I guess you could argue that using the WoT to map a key to a project or account might work (you get people to sign statements that say "this project belongs to this key -- this statement will expire on this date"), but that relies on people not randomly signing others as being owners of things. And it also relies on an active community to do so.
Ultimately, I don't think that using the WoT model makes sense for Docker. It is a monolithic binary that manages containers and pulls images from a single registry. Nothing about this model seems to be a good fit for a distributed trust system like the WoT.
@cyphar I'd recommend you check out https://github.com/docker/notary. These issues are being considered in detail within that project. While there will be a central authority, we want to decouple that authority from the registry. This ensures the registry serves bits while something else takes care of the trust aspect. The two services have very different scaling and security requirements, so decoupling them has many benefits.
As a keen observer of this thread: what makes the following systemd unit approach "stupid"?

[Service]
TimeoutSec=0
ExecStartPre=-/usr/bin/sh -c "source /etc/profile.d/etcdctl.sh && /usr/bin/wget -nv -N -P /opt $(etcdctl --no-sync get /services/pkg)/couchbase.container.tar.gz"
ExecStartPre=-/usr/bin/sh -c "source /etc/profile.d/etcdctl.sh && /usr/bin/wget -N -P /opt $(etcdctl --no-sync get /services/pkg)/couchbase.container.tar.gz.sig"
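# hard-fail startup unless the downloaded image's signature verifies against the trusted key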
ExecStartPre=/usr/bin/gpg --verify --trusted-key 1FATBOY12345678 /opt/couchbase.container.tar.gz.sig
ExecStartPre=/usr/bin/docker load -i /opt/couchbase.container.tar.gz
ExecStartPre=-/usr/bin/mkdir /home/core/couchbase
ExecStartPre=/usr/bin/chown 999:999 /home/core/couchbase
ExecStartPre=-/usr/bin/docker kill couchbase
ExecStartPre=-/usr/bin/docker rm -f couchbase
ExecStart=-/usr/bin/sh -c 'source /etc/profile.d/etcdctl.sh && /usr/bin/docker run --name couchbase --net="host" -v /home/core/couchbase:/opt/couchbase/var -e ETCDCTL_PEERS=http://10.10.2.2:4001 --ulimit nofile=40960:40960 --ulimit core=100000000:100000000 --ulimit memlock=100000000:100000000 andrewwebber/couchbase'
ExecStop=/usr/bin/docker kill --signal=SIGTERM couchbase
Restart=always
RestartSec=20
@andrewwebber I am not an expert on the attacker model, but the TUF documentation can elaborate on the problems with a simple GPG approach. The approach in that unit file is not "stupid"; it just doesn't handle key distribution, trusted naming, or secure updating.
@andrewwebber The biggest problem with simple GPG signing is that once a piece of content has been signed, that signature is valid for as long as the key is valid. In the context of containers (or any software), it behooves us to be able to revoke the validity of a signature when a major vulnerability is found, thus preventing any further installations and enabling people already holding a copy to detect that they should upgrade. TUF handles this via the signed targets list (removing a target from this list and re-signing indicates the target is no longer valid) and its layered freshness guarantees (the validity period of signatures is independent of key validity).
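To make the TUF mechanics concrete, here is a heavily simplified sketch of a signed targets role: deleting an entry and re-signing revokes that target, and the short-lived expires field bounds the metadata's validity independently of key lifetime (all names and values are illustrative only):

```json
{
  "signed": {
    "_type": "targets",
    "expires": "2015-08-01T00:00:00Z",
    "version": 7,
    "targets": {
      "myorg/myimage:latest": {
        "length": 4194304,
        "hashes": { "sha256": "<digest-of-image-content>" }
      }
    }
  },
  "signatures": [
    { "keyid": "<targets-key-id>", "sig": "<signature-over-signed-block>" }
  ]
}
```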
We've put together a proposal to have Docker use The Update Framework as implemented by notary. Please take a look at the design document. Comments welcome!
👍
Docker should support signing images after building them. This allows for a "chain of trust" where the content and origin of an image can be verified cryptographically regardless of how the image was distributed.