
Image Signing Support #30603

Closed
zhouhaibing089 opened this issue Aug 15, 2016 · 56 comments
Labels
area/kubectl lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@zhouhaibing089
Contributor

zhouhaibing089 commented Aug 15, 2016

Proposal #27129 states that we should be able to perform an image review. For example, to ensure that an image does not have any vulnerabilities, we may set up a Clair instance and enable an admission controller that queries its API to verify that the image is safe to run.

However, there is one more thing to consider: Docker Content Trust. It is used to ensure we always get the correct image, where "correct" means that we are pulling the image from the right source and that it is exactly what was pushed by trusted parties (including CI).

It is not clear to me how this would be integrated. For example, the DockerInterface may introduce another method, something like VerifyImage, or it could be part of PullImage.

cc @erictune @liggitt

@k8s-github-robot k8s-github-robot added area/kubectl sig/node Categorizes an issue or PR as relevant to SIG Node. labels Aug 15, 2016
@nhlfr

nhlfr commented Aug 17, 2016

But Kubernetes is already using Docker Content Trust. Try the following pod:

apiVersion: v1
kind: Pod
metadata:
  name: trusttest
  labels:
    app: trusttest
spec:
  containers:
    - name: trusttest
      image: docker/trusttest:latest
$ ./cluster/kubectl.sh create -f ~/playground/k8s/trusttest.yaml 
pod "trusttest" created
$ ./cluster/kubectl.sh get pods --watch
NAME        READY     STATUS              RESTARTS   AGE
trusttest   0/1       ContainerCreating   0          6s
NAME        READY     STATUS         RESTARTS   AGE
trusttest   0/1       ErrImagePull   0          11s
^C%
$ ./cluster/kubectl.sh describe pod trusttest
Name:       trusttest
Namespace:  default
Node:       127.0.0.1/127.0.0.1
Start Time: Wed, 17 Aug 2016 08:57:31 +0200
Labels:     app=trusttest
Status:     Pending
IP:     172.17.0.2
Controllers:    <none>
Containers:
  trusttest:
    Container ID:   
    Image:      docker/trusttest:latest
    Image ID:       
    Port:       
    State:      Waiting
      Reason:       ImagePullBackOff
    Ready:      False
    Restart Count:  0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-ta5lj (ro)
    Environment Variables:  <none>
Conditions:
  Type      Status
  Initialized   True 
  Ready     False 
  PodScheduled  True 
Volumes:
  default-token-ta5lj:
    Type:   Secret (a volume populated by a Secret)
    SecretName: default-token-ta5lj
QoS Class:  BestEffort
Tolerations:    <none>
Events:
  FirstSeen LastSeen    Count   From            SubobjectPath           Type        Reason      Message
  --------- --------    -----   ----            -------------           --------    ------      -------
  2m        2m      1   {default-scheduler }                    Normal      Scheduled   Successfully assigned trusttest to 127.0.0.1
  2m        46s     4   {kubelet 127.0.0.1} spec.containers{trusttest}  Normal      Pulling     pulling image "docker/trusttest:latest"
  2m        44s     4   {kubelet 127.0.0.1} spec.containers{trusttest}  Warning     Failed      Failed to pull image "docker/trusttest:latest": image pull failed for unknown error
  2m        44s     4   {kubelet 127.0.0.1}                 Warning     FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "trusttest" with ErrImagePull: "image pull failed for unknown error"

  2m    4s  8   {kubelet 127.0.0.1} spec.containers{trusttest}  Normal  BackOff     Back-off pulling image "docker/trusttest:latest"
  2m    4s  8   {kubelet 127.0.0.1}                 Warning FailedSync  Error syncing pod, skipping: failed to "StartContainer" for "trusttest" with ImagePullBackOff: "Back-off pulling image \"docker/trusttest:latest\""

Maybe we should make this error more explicit in the case of an image trust failure, but the trust mechanism itself works.

@erictune
Member

Cool demonstration that DCT is working, @nhlfr !

Closing since I think that answers the original question.

@zhouhaibing089
Contributor Author

Docker Content Trust is one layer of image signing; what I would like to raise here is more generic. For example, we may sign an image when CI builds a release, and then we want some way to ensure our workloads only run images signed by CI, so that people cannot run any intermediate build in production. I am not sure I made my statements clear; basically, I want the ability to check something after the image is pulled but before it is run.

@erictune
Member

You can make it so that only CI has permission to push to a certain repository, and then use image review (#27129) to require images from that repository.

Or you can have CI write out just the signatures of the image SHAs to some storage bucket, and then use image review (#27129) to require that every image that is run has a SHA that is in that storage bucket.

Or you could write a docker auth plugin to verify the signature at the time that the image is run:
https://docs.docker.com/engine/extend/plugins_authorization/
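For context, the image review mechanism referenced above (what became the ImagePolicyWebhook admission controller) sends an ImageReview object (imagepolicy.k8s.io/v1alpha1) to a backend of your choosing. A minimal sketch of such a request, with placeholder image, annotation, and namespace values:

apiVersion: imagepolicy.k8s.io/v1alpha1
kind: ImageReview
spec:
  containers:
    - image: registry.example.com/ci/myapp:1.2.3        # placeholder image
  annotations:
    mycompany.image-policy.k8s.io/ticket-1234: break-glass
  namespace: production

The backend answers by setting status.allowed (and optionally status.reason), which is where a repository restriction or CI-signature check could be enforced.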

@nhlfr

nhlfr commented Aug 19, 2016

I agree with @erictune, and I also think that any kind of image verification should be done by Docker, not k8s.

@zhouhaibing089
Contributor Author

@erictune IIUC, the image review concept verifies images on the fly; that is to say, the images do not need to be pulled first.

You can make it so that only CI has permission to push to a certain repository.

CI generally creates many intermediate builds, and it would be unexpected to run those images. If I want to strictly separate them, I have to push intermediate builds and release builds to different repositories; only then can I say that only images from the given repositories are allowed to run in production.

Or you can have CI write out just the signatures of the image SHAs to some storage bucket

That is interesting, but without the image being pulled, how can I get the image SHA?

Or you could write a docker auth plugin to verify the signature at the time that the image is run.

Thanks for this tip.

@zhouhaibing089
Contributor Author

@nhlfr I partly agree with your idea.

To compare with OpenStack: libvirt does not necessarily care about all of the image verification, and I think the same is true for Docker. Docker Content Trust enables some verification so that the Docker daemon only runs images with a correct signature, which is a common need. But for a more general image signing scheme, I do not think the Docker community would support it (k8s does not necessarily need to support it either, but it would be nice to have an interface that makes it implementable downstream; I am not sure whether the plugins authorization that @erictune mentioned is the interface that Docker exposes, and I will study that). What do you think?

@erictune
Member

Or you can have CI write out just the signatures of the image SHAs to some storage bucket

That is interesting, but without the image being pulled, how can I get the image SHA?

You don't need to pull the image. You just need to check the image's manifest. You can get just the manifest from the Docker Registry API, using the image name and tag, and the manifest includes the SHA. This works well if you can treat tags as immutable.
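For illustration, a hedged sketch of that lookup against the Registry v2 API (the registry host, repository, tag, and token handling below are placeholders); the digest comes back in the Docker-Content-Digest header:

# Fetch only the manifest headers for a tag and read its digest.
TOKEN="..."   # obtain from the registry's auth endpoint if the repository is not public
curl -sI \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://registry.example.com/v2/myorg/myapp/manifests/1.2.3 \
  | grep -i docker-content-digest
# Docker-Content-Digest: sha256:<manifest digest>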

@zhouhaibing089
Contributor Author

zhouhaibing089 commented Aug 25, 2016

@erictune Uh, that makes sense. Just one more question: what if the image is private and the call to the API gets an unauthorized response? In such cases, the verification may be problematic.

@erictune
Member

That is a pickle.

@zhouhaibing089
Contributor Author

zhouhaibing089 commented Aug 26, 2016

@erictune Yeah, it is. Considering that k8s has pull secrets so that it is able to run private images, to make this functionality complete we could include the secret in the webhook call as well, so the data would be (a rough sketch follows below):

  1. ImageName
  2. Annotations
  3. Pull Secrets

WDYT?
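Purely for illustration (no such webhook schema exists; the field names below are invented to show the idea), the payload might look like:

image: registry.example.com/ci/myapp@sha256:...        # placeholder image reference
annotations:
  verify.example.com/policy: release-only              # placeholder annotation
imagePullSecrets:
  - name: private-registry-creds                       # placeholder secret name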

@zhouhaibing089
Contributor Author

Ref: #31524

@erictune
Member

I commented on that issue.

@aurcioli-handy

It's unclear to me how DCT is supported in Kubernetes. When DCT is enabled Docker will only pull signed images. This is not the default behavior of Kubernetes. Is there a way to turn on DCT per deployment and specify Notary servers to use?

@gambol99
Contributor

@nhlfr .. I have to agree with @aurcioli-handy ... it's not clear how kubernetes supports trusted images via notary. There doesn't seem to be any daemon option on docker to enable trust checking by default, and the only options I can see are the environment variable (DOCKER_CONTENT_TRUST) or --disable-content-trust on docker pull. I'm guessing it's some option passed to the daemon via the CLI. The example you gave ran without error on our cluster, and it doesn't suggest a means of enabling trust or setting the notary server URL ... Perhaps I'm just missing something!!

@u2takey
Contributor

u2takey commented Jun 29, 2017

@erictune I have the same questions as @gambol99 and @aurcioli-handy; would you please help to clarify? Also: I cannot reproduce the "Cool demonstration that DCT is working" by @nhlfr.

@lucab
Contributor

lucab commented Jul 25, 2017

For reference, the demo in @nhlfr's comment is completely unrelated to Content Trust, because docker/trusttest:latest is not a signed tag (notary.docker.io does not have trust data for docker.io/docker/trusttest).

@unullmass

Is there a way to pass Docker Content Trust preferences to the Docker Engine on the K8S Minions via the Image Policy settings in the pod yaml?

@wu105

wu105 commented Oct 17, 2018

This is somewhat similar to an image pull secret, and might take a similar form, e.g., an image verification ConfigMap or Secret.

In my humble opinion, we would need the following:

  • a cluster-wide, namespace-wide, or container-image-level option equivalent to the docker client environment variable DOCKER_CONTENT_TRUST=1
  • cluster-wide, namespace-wide, or per-container-image docker notary key management for verifying images
  • container-level options similar to the docker client option --disable-content-trust

I agree that the dirty job of verifying the images should be done by docker, but kubernetes somehow has to pass the information to docker for it to do the job. We need to be mindful that the information needed to verify images is image-specific and is used on whichever node happens to run the image, and the k8s user usually has no direct access to the nodes to configure it on demand.

@dims
Member

dims commented Oct 17, 2018

@wu105 - somehow has to pass the information to docker ... do you see support in the Docker public API for this sort of thing?

@dims dims reopened this Oct 17, 2018
@dims
Member

dims commented Oct 17, 2018

long-term-issue (note to self)

@wu105

wu105 commented Oct 18, 2018

@dims Is "support in Docker public API" a kubernetes feature? At the moment, I am looking for how to configure kubernetes nodes (statically) so that they will verify docker images when starting pods/containers.

My goal is to set the environment variable DOCKER_CONTENT_TRUST=1 when kubernetes calls docker to pull or run images, so that docker would do its DCT thing. My problem is that I cannot find a way to put the environment variable in the docker configuration files, so I have to set it somewhere else. The kubelet is supposed to be the one calling docker, but setting DOCKER_CONTENT_TRUST=1 for the kubelet process on the relevant node has no effect.
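For reference, the attempt described above usually takes the form of a systemd drop-in for the kubelet (a sketch with a hypothetical drop-in path). As reported, it has no effect on image pulls, because the kubelet talks to the docker daemon API directly rather than going through the docker CLI, which is where content trust is implemented:

# Hypothetical drop-in path; sets the variable for the kubelet process only.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-dct.conf
[Service]
Environment="DOCKER_CONTENT_TRUST=1"
EOF
sudo systemctl daemon-reload && sudo systemctl restart kubelet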

Creating a docker plugin for this seems an overkill.

@unullmass

It's been a few months since I posted here, but I noticed the recent comments and wanted to share our experience with this requirement.

We enforced the DCT check on the worker node using a Docker Engine plugin. To summarize:

  1. We forked this implementation of the Image Authorization Plugin (which restricts Docker images from which containers may be spun up to a whitelist)

  2. We run the check only on container/create requests. All other requests to dockerd are passed through.

  3. We added a fork-exec of a docker pull with Docker Content Trust enabled (DOCKER_CONTENT_TRUST=1). We also added configuration to the plugin so that images from private registries can be mapped to private Notary instances (DOCKER_CONTENT_TRUST_SERVER).

  4. If the docker pull (with DCT) returns an integrity hash and a zero exit code, we can safely assume that the image is trusted, and the plugin responds that the request is authorized; otherwise it responds that the request is not authorized. A rough sketch of this check follows.
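The core of that check can be sketched in shell (a rough illustration of the flow described above, not the plugin's actual code; the image reference and notary URL are placeholders):

# Rough sketch of the trust check fork-exec'd by the authorization plugin
# for a container/create request. Not the plugin's actual code.
IMAGE="registry.example.com/ci/myapp:1.2.3"                      # taken from the request
export DOCKER_CONTENT_TRUST=1
export DOCKER_CONTENT_TRUST_SERVER="https://notary.example.com"  # per-registry mapping
if docker pull "$IMAGE" >/dev/null 2>&1; then
  echo '{"Allow": true}'                                         # shape of the authz plugin response
else
  echo '{"Allow": false, "Msg": "image failed content trust verification"}'
fi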

Caveats:

  • We haven't optimized for performance by caching the Notary check; however, it is safer to check every time a container is launched, since we can never be sure when any of the integrity checks on the Notary side might start failing.

  • This will cause the Image Pull policy on pods to be ignored (for IfNotPresent and Never) since the image must be pulled each and every time a container is launched on the K8s worker node.

Feel free to reach out in case you have any queries.

@dims
Member

dims commented Oct 18, 2018

@unullmass Which of these steps was a code change in the Kubernetes repository? It looks like the changes were mainly around how/what you configure the docker daemon with, peeking at [1].

Right?

[1] https://github.com/unullmass/img-authz-plugin

@trishankatdatadog

I agree 100%: it would be nice to have k8s at least pass the relevant information to docker to support DCT. It's fairly surprising that this cannot be easily done right now.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2019
@sftim
Contributor

sftim commented Oct 24, 2019

Still wanted
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2019
@wu105

wu105 commented Oct 24, 2019

This issue was opened in 2016, but it is still not clear to me what direction we are going in on this.

If we are using the Docker client directly, to have image signatures verified all we need to do is the following:

  • define a bunch of environment variables such as DOCKER_CONTENT_TRUST

  • make sure that a bunch of relevant files exist, usually under .docker and .notary

Then we can otherwise proceed with the docker client as usual.
The docker client will determine when and how to verify the images, and will print additional information regarding the verification results as appropriate. There is no need to reconfigure or restart the docker daemon, or to tell the docker client directly to interact with the notary, registry, or docker daemon any differently.

Kubernetes support for verifying image signatures should probably allow the user to pass in the above two items, i.e., a collection of environment variables and a collection of files for the docker client, specified similarly to image pull secrets. Kubernetes should then be responsible for making those available to the docker client (again, similarly to image pull secrets), invoking the docker client as usual otherwise, and logging the content-trust-related output from the docker client. Should this be the way we understand and implement Kubernetes support for image signature verification?
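Purely as an illustration of this proposal, and not an existing Kubernetes API: the environment variables and trust files could be bundled into a Secret and referenced from the pod, analogously to imagePullSecrets. The imageVerifySecrets field and the secret layout below are invented for the sketch:

apiVersion: v1
kind: Secret
metadata:
  name: content-trust-config
stringData:
  DOCKER_CONTENT_TRUST: "1"
  DOCKER_CONTENT_TRUST_SERVER: "https://notary.example.com"   # placeholder notary URL
  # plus the relevant trust files normally found under ~/.docker/trust
---
apiVersion: v1
kind: Pod
metadata:
  name: trusted-app
spec:
  imageVerifySecrets:               # hypothetical field, analogous to imagePullSecrets
    - name: content-trust-config
  containers:
    - name: app
      image: registry.example.com/ci/myapp:1.2.3             # placeholder image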

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 22, 2020
@wu105

wu105 commented Jan 23, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 23, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 22, 2020
@metral
Contributor

metral commented May 12, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 12, 2020
@sftim
Contributor

sftim commented May 30, 2020

When documenting this, bear in mind the content guidance on 3rd party projects and content.

@yuzp1996

DCT is a very good feature for everyone. I think k8s would be more secure if it could make use of DCT.

@trishankatdatadog

I understand that the pandemic has thrown a wrench into everything, but I'd be happy to see at least a GPG signature.

@jlk

jlk commented Jul 30, 2020

The last time I tried to manually validate a DCT-signed image, I was unable to. It's easy to sign an image with DCT, but that's of little value when you can't identify and validate that signature. So with that said I'd personally give DCT on k8s a -1.

GPG is way more interesting - long a recognized 3rd party standard, well documented, fairly easy to use. rkt had (has?) solid support for it, and IMHO is the correct way forward.

@trishankatdatadog

GPG is way more interesting - long a recognized 3rd party standard, well documented, fairly easy to use. rkt had (has?) solid support for it, and IMHO is the correct way forward.

My bad. I confused this with the issue of signing k8s binaries in the first place.

As for signing container images themselves, there should be nothing against supporting verification of GPG signatures, but it is 2020, and we can certainly do much better. DCT/Notary-v1 was much better than GPG in terms of security, but not usability. We are discussing the Notary-v2 project on GitHub and CNCF Slack, and welcome more participants there.

@xopham

xopham commented Aug 9, 2020

We ran into the same issue of ensuring the integrity and authenticity of docker images.

Since we saw this issue in several projects and could not find a proper solution, we built an admission controller that integrates image signature verification via Notary (essentially very similar to DCT) plus some extra features like allow-listing. The project is available here: https://github.com/sse-secure-systems/connaisseur

Hope this helps and happy about feedback!

@trishankatdatadog

Hope this helps and happy about feedback!

Very interesting! Have you looked into Portieris?

In the meantime, you might be interested in our Notary-v2 GitHub repo and Slack channel. If that becomes a widely-adopted standard, we might be able to convince k8s to use it...

@unullmass

unullmass commented Aug 10, 2020 via email

@xopham

xopham commented Aug 10, 2020

@trishankatdatadog thanks!
We checked out Portieris and I think it's a great solution. The only roadblock seemed to be the lack of support for other registries, which was critical for us: IBM/portieris#51

Excited to see the work on Notary-v2! So much going on in the field :-) Might see if we can get involved there

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 8, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 8, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
