Added image-policy proposal #27129

Merged
merged 1 commit into kubernetes:master from erictune:imgprov Aug 5, 2016

Conversation

@erictune
Member

erictune commented Jun 9, 2016

Add proposal for image policy.

@erictune
Member

erictune commented Jun 9, 2016

@philips it might make sense to have https://github.com/coreos/clair be a backend for this image policy webhook, or something like that? Can you at-mention the right people from Clair?

Also @ericchiang because it is a type of authorization, but intentionally not in RBAC.



The ReplicaSet, or other controller, is responsible for recognizing when a 403 has happened
(whether due to user not having permission due to bad image, or some other permission reason)
and throttling itself and surfacing the error in a way that CLIs and UIs can show to the user.


@smarterclayton

smarterclayton Jun 9, 2016

Contributor

I'm starting to feel like we're not doing enough to actually solve the usability problem here - the approach you describe is correct (RS needs to surface that info) but we're doing a terrible job of it. For instance... should RS have a condition "CreationForbidden": "true" with a reason "ImageRejected"? Should the deployment also have a condition surfaced?

Right now, we see this all the time in OpenShift (by virtue of security policy) and end users really suffer without this info being surfaced (in quota, in policy, and in crash looping pods).
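(A minimal sketch of the condition shape proposed in this comment, assuming the conventional Kubernetes condition layout; only the "CreationForbidden" type and "ImageRejected" reason come from the comment, the rest is illustrative.)

// Sketch only: a ReplicaSet condition in the conventional layout.
type ReplicaSetCondition struct {
  Type    string // e.g. "CreationForbidden"
  Status  string // "True", "False", or "Unknown"
  Reason  string // machine-readable cause, e.g. "ImageRejected"
  Message string // human-readable detail for CLIs and UIs to surface
}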


@kargakis

kargakis Jun 9, 2016

Member

> I'm starting to feel like we're not doing enough to actually solve the usability problem here - the approach you describe is correct (RS needs to surface that info) but we're doing a terrible job of it. For instance... should RS have a condition "CreationForbidden": "true" with a reason "ImageRejected"? Should the deployment also have a condition surfaced?
>
> Right now, we see this all the time in OpenShift (by virtue of security policy) and end users really suffer without this info being surfaced (in quota, in policy, and in crash looping pods).

Conditions are one thing that may or may not help here, but I believe we already have a mechanism for reporting failures, albeit one we are doing a bad job with: events. Events are nice, but fetching exactly the ones you are interested in is currently impossible. It feels like we should have events as a subresource on all objects, or something. #11994 is related.


@smarterclayton

smarterclayton Jun 10, 2016

Contributor

Found your answer later in the issue. I'm forgetting which proposals I've commented on at this point.


@soltysh

soltysh Jul 20, 2016

Contributor

I'd like to generalize this paragraph to be applicable to all controllers. Deployments and ReplicaSets are not the only ones we currently have. But I agree with Clayton and Michalis here that we need to make it more obvious, and Conditions are one of the possibilities here.


@erictune

erictune Jul 22, 2016

Member

Linked this discussion from #22298


## Ubernetes
If two clusters share an image policy backend, then they will have the same policies.


@gtank

gtank Jun 9, 2016

Is the intent to synchronize the admission caches in the federated/ubernetes cases?


@erictune

erictune Jun 9, 2016

Member

no.

We will wait and see how much demand there is for closing this hole. If the community demands a solution,
we may suggest one of these:
1. Use a backend that refuses to accept images that are specified with tags, and require users to resolve to IDs
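(As a concrete illustration of option 1 above: a backend could run a check like the following sketch before evaluating any other policy. The string test is deliberately naive; a real backend would parse the image reference properly.)

import "strings"

// digestPinned reports whether an image reference is pinned to a content
// digest (myrepo/myimage@sha256:...) rather than a mutable tag (myrepo/myimage:v1).
func digestPinned(image string) bool {
  return strings.Contains(image, "@sha256:")
}

// digestPinned("myrepo/myimage:v1") == false: refuse, ask the user to resolve to an ID.
// digestPinned("myrepo/myimage@sha256:beb6...") == true: accept.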


@gtank

gtank Jun 9, 2016

Always forcing users to resolve IDs would allow backends to interact more effectively with content trust measures. With just the tag, there's no binding between what was actually deployed and what the backend sees. cc @ecordell


@smarterclayton

smarterclayton Jun 9, 2016

Contributor

Resolving IDs before they are put into pods / deployments / rcs is the best outcome. Doing that resolution in the admission control is fraught (there should be another admission controller that does the transformation, if you really want that).

On Thu, Jun 9, 2016 at 4:53 PM, George Tankersley notifications@github.com wrote:

> In docs/proposals/image-provenance.md #27129 (comment):
>
> +## Image tags and IDs
> +
> +Image tags are like: myrepo/myimage:v1.
> +
> +Image IDs are like: myrepo/myimage@sha256:beb6bd6a68f114c1dc2ea4b28db81bdf91de202a9014972bec5e4d9171d90ed.
> +You can see image IDs with docker images --no-trunc.
> +
> +The Backend needs to be able to resolve tags to IDs (by talking to the images repo).
> +If the Backend resolves tags to IDs, there is some risk that the tag-to-ID mapping will be
> +modified after approval by the Backend, but before Kubelet pulls the image. We will not address this
> +race condition at this time.
> +
> +We will wait and see how much demand there is for closing this hole. If the community demands a solution,
> +we may suggest one of these:
> +
> +1. Use a backend that refuses to accept images that are specified with tags, and require users to resolve to IDs
>
> Forcing users to resolve IDs would allow a backend to interact more effectively with content trust measures. With just the tag, there's no binding between what was actually deployed and what the backend sees. You need to somehow supply an account of the data. cc @ecordell


@ecordell

ecordell Jun 9, 2016

Contributor

Agreed; any signature verification doesn't mean much if you don't verify that the image id matches the content


@erictune

erictune Jul 20, 2016

Member

I just commented on this in https://github.com/kubernetes/kubernetes/pull/27129/files/4a60be831efce93d7e210df47d79e7c18d5d13c2#r71607506

I think it makes sense to map tag to SHA either in kubectl, in CI/CD, or after the fact in the Kubelet. I agree the admission controller is a bad place to map tag to SHA. I think admission is a fine place to require image names that use SHAs.
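(erictune's kubectl/CI/CD option, sketched under assumptions: resolveDigest is a hypothetical registry lookup, and the reference handling is deliberately simplified and ignores registries with ports.)

import "strings"

// pinImage rewrites a tag reference to a digest reference at release time,
// so the policy backend and the Kubelet later agree on one immutable image.
func pinImage(image string, resolveDigest func(string) (string, error)) (string, error) {
  digest, err := resolveDigest(image) // e.g. returns "sha256:beb6bd6a..."
  if err != nil {
    return "", err
  }
  repo := strings.SplitN(image, ":", 2)[0] // naive: assumes no registry port
  return repo + "@" + digest, nil
}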


@erictune
Member

erictune commented Jun 13, 2016

xref: #22888

## Admission controller
An `ImagePolicyWebhook` admission controller will be written. The admission controller examines all pod objects which are


@bgrant0607

bgrant0607 Jun 22, 2016

Member

Is the webhook going to be tried until success? How would it distinguish retryable failure from permanent failure?


@erictune

erictune Jul 23, 2016

Member

The admission controller will admit if the webhook times out.
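(The fail-open rule stated here, as a sketch with assumed types; the real plugin's configuration and wiring are not shown.)

import (
  "context"
  "time"
)

// review stands in for the webhook call.
type review func(ctx context.Context) (allowed bool, err error)

// admitWithTimeout admits when the webhook does not answer in time,
// and otherwise passes the webhook's decision through.
func admitWithTimeout(call review, timeout time.Duration) (bool, error) {
  ctx, cancel := context.WithTimeout(context.Background(), timeout)
  defer cancel()
  allowed, err := call(ctx)
  if err == context.DeadlineExceeded {
    return true, nil // fail open: a timeout admits rather than rejects
  }
  return allowed, err
}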


@Q-Lee

Q-Lee Jul 23, 2016

Contributor

Will the admin be able to set a policy for a namespace (e.g., fail-open/fail-closed)?


@deads2k

deads2k Jul 25, 2016

Contributor

Will this be a generic webhook for any admission plugin? I'd like to see a generic one and it seems like the work here would be about the same.


@erictune

erictune Jul 25, 2016

Member

It will not be a generic webhook. A generic webhook would need a lot more discussion:

  • a generic webhook needs to touch all objects, not just pods, so it won't have a fixed schema
  • a generic webhook client needs to ignore kinds it doesn't care about, or the apiserver needs to know which backends care about which kinds
  • it exposes our whole API to a webhook without giving us (the project) any chance to review or understand how it is being used
  • because we don't know which fields of an object are inspected by the backend, caching is not effective; sending fewer fields allows caching (see the sketch after this list)
  • sending fewer fields makes it possible to rev the version of the webhook request more slowly than the version of our internal objects (e.g. pod v2 could still use imageReview v1)
  • probably lots more reasons.
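(To make the caching bullet concrete, a sketch keyed on exactly the fields that are sent, reusing the ImageReviewSpec shape excerpted later in this thread.)

import (
  "sort"
  "strings"
)

// Abbreviated from the ImageReviewSpec excerpted later in this thread.
type ImageReviewContainerSpec struct{ Image string }
type ImageReviewSpec struct{ Containers []ImageReviewContainerSpec }

// cacheKey is stable under container reordering, so pods running the same
// image set share one cached decision; fields that are never sent can
// never invalidate the cache.
func cacheKey(spec ImageReviewSpec) string {
  images := make([]string, 0, len(spec.Containers))
  for _, c := range spec.Containers {
    images = append(images, c.Image)
  }
  sort.Strings(images)
  return strings.Join(images, ",")
}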

@erictune

erictune Jul 25, 2016

Member

Added a section about this in the Alternatives section.

@erictune
Member

erictune commented Jul 23, 2016

I think I fixed most of the comments in my last push.

I think @smarterclayton is pretty busy.

If I can get two LGTMs from the set {@soltysh, @philips, @Q-Lee, @ecordell} then I am going to take it as approval and I will merge it.


@erictune
Member

erictune commented Jul 23, 2016

@fabioy this is the proposal for the image admission controller that we talked about.


@philips philips referenced this pull request Jul 23, 2016

Closed

Container Image Policy #59

6 of 21 tasks complete
An `ImagePolicyWebhook` admission controller will be written. The admission controller examines all pod objects which are
created or updated. It can either admit the pod, or reject it. If it is rejected, the request sees a `403 FORBIDDEN`
The admission controller code will go in `plugin/pkg/admission/imagepolicy`.


@smarterclayton

smarterclayton Jul 25, 2016

Contributor

I would like for the admission controller to take an interface for "check decision" rather than embedding all the client logic in it. That would allow alternate admission controller implementations to be more easily implemented. As an example, we're trying to build composable chunks of admission logic for policy like this that can be reused in other contexts. Things like authorizer and authentication have succeeded pretty well at this (@liggitt has done some crazy authenticator wrappers that work cleanly). I would like to generally have our policy decision steps behind clean interfaces (in this case, an interface that answers the question about an image and it being accepted and mirrors the ImageReview object) that can be composed later on.
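(Read literally, the request is for something like the following sketch; the names are illustrative, mirroring the ImageReview object, and the webhook client would be just one implementation behind it.)

// ImageReviewer is a sketch of a "check decision" interface.
type ImageReviewer interface {
  // Review answers the admission question for the images in spec,
  // returning a human-readable reason when the answer is no.
  Review(spec ImageReviewSpec) (allowed bool, reason string, err error)
}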


@deads2k

deads2k Jul 25, 2016

Contributor

> I would like for the admission controller to take an interface for "check decision" rather than embedding all the client logic in it.

You're saying you don't like the webhook mechanism suggested, that you want a generic webhook like I asked above, or that you want the reference webhook impl to accept what amounts to an admission.Interface?


@smarterclayton

smarterclayton Jul 25, 2016

Contributor

On Jul 25, 2016, at 3:21 PM, David Eads notifications@github.com wrote:

> In docs/proposals/image-provenance.md #27129 (comment):
>
>   • reduces latency and allows short outages of the backend to be tolerated.
>
> +Detailed discussion in Ensuring only images are from approved sources are run.
> +
> +# Implementation
> +
> +A new admission controller will be added. That will be the only change.
> +
> +## Admission controller
> +
> +An ImagePolicyWebhook admission controller will be written. The admission controller examines all pod objects which are
> +created or updated. It can either admit the pod, or reject it. If it is rejected, the request sees a 403 FORBIDDEN
> +
> +The admission controller code will go in plugin/pkg/admission/imagepolicy.
>
> I would like for the admission controller to take an interface for "check decision" rather than embedding all the client logic in it.
>
> You're saying you don't like the webhook mechanism suggested, that you want a generic webhook like I asked above, or that you want the reference webhook impl to accept what amounts to an admission.Interface?

The latter. Authorizer and Admission interfaces are very successful. Would like to try to define the equivalents for other policy equally well.


// ImageReviewSpec is a description of the pod creation request.
type ImageReviewSpec struct {
  // Containers is a list of a subset of the information in each container of the Pod being created.
  Containers []ImageReviewContainerSpec


@smarterclayton

smarterclayton Jul 25, 2016

Contributor

Unfortunately you also need to accept init containers. I would recommend creating a nested struct that is similar to pod template that has the subset of info in it, and make that hierarchical rather than flattened, i.e.:

type ImageReviewPodTemplate struct {
  Metadata ImageReviewObjectMeta
  Spec ImageReviewPodSpec
}

If we can preserve the same hierarchy as a PodTemplate, that makes automated tools easier in the future (we might take the pod object as unstructured and whitelist the things that are included in a generic fashion).


@smarterclayton

smarterclayton Jul 25, 2016

Contributor

It also establishes a pattern for future examples of this where subsets of data are returned.


@erictune

erictune Jul 25, 2016

Member

Is it settled where in the PodSpec init containers will live?


@erictune

erictune Jul 25, 2016

Member

Also, if we do what you said, it couples the versioning of Pods to the versioning of the ImageReview API. Is that desirable?



@smarterclayton

smarterclayton Jul 26, 2016

Contributor

Possibly not, although creating a new structure to learn is causing API drift. I would expect this API to evolve to be consistent with pods in the long term (v1 pods to vX ImagePolicyReview).


@Q-Lee

Q-Lee Aug 4, 2016

Contributor

I'm of the opinion that instead of mimicking pod layout, we should eliminate the container list altogether. The goal here is to establish a chain of trust for containers, and not to enforce pod-level policies. If you create a pod with 5 containers, then you make 5 requests to the backend.


@deads2k

deads2k Aug 4, 2016

Contributor

> I'm of the opinion that instead of mimicking pod layout, we should eliminate the container list altogether. The goal here is to establish a chain of trust for containers, and not to enforce pod-level policies. If you create a pod with 5 containers, then you make 5 requests to the backend.

I think that it's likely we'll end up doing both. Given the current power of PSP and its ability to describe who can request which powers for a pod/container, it seems likely that the decision about whether a particular image is allowed may be affected by the same or a similar policy.

I'm not suggesting that we add that level of complication now, but designing the structure for future expansion seems like a reasonable thing to do, and this would be a way to do it.

Also, remote callouts are expensive, and if we already have all the data ready, validating it all at once seems pretty reasonable.


@Q-Lee

Q-Lee Aug 4, 2016

Contributor

Exactly, we have PSP for pod level control. The purpose here is to establish a chain of trust from source code to deployment.

A small scaling factor from pods to containers is nothing to bat an eye at.


* Block creation of pods that would cause "unapproved" images to run.
* Make it easy for users or partners to build "image provenance checkers" which check whether images are "approved".
* We expect there will be multiple implementations.
* Allow users to request an "override" of the policy in a convenient way (subject to the override being allowed).


@soltysh

soltysh Jul 25, 2016

Contributor

Will the override be available to all users? Or will this be tied to specific authz?


@erictune

erictune Jul 25, 2016

Member

In one possible implementation, the override is available to all users, but a user who requests the override would be expected to answer to an auditor sometime after the fact.
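(A sketch of that possible implementation, with a hypothetical annotation key since the proposal does not fix one: the override always succeeds but leaves an audit record tying it to the requesting user.)

import "log"

// overrideAnnotation is hypothetical; the proposal leaves the mechanism open.
const overrideAnnotation = "image-policy.example.com/break-glass"

// reviewWithOverride lets any user override a rejection, at the cost of an
// audit record connecting the override to the request.
func reviewWithOverride(user string, annotations map[string]string, allowedByPolicy bool) bool {
  if annotations[overrideAnnotation] == "true" {
    log.Printf("image policy overridden by %s; flagged for audit", user)
    return true
  }
  return allowedByPolicy
}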


@erictune
Member

erictune commented Jul 25, 2016

I've addressed most comments and updated the docs.

The open issues, as I see it, are:

  • how to handle init-containers
  • whether the schema should be a subset of PodSpec, so that additional fields fall in the same places
  • something about interfaces that I don't quite follow.

## Admission controller
An `ImagePolicyWebhook` admission controller will be written. The admission controller examines all pod objects which are
created or updated. It can either admit the pod, or reject it. If it is rejected, the request sees a `403 FORBIDDEN`


@jzelinskie

jzelinskie Jul 27, 2016

This description is a little vague in the plurality sense. I think this should work over a set of webhooks rather than one. For example, this Pull Request on GitHub has multiple webhooks that must validate it before it is merged.


@soltysh

soltysh Jul 28, 2016

Contributor

I kinda understood this as being able to set up multiple, but having it explicit in the proposal is a good idea.


@deads2k

deads2k Jul 28, 2016

Contributor

> This description is a little vague in the plurality sense. I think this should work over a set of webhooks rather than one. For example, this Pull Request on GitHub has multiple webhooks that must validate it before it is merged.

Seems like we could support a single callout, and if someone wanted a union, they could write the union in their particular handler. That keeps our core code out of the business of deciding between the ands, ors, and trumps, which inevitably follow "give me more than one".

I don't see an issue with making a reference impl for the webhook that can provide a simple union, but I don't think we want to bake multiples into our admission plugin.
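(Such a reference impl could be as small as this sketch; the AND semantics are an arbitrary illustration, the point being that the composition lives in the backend, not in core.)

// check is an assumed per-request policy function.
type check func(images []string) bool

// unionReview composes several checks behind the single callout:
// one rejection rejects, and core Kubernetes never sees the composition.
func unionReview(checks []check, images []string) bool {
  for _, c := range checks {
    if !c(images) {
      return false
    }
  }
  return true
}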


@soltysh

soltysh Jul 28, 2016

Contributor

I feel convinced.


@jzelinskie

jzelinskie Jul 31, 2016

@deads2k It seems there's precedent in other parts of k8s for doing it the way you have described, so I agree.


// ImageReviewContainerSpec is a description of a container within the pod creation request.
type ImageReviewContainerSpec struct {
  Image string


@Q-Lee

Q-Lee Aug 3, 2016

Contributor

Shouldn't this be `Image, ImageHash string`?


@erictune

erictune Aug 4, 2016

Member

Images can be specified to docker, and in pod.spec.container[].image, as either image:tag or image@SHA:012345679abcdef. So this field also accepts either format.

It is up to the backend to decide if it accepts image:tag or only accepts image@SHA:012345679abcdef format. There are reasons you might choose to do it either way, so this API doesn't have an opinion.


@Q-Lee Q-Lee added the lgtm label Aug 4, 2016

@Q-Lee
Contributor

Q-Lee commented Aug 4, 2016

This is very close. Let's put this in, and I'll make the remaining changes in a new, narrower PR.


@k8s-merge-robot
Contributor

k8s-merge-robot commented Aug 5, 2016

PR changed after LGTM, removing LGTM. @erictune @philips @soltysh @Q-Lee


@k8s-bot

k8s-bot commented Aug 5, 2016

GCE e2e build/test passed for commit 9d59ae5.

@erictune erictune merged commit 6f0bc85 into kubernetes:master Aug 5, 2016

5 of 7 checks passed

Jenkins GCE Node e2e: Build finished. 340 tests run, 30 skipped, 2 failed.
Submit Queue: Github CI tests are not green.
Jenkins GCE e2e: Build finished. 344 tests run, 154 skipped, 0 failed.
Jenkins GKE smoke e2e: Build finished. 344 tests run, 342 skipped, 0 failed.
Jenkins unit/integration: Build finished. 3564 tests run, 15 skipped, 0 failed.
Jenkins verification: Build finished.
cla/google: All necessary CLAs are signed

xingzhou pushed a commit to xingzhou/kubernetes that referenced this pull request Dec 15, 2016

Merge pull request #27129 from erictune/imgprov
Added image-policy proposal

@erictune erictune deleted the erictune:imgprov branch Aug 8, 2017
