
Conversation


@SaranBalaji90 SaranBalaji90 commented Jan 16, 2020

This is the initial proposal doc for adding "out of tree" plugin support to the pod admission handler. It is not currently possible to add additional validations before admitting a Pod without changing kubelet code. This PR provides the flexibility to add such validations without updating the kubelet source code.

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory sig/node Categorizes an issue or PR as relevant to SIG Node. labels Jan 16, 2020
@k8s-ci-robot
Contributor

Welcome @SaranBalaji90!

It looks like this is your first PR to kubernetes/enhancements 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/enhancements has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @SaranBalaji90. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jan 16, 2020
@SaranBalaji90 SaranBalaji90 changed the title Add plugin support for pod admission handler [WIP] Add plugin support for pod admission handler Jan 16, 2020
@k8s-ci-robot k8s-ci-robot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Jan 16, 2020
@riking riking left a comment

Should this be integrated with the CRI?


This functionality will give cluster admins the ability to restrict specific pods to running on a subset of their worker nodes. This includes restricting pods that use host networking, the host's PID namespace, certain volumes, and so on. Even though this can be achieved through other mechanisms such as pod security policies or taints and tolerations, there are caveats where a pod might still end up on this subset of worker nodes.

For example, when admins use clusters provisioned through a managed service provider, they still have full access to the cluster. So they can always delete the pod security policy, or launch a pod that tolerates any taint and therefore gets launched on these nodes.

"Cluster admins could simply set the 'node' field of a pod, bypassing any constraints enforced by a scheduler or its plugins, such as taints."

Member

It's not clear it makes sense to protect against the cluster admin's ability to circumvent security policies.


In the future, if the kernel or docker supports new functionality and not all Kubernetes worker nodes in a cluster support that feature, we need to keep adding pod admit handlers to the Kubernetes source code to validate whether a pod can be admitted by the kubelet or not.

This KEP is to discuss whether we can move these validations out of the Kubernetes source code into a separate plugin that runs outside Kubernetes. This way, we don't have to update the Kubernetes source code for every new functionality supported in the future.

A finished KEP describes an enhancement, and is not an invitation to discussion.

Author

@riking sorry, meant to say this KEP is to propose changes to move some stuff out of the k8s source code. Will update this along with implementation details of whatever we decide on.

@SaranBalaji90
Author

> Should this be integrated with the CRI?

That's a good suggestion. I haven't looked at CRI in depth; will take a look at this.

Member

@derekwaynecarr derekwaynecarr left a comment


I am not averse to additional plugins for kubelet admission that are not compiled with the kubelet, but I feel like the kubelet should have the function to validate the pod spec.


## Summary

Today, the kubelet on the worker node performs a known set of validations and decides whether the pod can be allowed to execute on the node. The kubelet rejects a pod if any of the requirements in the pod spec is not supported by the docker runtime on the node, for example pods requiring specific sysctl parameters, privilege escalation, or a non-default proc mount. The kubelet also rejects pods based on node capabilities, not just docker capabilities; for example, it rejects pods that require a specific AppArmor profile the kernel does not support, and it also rejects pods based on the topology manager's decision.
Member

nit: s/docker/container-engine to reflect cri


This functionality will give cluster admins the ability to restrict specific pods to running on a subset of their worker nodes. This includes restricting pods that use host networking, the host's PID namespace, certain volumes, and so on. Even though this can be achieved through other mechanisms such as pod security policies or taints and tolerations, there are caveats where a pod might still end up on this subset of worker nodes.

For example, when admins use clusters provisioned through a managed service provider, they still have full access to the cluster. So they can always delete the pod security policy, or launch a pod that tolerates any taint and therefore gets launched on these nodes.
Member

It's not clear it makes sense to protect against the cluster admin's ability to circumvent security policies.


For example, when admins use clusters provisioned through a managed service provider, they still have full access to the cluster. So they can always delete the pod security policy, or launch a pod that tolerates any taint and therefore gets launched on these nodes.

In the future, if the kernel or docker supports new functionality and not all Kubernetes worker nodes in a cluster support that feature, we need to keep adding pod admit handlers to the Kubernetes source code to validate whether a pod can be admitted by the kubelet or not.
Member

are there specific scenarios you have in mind?


(I'm still working on the actual implementation but wanted to put this idea out and get initial feedback before arriving at a complete solution.)

The initial proposal is to build something similar to the networking plugin architecture, where the kubelet invokes a CNI binary and retrieves the IP associated with the pod. In the same way, we can have a set of plugins defined on the node that indicate whether the kubelet can admit the pod or not. In the future we can move some of the existing pod admit handlers into their own plugins if required; for example, create one plugin specifically for docker and validate the pod spec against the available docker functionality.
Member

I think this inverts the problem. The expectation is that a node can satisfy the pod spec requirements; the container runtime is a detail.

@SaranBalaji90
Author

We discussed this during the Jan 21st sig/node meeting. Will update the KEP and bring it up again in an upcoming sig/node meeting.
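For illustration only, here is a minimal sketch of what the CNI-style node-local admission plugin contract discussed in this thread could look like in Go. The names (PodAdmitPlugin, AdmitResult) and the result shape are assumptions, not something the KEP defines; they only loosely mirror the kubelet's existing in-tree admit handlers in k8s.io/kubernetes/pkg/kubelet/lifecycle.

```go
// Hypothetical sketch only: names and shapes are illustrative, not part of the KEP.
package admission

import v1 "k8s.io/api/core/v1"

// AdmitResult is what a node-local plugin returns to the kubelet.
type AdmitResult struct {
	// Admit is true if the pod may run on this node.
	Admit bool
	// Reason is a short, machine-readable rejection reason (e.g. "HostNetworkNotAllowed").
	Reason string
	// Message is a human-readable explanation surfaced in the pod status.
	Message string
}

// PodAdmitPlugin is the contract a node-local admission plugin would satisfy,
// mirroring how the kubelet's in-tree admit handlers inspect a pod before it starts.
type PodAdmitPlugin interface {
	// Name identifies the plugin in logs and pod rejection messages.
	Name() string
	// Admit is called with the full pod spec before the pod is started on the node.
	Admit(pod *v1.Pod) AdmitResult
}
```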

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. area/provider/aws Issues or PRs related to aws provider area/provider/azure Issues or PRs related to azure provider sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. sig/apps Categorizes an issue or PR as relevant to SIG Apps. sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. sig/auth Categorizes an issue or PR as relevant to SIG Auth. sig/autoscaling Categorizes an issue or PR as relevant to SIG Autoscaling. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/cloud-provider Categorizes an issue or PR as relevant to SIG Cloud Provider. sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed size/L Denotes a PR that changes 100-499 lines, ignoring generated files. labels Apr 5, 2020
@SaranBalaji90 SaranBalaji90 force-pushed the master branch 3 times, most recently from 0965c20 to 8201200 Compare April 15, 2020 17:32
@k8s-ci-robot k8s-ci-robot added the sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. label Apr 15, 2020
@k8s-ci-robot
Contributor

@SaranBalaji90: The following test failed, say /retest to rerun all failed tests:

| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| pull-enhancements-verify | aab0d07 | link | /test pull-enhancements-verify |

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@derekwaynecarr
Member

derekwaynecarr commented Apr 21, 2020

/assign

discussed in sig-node on 4/21, will review and reflect on potential issues.

  • what happens if a pod is rejected by this admission plugin?
  • could a status get reported on why the rejection happened?
  • similar issue has arisen with in-tree admission handlers in kubelet today
  • what is anticipated maintenance cycle of this plugin?
  • does it tie to kubelet readiness (akin to CNI)?
  • have you considered a scheduler extension instead of a node local kubelet approach?

@dashpole
Contributor

/cc

@k8s-ci-robot k8s-ci-robot requested a review from dashpole April 21, 2020 17:56

## Proposal

The approach taken is similar to the container networking interface (CNI) plugin architecture. With CNI, kubelet invokes one or more CNI plugin binaries on the host to set up a Pod’s networking. kubelet discovers available CNI plugins by [examining](https://github.com/kubernetes/kubernetes/blob/dd5272b76f07bea60628af0bb793f3cca385bf5e/pkg/kubelet/dockershim/docker_service.go#L242) a well-known directory (`/etc/cni/net.d`) for configuration files and [loading](https://github.com/kubernetes/kubernetes/blob/dd5272b76f07bea60628af0bb793f3cca385bf5e/pkg/kubelet/dockershim/docker_service.go#L248) plugin [descriptors](https://github.com/kubernetes/kubernetes/blob/f4db8212be53c69a27d893d6a4111422fbce8008/pkg/kubelet/dockershim/network/plugins.go#L52) upon startup.
Contributor

I dealt with CNI while developing the kuryr-kubernetes CNI plugin, and I found the command-line interface of CNI a little outdated; it doesn't reflect the current state of things, since all the CNI plugins I know of nowadays run a persistent daemon (in a pod or on the host) with a command-line client that connects to that daemon. So RPC is better; what kind of RPC is another question. I liked the way podresources and device plugins were implemented. If the need for this feature is proven, I vote for a solution based on RPC (gRPC) working through a unix domain socket.

Author

Under the design section, I have included the structs for the configuration file. There I specified three options for the plugin type: binary file, unix socket, and local gRPC server. We can decide whether we need to support just the unix socket and remove the other two.
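For illustration, the three plugin types mentioned above could be captured in a per-plugin configuration struct roughly like the sketch below. The field names (Type, Path, Endpoint, TimeoutSeconds) and the type values are assumptions for this discussion, not the KEP's actual schema.

```go
// Hypothetical sketch of the on-disk plugin configuration; not the KEP's final schema.
package admission

// PluginType selects how the kubelet talks to the plugin.
type PluginType string

const (
	// PluginTypeShell: kubelet executes a binary, passing the pod spec on stdin
	// and reading the verdict from stdout (CNI-like).
	PluginTypeShell PluginType = "shell"
	// PluginTypeUnixSocket: kubelet connects to a long-running daemon over a unix socket.
	PluginTypeUnixSocket PluginType = "unix"
	// PluginTypeGRPC: kubelet calls a local gRPC server (itself usually on a unix socket).
	PluginTypeGRPC PluginType = "grpc"
)

// PluginConfig is one entry in a config file under the pod admission plugin directory.
type PluginConfig struct {
	Name string     `json:"name"`
	Type PluginType `json:"type"`
	// Path is the binary to execute for the "shell" type.
	Path string `json:"path,omitempty"`
	// Endpoint is the unix socket or gRPC address for the daemon-based types.
	Endpoint string `json:"endpoint,omitempty"`
	// TimeoutSeconds bounds how long the kubelet waits for a verdict.
	TimeoutSeconds int `json:"timeoutSeconds,omitempty"`
}
```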

Member

Unlike device plugins or CNI, there is an implication that no other pod is running on this host as part of bootstrapping the host. DNS and SDN are configured outside of the kube-api model itself.


The EKS user’s worker node administrative access depends on the type of worker node the EKS user chooses. EKS users have three options. The first option is to bring their own EC2 instances as worker nodes. The second option is for EKS users to launch a managed worker node group. These first two options both result in the EKS user maintaining full host-level administrative rights on the worker nodes. The final option — the option that motivated this proposal — is for the EKS user to forego worker node management entirely using AWS Fargate, a serverless computing environment. With AWS Fargate, the EKS user does not have host-level administrative access to their worker node; in fact, the worker node runs on a serverless computing platform that abstracts away the entire notion of a host.

In building the AWS EKS support for AWS Fargate, the AWS Kubernetes engineering team faced a dilemma: how could they prevent Pods destined to run on Fargate nodes from using host networking or assuming elevated host user privileges?
Member

Are daemonset pods targeting this node type? Is there a DNS plugin or SDN configured via a daemonset? Do things like a node exporter or similar monitoring components exclude this node type?

Author

Currently daemonset pods don't run on these node types, but that's something we are looking into.


The approach taken is similar to the container networking interface (CNI) plugin architecture. With CNI, kubelet invokes one or more CNI plugin binaries on the host to set up a Pod’s networking. kubelet discovers available CNI plugins by [examining](https://github.com/kubernetes/kubernetes/blob/dd5272b76f07bea60628af0bb793f3cca385bf5e/pkg/kubelet/dockershim/docker_service.go#L242) a well-known directory (`/etc/cni/net.d`) for configuration files and [loading](https://github.com/kubernetes/kubernetes/blob/dd5272b76f07bea60628af0bb793f3cca385bf5e/pkg/kubelet/dockershim/docker_service.go#L248) plugin [descriptors](https://github.com/kubernetes/kubernetes/blob/f4db8212be53c69a27d893d6a4111422fbce8008/pkg/kubelet/dockershim/network/plugins.go#L52) upon startup.

To support pluggable validation for pod admission on the worker node, we propose that the kubelet similarly discover node-local Pod admission plugins from a directory specified by a new PodAdmissionPluginDir flag.
Member

the implication here is that dynamic kubelet configuration is disabled on managed nodes.

Author

That's right. I guess you mean that if it's enabled, users can update the kubelet configuration and change the plugin dir?
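As a rough sketch of the discovery step proposed above (the kubelet scanning the plugin directory at startup, much like CNI configs under /etc/cni/net.d), assuming one JSON config file per plugin and reusing the hypothetical PluginConfig type from the earlier sketch; the file layout and helper name are illustrative only.

```go
// Hypothetical discovery sketch; file format and names are illustrative only.
package admission

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

// LoadPluginConfigs reads every *.json file in the pod admission plugin
// directory (e.g. the value of the proposed PodAdmissionPluginDir flag)
// and returns the parsed plugin configurations.
func LoadPluginConfigs(dir string) ([]PluginConfig, error) {
	paths, err := filepath.Glob(filepath.Join(dir, "*.json"))
	if err != nil {
		return nil, err
	}
	var configs []PluginConfig
	for _, path := range paths {
		data, err := os.ReadFile(path)
		if err != nil {
			return nil, fmt.Errorf("reading %s: %w", path, err)
		}
		var cfg PluginConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			return nil, fmt.Errorf("parsing %s: %w", path, err)
		}
		configs = append(configs, cfg)
	}
	return configs, nil
}
```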


A node-local Pod admission plugin has the following structure:
Member

Similar to kube-apiserver admission configuration, I could see wanting to enable more flexibility here beyond a file-based configuration surface on the local kubelet. Maybe kubelet can source its admission configuration from multiple sources, so future scenarios could allow node-local extension where desired.

Author

That's a good idea. I guess you mean we shouldn't stick with just file-based configuration for reading the plugin details; this could also be extended in the future to read from different sources.


Kubelet will reject the Pod if any required capabilities in the Pod.Spec are not supported by the container engine running on the node. Such capabilities might include the ability to set sysctl parameters, use of elevated system privileges, or use of a non-default process mount. Likewise, kubelet checks the Pod against node capabilities; for example, the presence of a specific apparmor profile or required host kernel support.

These validations represent final, last-minute checks immediately before the Pod is started by the container runtime. These node-local checks differ from API-layer validations like Pod Security Policies or Validating Admission webhooks. Whereas the latter may be deactivated or removed by Kubernetes cluster administrators, the former node-local checks cannot be disabled. As such, they represent a final defense against malicious actors and misconfigured Pods.
Member

> Whereas the latter may be deactivated or removed by Kubernetes cluster administrators, the former node-local checks cannot be disabled.

This doesn't make much sense to me. Why are node-local checks different?

Author

@tallclair node-local checks can be the same; it's just that these validations can't be removed by the cluster administrator, whereas validating webhooks or PSPs can be removed by cluster admins.

Member

See #1712, which proposes static admission webhooks. Does that address this concern?


Kubelet will reject the Pod if any required capabilities in the Pod.Spec are not supported by the container engine running on the node. Such capabilities might include the ability to set sysctl parameters, use of elevated system privileges, or use of a non-default process mount. Likewise, kubelet checks the Pod against node capabilities; for example, the presence of a specific apparmor profile or required host kernel support.

These validations represent final, last-minute checks immediately before the Pod is started by the container runtime. These node-local checks differ from API-layer validations like Pod Security Policies or Validating Admission webhooks. Whereas the latter may be deactivated or removed by Kubernetes cluster administrators, the former node-local checks cannot be disabled. As such, they represent a final defense against malicious actors and misconfigured Pods.
Member

> they represent a final defense against malicious actors and misconfigured Pods

I think it's worth keeping these 2 use cases separate. It makes sense to me to have some protection against misconfigured pods, since not all configuration details are available at the cluster level. However, I'm more skeptical of node-level admission offering an increase in security over cluster-level admission.

Author

Makes sense, will update this. Also, we do perform validations in the control plane and block such misconfigured pods. But if cluster admins remove the webhook/PSP or have their own scheduler that bypasses our checks, then we need something on the node to block such pods from running.


Amazon Elastic Kubernetes Service (EKS) provides users a managed Kubernetes control plane. EKS users are provisioned a Kubernetes cluster running on AWS cloud infrastructure. While the EKS user does not have host-level administrative access to the master nodes, it is important to point out that they do have administrative rights on that Kubernetes cluster.

The EKS user’s worker node administrative access depends on the type of worker node the EKS user chooses. EKS users have three options. The first option is to bring their own EC2 instances as worker nodes. The second option is for EKS users to launch a managed worker node group. These first two options both result in the EKS user maintaining full host-level administrative rights on the worker nodes. The final option — the option that motivated this proposal — is for the EKS user to forego worker node management entirely using AWS Fargate, a serverless computing environment. With AWS Fargate, the EKS user does not have host-level administrative access to their worker node; in fact, the worker node runs on a serverless computing platform that abstracts away the entire notion of a host.
Member

nit: please linewrap the whole document (I like 80 chars) so that it's easier to leave comments on parts of the paragraph.

Author

Sorry about that. Will update the doc.


In building the AWS EKS support for AWS Fargate, the AWS Kubernetes engineering team faced a dilemma: how could they prevent Pods destined to run on Fargate nodes from using host networking or assuming elevated host user privileges?

The team initially investigated using a Pod Security Policy (PSP) that would prevent Pods with a Fargate scheduler type from having an elevated security context or using host networking. However, because the EKS user has administrative rights on the Kubernetes cluster, API-layer constructs such as a Pod Security Policy may be deleted, which would effectively disable the effect of that PSP. Likewise, the second solution the team landed on — using Node taints and tolerations — was similarly bound to the Kubernetes API layer, which meant EKS users could modify those Node taints and tolerations, effectively disabling the effects. A third potential solution involving OCI hooks was then investigated. OCI hooks are separate executables that an OCI-compatible container runtime invokes that can modify the behaviour of the containers in a sandbox. While this solution would have solved the API-layer problem, it introduced other issues, such as the inefficiency of downloading the container image to the Node before the OCI hook was run.
Member

This reads like you considered some built-in policy controls that didn't work, and then jumped to building a new node-level custom policy enforcement mechanism. What about custom cluster-level policy? We already have AdmissionWebhooks for exactly that reason. If your concern is a cluster admin being able to mess with the admission webhook, then I would rather consider a statically configured admission webhook (I think this has already been proposed elsewhere?) before proposing a completely new mechanism in the kubelet.

@SaranBalaji90 SaranBalaji90 (Author) May 19, 2020

Even with a static admission webhook, the problem is that we will put a heavy load on the controllers, right? The webhook is going to reject the pod and the controller will keep creating pods. But adding it as a soft admit handler in kubelet will not put pressure on the controllers.

@SaranBalaji90 SaranBalaji90 (Author) May 19, 2020

Also, if I'm not wrong, users in the system:masters group can delete the validating webhook, right?

Member

BTW, #1712 is the proposal I was referring to.

> the problem is that we will put a heavy load on the controllers, right? The webhook is going to reject the pod and the controller will keep creating pods.

Controllers will back off. I'm not sure if they treat a rejection on pod creation differently from a failed pod.

"type": "shell"
},
{
"type": "fargatecheck",
Member

Suggested change:
```diff
-    "type": "fargatecheck",
+    "name": "fargatecheck",
```


This functionality adds a new feature gate named “PodAdmissionPlugin” which decides whether or not to invoke the admission plugins.

#### Kubelet to pod admission plugin communication
Member

What is the interface for the shell type? Send over stdin, and get a response over stdout?

Author

Yes, this is similar to how CNI operates today (passing the required parameters using environment variables and reading the response over stdout).
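To make the shell-type interface above concrete, here is a minimal invocation sketch: the kubelet would pass contextual parameters through environment variables, write the pod spec as JSON to the plugin's stdin, and parse the verdict from stdout. The environment variable names, the JSON verdict shape, and the reuse of the hypothetical PluginConfig/AdmitResult types from the earlier sketches are all assumptions, not defined by the KEP.

```go
// Hypothetical sketch of invoking a "shell" type admission plugin; env var
// names and verdict format are illustrative, not defined by the KEP.
package admission

import (
	"bytes"
	"context"
	"encoding/json"
	"os"
	"os/exec"
	"time"

	v1 "k8s.io/api/core/v1"
)

// execShellPlugin runs the plugin binary, writes the pod spec as JSON to its
// stdin, and parses the verdict it prints to stdout.
func execShellPlugin(cfg PluginConfig, pod *v1.Pod) (AdmitResult, error) {
	podJSON, err := json.Marshal(pod)
	if err != nil {
		return AdmitResult{}, err
	}

	timeout := time.Duration(cfg.TimeoutSeconds) * time.Second
	if timeout == 0 {
		timeout = 10 * time.Second
	}
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	cmd := exec.CommandContext(ctx, cfg.Path)
	// Pass contextual parameters via environment variables, CNI-style.
	cmd.Env = append(os.Environ(),
		"POD_ADMISSION_PLUGIN_NAME="+cfg.Name,
		"POD_NAMESPACE="+pod.Namespace,
		"POD_NAME="+pod.Name,
	)
	cmd.Stdin = bytes.NewReader(podJSON)

	out, err := cmd.Output()
	if err != nil {
		return AdmitResult{}, err
	}

	var result AdmitResult
	if err := json.Unmarshal(out, &result); err != nil {
		return AdmitResult{}, err
	}
	return result, nil
}
```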


#### Implementation detail
@tallclair tallclair (Member) May 7, 2020

How does bootstrapping work? For the non-shell types, it looks like the assumption is that the server is running prior to the Kubelet?

Author

Yes, you're right, the server should be running before kubelet can accept pods. I was initially thinking about managing these with the same component that manages kubelet; for example, in a Linux environment, managing it through a systemd unit. But the more I think about this, the shell type might be simpler for this approach: users don't have to monitor one more component on the host. Happy to hear your feedback here.


#### Implementation detail
Member

What are the failure modes? Fail open or fail closed? How would a failure be debugged? Would kubelet start if it couldn't connect to an admission hook?

Author

It should be fail closed, the reason being that if we can't validate the spec and we admit the pod anyway, whatever we are trying to protect might be violated. Kubelet can start even if it couldn't connect to an admission hook, but it will not accept pods without validating the pod spec with the plugin.

Member

Please add these details to the KEP.
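A sketch of the fail-closed behavior described in this thread: any plugin error (or explicit rejection) prevents the pod from being admitted. The loop below reuses the hypothetical types from the earlier sketches and only illustrates the semantics; it is not the kubelet's actual admit path.

```go
// Hypothetical fail-closed admit loop; an illustration only, not the kubelet's
// actual code path. PluginConfig and AdmitResult are the sketch types above.
package admission

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// admitPod runs every configured plugin and fails closed: a transport error or
// an explicit rejection from any plugin prevents the pod from being admitted.
func admitPod(
	configs []PluginConfig,
	pod *v1.Pod,
	run func(PluginConfig, *v1.Pod) (AdmitResult, error), // e.g. execShellPlugin
) AdmitResult {
	for _, cfg := range configs {
		res, err := run(cfg, pod)
		if err != nil {
			// Fail closed: if the plugin cannot be reached or errors out, we
			// cannot validate the pod, so it must not run on this node.
			return AdmitResult{
				Admit:   false,
				Reason:  "AdmissionPluginError",
				Message: fmt.Sprintf("plugin %q failed: %v", cfg.Name, err),
			}
		}
		if !res.Admit {
			return res
		}
	}
	return AdmitResult{Admit: true}
}
```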


Another option is to enable this functionality through a feature flag “enablePodAdmissionPlugin” and have the directory path defined inside the kubelet itself.

### Design Details
Member

A problem with the existing kubelet admission approach is that it can cause controllers to thrash. E.g. what if a DaemonSet controller is trying to schedule a pod on the Kubelet, and the kubelet keeps rejecting it?

Author

You're right about this, but if a pod is rejected by a soft admit handler then this doesn't apply, right?

Member

Yeah. Historically soft-reject was added because controllers treated a failed pod differently from an error on create. I think this has since been resolved, and controllers should properly back off on failed pods. It would be good to clarify these interactions in the KEP.


Another option is to enable this functionality through a feature flag “enablePodAdmissionPlugin” and have the directory path defined inside the kubelet itself.

### Design Details
Member

Would admission still apply to static pods?

Author

Good question, I will look into this. Not sure if kubelet invokes admit handlers for static pods.

Member

Whatever you conclude, please update the KEP to include it.

Author

Looks like admit handlers are invoked on static pods too. I will update the KEP to reflect this. Thanks.

@tallclair tallclair self-assigned this May 19, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 19, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closed this PR.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@SaranBalaji90
Author

We didn't go with this approach, as https://github.com/kubernetes/enhancements/tree/master/keps/sig-api-machinery/1872-manifest-based-admission-webhooks will help achieve the same result.

ingvagabund pushed a commit to ingvagabund/enhancements that referenced this pull request Feb 26, 2025
OCPCLOUD-1910: Installing Cluster API components in OpenShift