Feature Request
Short Description
Pods should be blocked from starting on newly provisioned nodes until the KubeArmor daemonset is fully running on that node.
Is your feature request related to a problem? Please describe the use case.
When a Kubernetes cluster automatically provisions new nodes (via Cluster Autoscaler or Karpenter) for pods that match some of the KubeArmor policies, pods scheduled onto a newly created node may actually start sooner(!) than the KubeArmor daemonset becomes fully operational. On dynamically scaled clusters like this, it is impossible to guarantee that the policies are always fully effective.
IMHO, similarly to networking, the startup of pods that require a specific feature (security restrictions in this case) should be delayed until the feature is available.
Describe the solution you'd like
I would follow the approach of Cilium (https://karpenter.sh/v1.0/concepts/nodepools/#cilium-startup-taint):
- require nodes to be provisioned with a taint that regular pods do not tolerate (e.g., kubearmor.io/agent-not-ready=true); on a newly provisioned node, all common pods are then blocked because they won't tolerate the taint
- KubeArmor starts and, once it is fully ready to work, removes the taint from that particular node
- this unblocks all other pods, which start from that point on and are covered by the policies
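A minimal sketch of what the Karpenter side could look like, following the Cilium pattern linked above. The taint key kubearmor.io/agent-not-ready is the convention proposed in this request, not something KubeArmor supports today; the startupTaints field is Karpenter's mechanism for taints that a daemon is expected to remove once ready:

```yaml
# Karpenter NodePool sketch: every node it provisions starts with the
# proposed taint, so regular pods cannot be scheduled onto it yet.
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      startupTaints:
        - key: kubearmor.io/agent-not-ready
          value: "true"
          effect: NoExecute
---
# The KubeArmor daemonset pod spec would need a matching toleration so
# the agent itself can schedule on the tainted node and, once fully
# ready, remove the taint (analogous to what Cilium's agent does).
tolerations:
  - key: kubearmor.io/agent-not-ready
    operator: Exists
```

With NoExecute, pods that somehow land on the node before the taint is observed are also evicted, which is the stricter guarantee this request is after; NoSchedule would only prevent new scheduling.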