0.12.3 -> 0.13.0-rc1 upgrade: workloads fail to start due to pod security policy issue #1597
The problem is that @paalkr has existing PodSecurityPolicies in his cluster, so we don't automatically map all service accounts, users, and nodes to our permissive policy. We only do that when there are no existing policies. @paalkr, I suggest that you either create a new …
Updated release note.
Thanks for clarifying. I guess our best option is to update our ClusterRoleBindings.
So my quick and dirty fix, to make sure the updated cluster functions the same way as before the upgrade, is to manually deploy the kube-aws:permissive-psp-cluster-wide ClusterRoleBinding after updating the control plane. For new clusters we will start to use proper Pod Security Policies.
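For reference, a minimal sketch of that workaround, assuming kubectl access to the upgraded cluster. The binding name and the subjects (all service accounts, authenticated users, and nodes) follow the descriptions above; the ClusterRole name kube-aws:podsecuritypolicy:permissive is an assumption and must match whichever role kube-aws actually uses to grant use of its permissive PSP.

```sh
# Hypothetical sketch: re-create the permissive cluster-wide binding after the
# control-plane update. The ClusterRole name below is an assumption.
kubectl apply -f - <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kube-aws:permissive-psp-cluster-wide
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kube-aws:podsecuritypolicy:permissive
subjects:
# Map all service accounts, all authenticated users, and all nodes to the role.
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:nodes
EOF
```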
I'm closing this issue because I managed to work around the problem by manually deploying the permissive ClusterRoleBinding. I understand it's hard to fully automate this, but I wonder if and how we might provide a better upgrade experience.
I imagine you could do something like this as well
Yup, that worked!
Workloads deployed to a 0.12.3 kube-aws cluster (Kubernetes v1.12.4) do not work after the cluster is updated to kube-aws 0.13.0-rc1 (Kubernetes v1.13.5).
The Grafana container deployed with Helm using the prometheus-operator chart complains about AppArmor not running on the node.
The error I get in the ReplicaSet for all pods not running in the kube-system namespace is:
Error creating: pods "<deployment_name>-<hash>-" is forbidden: unable to validate against any pod security policy: []
I imagine this case should be handled by the 00-kube-aws-permissive PSP, as described in #1589.
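As a quick check, it can help to list which policies exist and whether a workload's service account is actually allowed to use the permissive one. This is a rough sketch; the namespace and service account below are placeholders, and the PSP name is the one mentioned above.

```sh
# List the PodSecurityPolicies present in the cluster.
kubectl get psp

# Check whether a given service account may "use" the permissive policy.
# "monitoring" and "default" are placeholder namespace / service-account names.
kubectl auth can-i use podsecuritypolicy/00-kube-aws-permissive \
  --as=system:serviceaccount:monitoring:default
```

Since PSP admission validates against both the creating user (here the ReplicaSet controller) and the pod's service account, the check above only covers the service-account half.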
Discussion on Slack:
https://kubernetes.slack.com/messages/C5GP8LPEC/convo/C5GP8LPEC-1558389645.080100/