Unable to deploy privileged pod after 1.8 upgrade unless I set allowPrivilegeEscalation: true #53437
@jhorwit2
/sig auth
semi-unrelated: the error message on that event is hideous 👼
@jessfraz @tallclair existing PSPs should continue to work as they did prior to the introduction of this field. Doesn't that mean…
alternatively, edit: thinking through this more, I think we have to make AllowPrivilegeEscalation a `*bool`.
[MILESTONENOTIFIER] Milestone Labels Complete. @jessfraz @jhorwit2 @liggitt @tallclair Issue label settings:
Automatic merge from submit-queue (batch tested with PRs 53454, 53446, 52935, 53443, 52917). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

Use pointer for PSP allow escalation

Fixes #53437

The `AllowPrivilegeEscalation` field was added to PodSpec and PodSecurityPolicySpec in 1.8.0. In order to remain compatible with pre-1.8.0 behavior, PodSecurityPolicy objects created against a previous release must not restrict this field, which means the field must default to true in PodSecurityPolicySpec. However, the field was added as a `bool`, not a `*bool`, which means that no defaulting is possible. We have two options:

1. Require all pre-existing PodSecurityPolicy objects that intend to allow privileged permissions to be updated to set this new field to true.
2. Change the field to a `*bool` and default it to true.

This PR does the latter. With this change, we have the following behavior:

A 1.8.1+ client/server now has three ways to serialize:
* `nil` values are dropped from serialization (because of `omitempty`), which is interpreted correctly by other 1.8.1+ clients/servers, and is interpreted as false by 1.8.0
* `false` values are serialized and interpreted correctly by all clients/servers
* `true` values are serialized and interpreted correctly by all clients/servers

A 1.8.0 client/server has two ways to serialize:
* `false` values are dropped from serialization (because of `omitempty`), which is interpreted as `false` by other 1.8.0 clients/servers, but as `nil` (and therefore defaulting to true) by 1.8.1+ clients/servers
* `true` values are serialized and interpreted correctly by all clients/servers

The primary concern is the 1.8.0 server dropping the `false` value from serialization, but I consider the compatibility break with pre-1.8 behavior to be more severe, especially if we can resolve the regression in an immediate point release.

```release-note
PodSecurityPolicy: Fixes a compatibility issue that caused policies that previously allowed privileged pods to start forbidding them, due to an incorrect default value for `allowPrivilegeEscalation`. PodSecurityPolicy objects defined using a 1.8.0 client or server that intended to set `allowPrivilegeEscalation` to `false` must be reapplied after upgrading to 1.8.1.
```
fixed in 1.8.1
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
I upgraded a cluster to 1.8 and ran into an issue with pods that use `privileged: true` and don't set `allowPrivilegeEscalation: true`.

The PSP I had created and been using prior to 1.8:
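The original PSP manifest did not survive in this copy of the issue. A hypothetical reconstruction of a pre-1.8 policy that allows privileged pods would look roughly like this (field values are assumptions; note the absence of `allowPrivilegeEscalation`, which did not exist before 1.8):

```yaml
# Hypothetical sketch of a pre-1.8 "allow everything" PodSecurityPolicy.
# allowPrivilegeEscalation is absent because the field was added in 1.8.0.
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: privileged
spec:
  privileged: true
  allowedCapabilities: ['*']
  volumes: ['*']
  hostNetwork: true
  hostIPC: true
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
```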
The error I got when applying an update to the canal daemonset:
What you expected to happen:
I expect the pod to be valid since it was not noted as a breaking change in the release notes for 1.8.
How to reproduce it (as minimally and precisely as possible):
Using the PSP above, attempt to create a pod with `privileged: true` and no value for `allowPrivilegeEscalation`.
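A minimal sketch of a pod spec that reproduces the condition described above (the pod name and image are illustrative, not from the original report):

```yaml
# Hypothetical repro pod: privileged is set, allowPrivilegeEscalation is
# deliberately left unset, which 1.8.0 serializes the same as false.
apiVersion: v1
kind: Pod
metadata:
  name: privileged-test
spec:
  containers:
  - name: test
    image: busybox
    securityContext:
      privileged: true
      # allowPrivilegeEscalation intentionally omitted
```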
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`):

cc @liggitt @tallclair @jessfraz