AppArmor fields API #123435
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
/remove-sig api-machinery
/label api-review
/test pull-kubernetes-e2e-gce
Gate changes in the apiserver look good; one question on the kubelet side.
The e2e test failure is related:
failed [FAILED] Error creating Pod: Pod "test-apparmor-h56bd" is invalid: metadata.annotations[container.apparmor.security.beta.kubernetes.io/test]: Invalid value: "container.apparmor.security.beta.kubernetes.io/e2e-apparmor-test-apparmor-6537": invalid AppArmor profile name: "container.apparmor.security.beta.kubernetes.io/e2e-apparmor-test-apparmor-6537"
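The failure above shows an annotation *key* being passed where a profile *name* was expected. As a rough illustration of why that string is rejected, here is a simplified sketch of AppArmor profile-name validation (the function name and exact rules are illustrative, not the actual Kubernetes validation code):

```go
package main

import (
	"fmt"
	"strings"
)

// validateAppArmorProfileName is a hypothetical, simplified validator:
// it accepts only the documented profile forms ("runtime/default",
// "unconfined", or "localhost/<profile>") and rejects everything else,
// including an annotation key mistakenly used as a value.
func validateAppArmorProfileName(profile string) error {
	switch {
	case profile == "runtime/default":
		return nil
	case profile == "unconfined":
		return nil
	case strings.HasPrefix(profile, "localhost/"):
		if len(profile) > len("localhost/") {
			return nil
		}
	}
	return fmt.Errorf("invalid AppArmor profile name: %q", profile)
}

func main() {
	// The annotation key accidentally used as a value fails validation:
	fmt.Println(validateAppArmorProfileName(
		"container.apparmor.security.beta.kubernetes.io/e2e-apparmor-test-apparmor-6537"))
	// A localhost profile reference passes:
	fmt.Println(validateAppArmorProfileName("localhost/e2e-apparmor-test-apparmor-6537"))
}
```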
if !utilfeature.DefaultFeatureGate.Enabled(features.AppArmorFields) {
	return getProfileFromPodAnnotations(pod.Annotations, container.Name)
}
If the gate controls whether we let data into the field, and we make the kubelet trigger off the presence of the field alone, would that work? We get control over the field via the gate in kube-apiserver, and skewed kubelets honor the field starting with 1.30.
This is needed for static pods. Probably not necessary, but seems worth including for completeness.
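The field-over-annotation precedence discussed here can be sketched in a few lines. This is a toy model with hypothetical type and helper names (the real kubelet code works on the full Pod API types), showing the behavior where the kubelet prefers the field when present and falls back to the legacy annotation otherwise:

```go
package main

import "fmt"

// Pod is an illustrative stand-in for the real API struct.
type Pod struct {
	Annotations     map[string]string
	AppArmorProfile string // stands in for the new SecurityContext field
}

const annotationKeyPrefix = "container.apparmor.security.beta.kubernetes.io/"

// getProfile is a hypothetical helper: prefer the field when set,
// otherwise fall back to the legacy per-container annotation.
func getProfile(pod *Pod, containerName string) string {
	if pod.AppArmorProfile != "" {
		return pod.AppArmorProfile
	}
	return pod.Annotations[annotationKeyPrefix+containerName]
}

func main() {
	pod := &Pod{Annotations: map[string]string{
		annotationKeyPrefix + "test": "localhost/my-profile",
	}}
	fmt.Println(getProfile(pod, "test")) // falls back to the annotation
	pod.AppArmorProfile = "runtime/default"
	fmt.Println(getProfile(pod, "test")) // the field wins once populated
}
```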
/milestone v1.30
@MaryamTavakkoli: You must be a member of the kubernetes/milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Milestone Maintainers Team and have them propose you as an additional delegate for this responsibility. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v1.30
My IDE ran into a bunch of problems when I was doing the annotation rename and missed several cases. When I went to manually clean it up, I accidentally mixed up the container key prefix and the localhost value prefix, but didn't realize until I'd already squashed the commits. What a mess! I think it should be all fixed up now (including the failing e2e test).
/retest
/lgtm |
LGTM label has been added. Git tree hash: badfa862d5104905d8f3788cba76657267844663
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: liggitt, tallclair
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
@tallclair was it marked as deprecated or fully removed? We got a report of someone running into problems where the apiserver would (I think) try to convert the annotation into a pod security context:
Is this a bug or should it be documented as …
The AppArmor annotation → pod security context translation only happens on pod create, not on update. I suspect someone updating the pod is doing a read, dropping the AppArmor fields, then an update, so the server is preventing them from modifying the pod. I would strongly recommend they use a patch to modify just the scheduling gate field on update.
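The "patch only the scheduling gate field" suggestion amounts to sending a minimal patch body rather than round-tripping the whole object. A sketch of building such a body, using only the standard library (the client call that would send it, e.g. client-go's `Pods().Patch`, is omitted; the field path `spec.schedulingGates` follows the Pod API):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// schedulingGatesPatch builds a minimal patch body that touches only
// spec.schedulingGates, so the update never reads back (and never
// accidentally drops) the AppArmor fields on the pod.
func schedulingGatesPatch() ([]byte, error) {
	patch := map[string]any{
		"spec": map[string]any{
			"schedulingGates": []any{}, // clear the gates, nothing else
		},
	}
	return json.Marshal(patch)
}

func main() {
	body, err := schedulingGatesPatch()
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}
```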
Oh ok, so that's because we are using the 1.28 libraries. But yeah, I agree that patch/apply would be more reliable. Let me follow up on that :) |
What type of PR is this?
/kind feature
/kind api-change
What this PR does / why we need it:
Implement the API outlined in KEP-24: AppArmor, which converts the AppArmor annotations to SecurityContext fields.
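The annotation-to-field conversion described above can be sketched roughly as follows. This is a simplified, self-contained model: the struct and function names are illustrative stand-ins (the real conversion operates on the Pod SecurityContext types in the apiserver), but the three accepted annotation value forms match the documented AppArmor annotation format:

```go
package main

import (
	"fmt"
	"strings"
)

// AppArmorProfile is an illustrative stand-in for the new
// SecurityContext field; the type and field names are simplified.
type AppArmorProfile struct {
	Type             string // "RuntimeDefault", "Unconfined", or "Localhost"
	LocalhostProfile string // set only when Type == "Localhost"
}

// profileFromAnnotation sketches the annotation→field translation:
// map each documented annotation value form onto a field value, and
// return nil for anything unrecognized.
func profileFromAnnotation(value string) *AppArmorProfile {
	switch {
	case value == "runtime/default":
		return &AppArmorProfile{Type: "RuntimeDefault"}
	case value == "unconfined":
		return &AppArmorProfile{Type: "Unconfined"}
	case strings.HasPrefix(value, "localhost/"):
		return &AppArmorProfile{
			Type:             "Localhost",
			LocalhostProfile: strings.TrimPrefix(value, "localhost/"),
		}
	}
	return nil
}

func main() {
	p := profileFromAnnotation("localhost/my-profile")
	fmt.Printf("%+v\n", *p)
}
```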
Which issue(s) this PR fixes:
For kubernetes/enhancements#24
Special notes for your reviewer:
A bunch of this code was copied/adapted from the similar set of changes that migrated Seccomp annotations to fields:
This PR does not include the following, which will be addressed in a follow-up:
Also, I decided to make a slight change to the Kubelet static pod behavior from what is described in https://github.com/kubernetes/enhancements/blob/master/keps/sig-node/24-apparmor/README.md#kubelet-fallback: Rather than applying the annotation/field sync to static pods, the Kubelet just falls back to the annotation value when computing the AppArmor profile for a container.
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig node
/assign @liggitt @dchen1107 @SergeyKanzhelev