AppArmor Profile not activated. #2310

Open
Jeansen opened this issue Jun 15, 2024 · 0 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments


Jeansen commented Jun 15, 2024

What happened:

I installed SPO and followed the documentation for an example installation of an AppArmor profile. I am running Kubernetes 1.30.0. If I use the securityContext clause, it has no effect; worse, after Pod creation its appArmorProfile content is gone. If I use the deprecated annotation instead, I get an error telling me: The Pod "testpod2" is invalid: metadata.annotations[container.apparmor.security.beta.kubernetes.io/testpod2]: Invalid value: "test-profile": invalid AppArmor profile name: "test-profile"

What you expected to happen:

I'd expect, after preparing everything by the book, to have a Pod running with the AppArmor profile applied.

How to reproduce it (as minimally and precisely as possible):

I've installed SPO via OLM:

---
apiVersion: v1
kind: Namespace
metadata:
  name: security-profiles-operator
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: security-profiles-operator
  namespace: security-profiles-operator
spec:
  targetNamespaces:
  - security-profiles-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: security-profiles-operator-sub
  namespace: security-profiles-operator
spec:
  channel: stable
  name: security-profiles-operator
  source: operatorhubio-catalog
  sourceNamespace: olm

I then applied the patch and created an example Profile, as documented in: https://github.com/kubernetes-sigs/security-profiles-operator/blob/main/installation-usage.md#create-an-apparmor-profile
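
For completeness, the patch I applied to enable AppArmor support on the spod daemon was essentially the following (paraphrased from the linked docs, so the exact wording there may differ slightly):

$ kubectl -n security-profiles-operator patch spod spod --type=merge -p '{"spec":{"enableAppArmor":true}}'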

I can verify that up to this point, all is fine:

$ k -n security-profiles-operator get spod spod -o yaml

apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: SecurityProfilesOperatorDaemon
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"security-profiles-operator.x-k8s.io/v1alpha1","kind":"SecurityProfilesOperatorDaemon","metadata":{"annotations":{},"name":"spod","namespace":"security-profiles-operator"},"spec":{"enableAppArmor":true,"enableLogEnricher":false,"enableSelinux":false}}
  creationTimestamp: "2024-06-13T18:34:41Z"
  generation: 4
  labels:
    app: security-profiles-operator
  name: spod
  namespace: security-profiles-operator
  resourceVersion: "5738208"
  uid: b0427364-278a-43a9-bfeb-1b5cfa1ead63
spec:
  disableOciArtifactSignatureVerification: false
  enableAppArmor: true
  enableLogEnricher: false
  enableSelinux: false
  hostProcVolumePath: /proc
  priorityClassName: system-node-critical
  selinuxOptions:
    allowedSystemProfiles:
    - container
  selinuxTypeTag: spc_t
  staticWebhookConfig: false
  tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
    operator: Exists
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
status:
  conditions:
  - lastTransitionTime: "2024-06-13T23:08:28Z"
    reason: Available
    status: "True"
    type: Ready
  state: RUNNING


$ k get securityprofilenodestatuses.security-profiles-operator.x-k8s.io

NAME                           STATUS      AGE
test-profile-master0-k8s.lan   Installed   45h
test-profile-master1-k8s.lan   Installed   45h
test-profile-master2-k8s.lan   Installed   45h
test-profile-worker1-k8s.lan   Installed   45h
test-profile-worker2-k8s.lan   Installed   45h
test-profile-worker3-k8s.lan   Installed   45h
test-profile-worker4-k8s.lan   Installed   45h


$ k get apparmorprofiles.security-profiles-operator.x-k8s.io test-profile -o yaml

apiVersion: security-profiles-operator.x-k8s.io/v1alpha1
kind: AppArmorProfile
metadata:
  annotations:
    description: Block writing to any files in the disk.
  creationTimestamp: "2024-06-13T20:35:30Z"
  finalizers:
  - worker4-k8s.lan-deleted
  - master1-k8s.lan-deleted
  - master0-k8s.lan-deleted
  - worker2-k8s.lan-deleted
  - worker3-k8s.lan-deleted
  - master2-k8s.lan-deleted
  - worker1-k8s.lan-deleted
  generation: 1
  labels:
    spo.x-k8s.io/profile-id: AppArmorProfile-test-profile
  name: test-profile
  namespace: default
  resourceVersion: "5720939"
  uid: ea8c3705-1ba2-4b14-afc5-d0a05aa958fc
spec:
  policy: |
    #include <tunables/global>

    profile test-profile flags=(attach_disconnected) {
      #include <abstractions/base>

      file,

      # Deny all file writes.
      deny /** w,
    }

Here is my simple Pod yaml used in a first test:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: testpod2
  name: testpod2
  annotations:
    container.apparmor.security.beta.kubernetes.io/testpod2: test-profile
spec:
# securityContext:
#   appArmorProfile:
#     type: Localhost
#     localhostProfile: test-profile
  containers:
  - image: nginx
    name: testpod2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Never
status: {}

When I try to create this Pod, this is what I get:

Warning: metadata.annotations[container.apparmor.security.beta.kubernetes.io/testpod2]: deprecated since v1.30; use the "appArmorProfile" field instead
The Pod "testpod2" is invalid: metadata.annotations[container.apparmor.security.beta.kubernetes.io/testpod2]: Invalid value: "test-profile": invalid AppArmor profile name: "test-profile"
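
If I read the upstream Kubernetes docs correctly, the annotation value has to be runtime/default, unconfined, or localhost/<profile name as loaded on the node>, so a bare test-profile is rejected regardless of SPO. Presumably the annotation would have to look like this instead (I have not gotten further than the validation error above):

  annotations:
    container.apparmor.security.beta.kubernetes.io/testpod2: localhost/test-profile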

If I remove the annotation and uncomment the securityContext instead, the Pod is created, but no AppArmor profile is active. And when I check the deployed Pod, it looks like this (note the empty securityContext fields; see also my note after the dump):

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-06-15T19:41:11Z"
  labels:
    run: testpod2
  name: testpod2
  namespace: default
  resourceVersion: "5759727"
  uid: 67ec29a1-c224-4832-9c1a-0c79f71d8aa4
spec:
  containers:
  - image: nginx
    imagePullPolicy: Always
    name: testpod2
    resources: {}
    securityContext: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-xqpz2
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: worker4-k8s.lan
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-xqpz2
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-06-15T19:41:13Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-06-15T19:41:11Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-06-15T19:41:13Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-06-15T19:41:13Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-06-15T19:41:11Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: cri-o://c48853001023f76e93ece437368807343209a3711706e3a16a52c352cbac2f73
    image: docker.io/library/nginx:latest
    imageID: docker.io/library/nginx@sha256:0f04e4f646a3f14bf31d8bc8d885b6c951fdcf42589d06845f64d18aec6a3c4d
    lastState: {}
    name: testpod2
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2024-06-15T19:41:13Z"
  hostIP: 192.168.121.153
  hostIPs:
  - ip: 192.168.121.153
  phase: Running
  podIP: 10.0.2.208
  podIPs:
  - ip: 10.0.2.208
  qosClass: BestEffort
  startTime: "2024-06-15T19:41:11Z"
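
For comparison, this is roughly what I would have expected the field-based configuration to persist as after admission, based on my reading of the Kubernetes 1.30 securityContext.appArmorProfile field (an expectation, not actual output):

spec:
  securityContext:
    appArmorProfile:
      type: Localhost
      localhostProfile: test-profile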

Anything else we need to know?:

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g: cat /etc/os-release): Debian 12

  • Kernel (e.g. uname -a): Host running the Node VMs: Linux cluster 6.8.12-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.8.12-1 (2024-05-31) x86_64 GNU/Linux

  • Others:
    Kubernetes 1.30.0. Nodes created with QEMU/KVM. Example kernel from the master0 node: Linux master0-k8s.lan 6.1.0-15-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.66-1 (2023-12-09) x86_64 GNU/Linux

Here's what I get when I check for AppArmor from within the Pod:

$ kubectl exec testpod2 -- cat /proc/1/attr/current

crio-default (enforce)

So the container is confined by CRI-O's default profile rather than test-profile. And here's what I see regarding loaded profiles on each Node:

$ sudo cat /sys/kernel/security/apparmor/profiles

test-profile (enforce)
crio-default (enforce)
/{,usr/}sbin/dhclient (enforce)
/usr/lib/connman/scripts/dhclient-script (enforce)
/usr/lib/NetworkManager/nm-dhcp-helper (enforce)
/usr/lib/NetworkManager/nm-dhcp-client.action (enforce)
/usr/sbin/chronyd (enforce)
nvidia_modprobe (enforce)
nvidia_modprobe//kmod (enforce)
man_groff (enforce)
man_filter (enforce)
/usr/bin/man (enforce)
lsb_release (enforce)
Jeansen added the kind/bug label Jun 15, 2024