
Wrong detection of AllowPrivilegeEscalation (policy AC-K8-CA-PO-H-0165) in K8s pod spec #721

Closed
MMerzinger opened this issue May 1, 2021 · 10 comments
Labels: bug, policy (Issue concerning policy maintainers.)

@MMerzinger

Hello everyone

  • terrascan version: 1.5.1 (using the docker image)
  • Operating System: Darwin My-MacBook-Pro.local 20.3.0 Darwin Kernel Version 20.3.0

Description

During a scan of my pod spec I was always getting the policy violation "Containers Should Not Run with AllowPrivilegeEscalation". The problem: I configured the container security context properly with allowPrivilegeEscalation: false.

My assumption is that Terrascan expects allowPrivilegeEscalation: "false" under pod.spec.securityContext, but that is an invalid pod spec: allowPrivilegeEscalation belongs under pod.spec.containers[].securityContext, and false must not be quoted (the field is a boolean).

My expectation is that a valid pod spec with allowPrivilegeEscalation: false under pod.spec.containers[].securityContext does not violate the policy.
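For reference, a minimal pod spec with the field in the correct place (name and image are just placeholders):

```yaml
# allowPrivilegeEscalation is a boolean field on the *container*
# securityContext, not on the pod-level securityContext.
apiVersion: v1
kind: Pod
metadata:
  name: example            # placeholder name
spec:
  containers:
  - name: example
    image: busybox:1.28    # placeholder image
    securityContext:
      allowPrivilegeEscalation: false   # boolean, not the string "false"
```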

What I Did

Terrascan version:

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan version
version: v1.5.1

Kubernetes version:

kubectl version
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.1", GitCommit:"206bcadf021e76c27513500ca24182692aabd17e", GitTreeState:"clean", BuildDate:"2020-09-09T19:10:58Z", GoVersion:"go1.15.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.9", GitCommit:"6c90dbd9d6bb1ae8a4c0b0778752be06873e7c55", GitTreeState:"clean", BuildDate:"2021-03-22T23:02:49Z", GoVersion:"go1.15.8", Compiler:"gc", Platform:"linux/amd64"}

The following spec is valid (i.e. kubectl apply -f works), but violates the AllowPrivilegeEscalation policy (shown by the scan after the pod spec).

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
  namespace: non-default
  annotations:
    container.apparmor.security.beta.kubernetes.io/busybox: runtime/default
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - image: busybox:1.28
    name: busybox
    readinessProbe:
      httpGet:
        path: /
        port: http
    livenessProbe:
      httpGet:
        path: /
        port: http
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      runAsNonRoot: true
      runAsUser: 1000
    command:
    - sleep
    - "3600"
    resources:
      requests:
        cpu: "50m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "200Mi"
  dnsPolicy: ClusterFirst
  restartPolicy: Always

And the scan result:

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i k8s -f pod.valid-spec-but-scanner-shows-violated-policy.yaml


Violation Details -

	Description    :	Container images with readOnlyRootFileSystem set as false mounts the container root file system with write permissions
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Containers Should Not Run with AllowPrivilegeEscalation
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	HIGH
	-----------------------------------------------------------------------

	Description    :	Image without digest affects the integrity principle of image security
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Memory Request Not Set in config file.
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	Medium
	-----------------------------------------------------------------------

	Description    :	No liveness probe will ensure there is no recovery in case of unexpected errors
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	LOW
	-----------------------------------------------------------------------

	Description    :	No readiness probe will affect automatic recovery in case of unexpected errors
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	LOW
	-----------------------------------------------------------------------

	Description    :	CPU Request Not Set in config file.
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	Medium
	-----------------------------------------------------------------------

	Description    :	Default seccomp profile not enabled will make the container to make non-essential system calls
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------


Scan Summary -

	File/Folder         :	/iac/pod.valid-spec-but-scanner-shows-violated-policy.yaml
	IaC Type            :	k8s
	Scanned At          :	2021-05-01 15:33:56.3923948 +0000 UTC
	Policies Validated  :	562
	Violated Policies   :	8
	Low                 :	2
	Medium              :	5
	High                :	1
kubectl apply -f valid-spec-but-scanner-shows-violated-policy.yaml
pod/busybox created

To stop the policy violation I had to put allowPrivilegeEscalation: "false" (note the quotes) under both pod.spec.securityContext and pod.spec.containers[].securityContext. But this results in a pod spec that is invalid:

apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: busybox
  name: busybox
  namespace: non-default
  annotations:
    container.apparmor.security.beta.kubernetes.io/busybox: runtime/default
spec:
  securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: "false"
    runAsUser: 1000
    runAsGroup: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
  containers:
  - image: busybox:1.28
    name: busybox
    readinessProbe:
      httpGet:
        path: /
        port: http
    livenessProbe:
      httpGet:
        path: /
        port: http
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: "false"
      runAsNonRoot: true
      runAsUser: 1000
    command:
    - sleep
    - "3600"
    resources:
      requests:
        cpu: "50m"
        memory: "100Mi"
      limits:
        cpu: "100m"
        memory: "200Mi"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i k8s -f pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml


Violation Details -

	Description    :	Image without digest affects the integrity principle of image security
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Container images with readOnlyRootFileSystem set as false mounts the container root file system with write permissions
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	CPU Request Not Set in config file.
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	Medium
	-----------------------------------------------------------------------

	Description    :	Memory Request Not Set in config file.
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	Medium
	-----------------------------------------------------------------------

	Description    :	No liveness probe will ensure there is no recovery in case of unexpected errors
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	LOW
	-----------------------------------------------------------------------

	Description    :	Default seccomp profile not enabled will make the container to make non-essential system calls
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	No readiness probe will affect automatic recovery in case of unexpected errors
	File           :	pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	Line           :	1
	Severity       :	LOW
	-----------------------------------------------------------------------


Scan Summary -

	File/Folder         :	/iac/pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
	IaC Type            :	k8s
	Scanned At          :	2021-05-01 15:33:37.2386187 +0000 UTC
	Policies Validated  :	562
	Violated Policies   :	7
	Low                 :	2
	Medium              :	5
	High                :	0
kubectl apply -f pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml
error: error validating "pod.invalid-spec-and-scanner-shows-shows-no-violated-policy.yaml": error validating data: [ValidationError(Pod.spec.containers[0].securityContext.allowPrivilegeEscalation): invalid type for io.k8s.api.core.v1.SecurityContext.allowPrivilegeEscalation: got "string", expected "boolean", ValidationError(Pod.spec.securityContext): unknown field "allowPrivilegeEscalation" in io.k8s.api.core.v1.PodSecurityContext]; if you choose to ignore these errors, turn validation off with --validate=false
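The type error can be reproduced outside kubectl: a YAML `false` parses to a boolean while `"false"` parses to a string, and the SecurityContext schema declares allowPrivilegeEscalation as a boolean. A minimal Python sketch of that check (the `type_ok` helper is hypothetical, just mimicking the schema's type validation):

```python
# In YAML, false is a boolean but "false" is a string; the Kubernetes
# OpenAPI schema requires a boolean for allowPrivilegeEscalation, which
# is why kubectl rejects the quoted form.

def type_ok(security_context):
    """Hypothetical helper mimicking the schema's type check."""
    value = security_context.get("allowPrivilegeEscalation")
    return isinstance(value, bool)

# allowPrivilegeEscalation: false   -> parsed as a boolean, accepted
print(type_ok({"allowPrivilegeEscalation": False}))    # True

# allowPrivilegeEscalation: "false" -> parsed as a string, rejected
print(type_ok({"allowPrivilegeEscalation": "false"}))  # False
```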
@vhnguyenae

Hello @MMerzinger,
I had the same issue, and even with the latest release 1.7.0 (which already includes #787) I still get the same error. Did the latest release fix the problem for you?

@MMerzinger (Author)

Hi @vhnguyenae

Yes, if I run the above example again, it works fine:

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan version
version: v1.7.0
docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i k8s -f pod.valid-spec-but-scanner-shows-violated-policy.yaml


Violation Details -

	Description    :	Image without digest affects the integrity principle of image security
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Default seccomp profile not enabled will make the container to make non-essential system calls
	File           :	pod.valid-spec-but-scanner-shows-violated-policy.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------


Scan Summary -

	File/Folder         :	/iac/pod.valid-spec-but-scanner-shows-violated-policy.yaml
	IaC Type            :	k8s
	Scanned At          :	2021-06-15 15:25:47.225520351 +0000 UTC
	Policies Validated  :	81
	Violated Policies   :	2
	Low                 :	0
	Medium              :	2
	High                :	0

Can you share your K8s templates that violate the policy?

@vhnguyenae

Hi @MMerzinger,
Thanks for your reply. Yes, here is my template:

apiVersion: ...
kind: Deployment
metadata:
  name: ...
  labels:
    app: ...
    chart: ...
    release: ...
    heritage: ...
    name: ...
spec:
  securityContext:
    runAsNonRoot: true
    allowPrivilegeEscalation: false
  replicas: ...
  selector:
    matchLabels:
      app: ...
      release: ...
      name: ...
  template:
    metadata:
      labels:
        app: ...
        release: ...
        name: ...
    spec:
      containers:
        - name: ...
          image: ...
          imagePullPolicy: ...
          env:
            - name: ...
              value: ...
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false

@MMerzinger (Author)

Hi @vhnguyenae,

I cannot reproduce the issue with your template.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  labels:
    app: test
    chart: test
    release: test
    heritage: test
    name: test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test
      release: test
      name: test
  template:
    metadata:
      labels:
        app: test
        release: test
        name: test
    spec:
      containers:
        - name: test
          image: busybox:1.28
          imagePullPolicy: Always
          env:
            - name: TEST
              value: TEST
          securityContext:
            runAsNonRoot: true
            allowPrivilegeEscalation: false
docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i k8s -f deploy.yaml


Scan Summary -

	File/Folder         :	/iac/deploy.yaml
	IaC Type            :	k8s
	Scanned At          :	2021-06-16 18:53:43.3100775 +0000 UTC
	Policies Validated  :	0
	Violated Policies   :	0
	Low                 :	0
	Medium              :	0
	High                :	0

At first glance, the template appears to be invalid (you can quickly check it with https://www.kubeyaml.com). A Deployment does not have a securityContext field directly under spec; securityContext only exists under the pod spec (spec.template.spec) or its containers. Furthermore, allowPrivilegeEscalation does not exist in the pod-level securityContext; it is only valid in the container-level securityContext.
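A corrected skeleton of the template, with the securityContext fields in the places the Kubernetes API accepts (all names are placeholders):

```yaml
# In a Deployment, securityContext lives inside the pod template;
# allowPrivilegeEscalation is only valid at the container level.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example              # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      securityContext:       # pod-level: runAsNonRoot is valid here
        runAsNonRoot: true
      containers:
        - name: example
          image: busybox:1.28      # placeholder image
          securityContext:         # container-level
            runAsNonRoot: true
            allowPrivilegeEscalation: false
```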

@vhnguyenae

vhnguyenae commented Jun 17, 2021

Hi @MMerzinger,
Thanks for pointing out the invalid securityContext in my configuration.
We combine Helm with K8s, so I run the command with the "-i helm" option; if I run the scan using "-i k8s", I don't get the error either.
Can you please try running the config above with "-i helm" to see if you get the same error as mine?
Thanks in advance.

@MMerzinger (Author)

Hi @vhnguyenae,

Unfortunately, running the command with "-i helm" does not work, as it is currently just a "normal" YAML spec and not a Helm chart:

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i helm -f deploy.vhnguyenae.yaml
2021-06-17T13:04:47.070Z	error	v3/load-file.go:32	load iac file is not supported for helm
2021-06-17T13:04:47.074Z	error	cli/run.go:113	scan run failed{error 26 0  load iac file is not supported for helm}

Please open a new issue, provide the necessary details about your Helm chart, and possibly reference this issue. This will help the developers of this repo analyse your problem. You can find more details about contributing to this repo under this link.
FYI: I found this validation issue, but I am not an active contributor to this repo.

@abzcoding

@MMerzinger @vhnguyenae
I have the same issue (I'm running it on macOS).

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-VERSION
  namespace: my-namespace
  labels:
    app: myapp
    version: "VERSION"
  annotations:
    container.apparmor.security.beta.kubernetes.io/myapp: runtime/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: runtime/default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
      version: "VERSION"
  template:
    metadata:
      labels:
        app: myapp
        version: "VERSION"
    spec:
      securityContext:
        runAsUser: 1000
      containers:
        - name: myapp
          image: git.private.sth/security/myapp:<DEPLOYMENT_TAG>
          resources:
            requests:
              memory: "750Mi"
              cpu: "0.4"
            limits:
              memory: "1000Mi"
              cpu: "1"
          imagePullPolicy: Always
          ports:
            - containerPort: 8000
          envFrom:
            - configMapRef:
                name: myapp
            - secretRef:
                name: myapp
          readinessProbe:
            httpGet:
              path: /health-status/readiness
              port: 8000
            initialDelaySeconds: 3
            periodSeconds: 5
          livenessProbe:
            httpGet:
              path: /health-status/liveness
              port: 8000
            initialDelaySeconds: 3
            periodSeconds: 5
          securityContext:
            allowPrivilegeEscalation: false
            runAsNonRoot: true
            capabilities:
              drop:
                - NET_ADMIN
                - CHOWN
                - DAC_OVERRIDE
                - FSETID
                - FOWNER
                - MKNOD
                - NET_RAW
                - SETGID
                - SETUID
                - SETFCAP
                - SETPCAP
                - NET_BIND_SERVICE
                - SYS_CHROOT
                - KILL
                - AUDIT_WRITE
      imagePullSecrets:
        - name: regsecret

And when I run the following commands:

>terrascan version
version: v1.7.0
terrascan scan -i k8s -t k8s --severity high -v -f dep.yml

I'm still getting the errors:

Violation Details -

        Description    :        Containers Should Not Run with AllowPrivilegeEscalation
        File           :        dep.yml
        Line           :        1
        Severity       :        HIGH
        Rule Name      :        privilegeEscalationCheck
        Rule ID        :        AC-K8-CA-PO-H-0165
        Resource Name  :        myapp-VERSION
        Resource Type  :        kubernetes_deployment
        Category       :        Compliance Validation

        -----------------------------------------------------------------------

        Description    :        Minimize Admission of Root Containers
        File           :        dep.yml
        Line           :        1
        Severity       :        HIGH
        Rule Name      :        runAsNonRootCheck
        Rule ID        :        AC-K8-IA-PO-H-0168
        Resource Name  :        myapp-VERSION
        Resource Type  :        kubernetes_deployment
        Category       :        Identity and Access Management

        -----------------------------------------------------------------------


Scan Summary -

        File/Folder         :   /tmp/dep.yml
        IaC Type            :   k8s
        Scanned At          :   2021-06-18 04:49:09.075264 +0000 UTC
        Policies Validated  :   25
        Violated Policies   :   2
        Low                 :   0
        Medium              :   0
        High                :   2

@MMerzinger (Author)

Hi @vhnguyenae and @abzcoding,

I tried to reproduce the issue with the spec provided by @abzcoding, but the AllowPrivilegeEscalation violation does not show up for me.

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan version
version: v1.7.0

docker run --rm -it -v "$(pwd):/iac" -w /iac accurics/terrascan scan -i k8s -t k8s


Violation Details -

	Description    :	Container images with readOnlyRootFileSystem set as false mounts the container root file system with write permissions
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Default seccomp profile not enabled will make the container to make non-essential system calls
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Image without digest affects the integrity principle of image security
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	AppArmor profile not set to default or custom profile will make the container vulnerable to kernel level threats
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------


Scan Summary -

	File/Folder         :	/iac
	IaC Type            :	k8s
	Scanned At          :	2021-06-18 07:09:01.7494341 +0000 UTC
	Policies Validated  :	49
	Violated Policies   :	4
	Low                 :	0
	Medium              :	4
	High                :	0

With a local installation:

terrascan version
version: v1.7.0

terrascan scan -i k8s -t k8s


Violation Details -

	Description    :	Container images with readOnlyRootFileSystem set as false mounts the container root file system with write permissions
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	AppArmor profile not set to default or custom profile will make the container vulnerable to kernel level threats
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Image without digest affects the integrity principle of image security
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------

	Description    :	Default seccomp profile not enabled will make the container to make non-essential system calls
	File           :	github-tests/dep.yaml
	Line           :	1
	Severity       :	MEDIUM
	-----------------------------------------------------------------------


Scan Summary -

	File/Folder         :	/tmp
	IaC Type            :	k8s
	Scanned At          :	2021-06-18 09:21:19.453195 +0000 UTC
	Policies Validated  :	49
	Violated Policies   :	4
	Low                 :	0
	Medium              :	4
	High                :	0

Do you guys use the latest policy set? A look at the commit that closed this issue shows that only the policy file (and its metadata JSON) was changed. I tried to reproduce the issue with the latest Docker image and with a local installation via brew, and could not. But in my case both approaches use the latest policy set.

Regards
Marc

@abzcoding

abzcoding commented Jun 18, 2021

Hi @vhnguyenae and @MMerzinger

@MMerzinger You're right, it seems that my policy files were not up to date; after updating them, the issue is fixed.

Just to be sure, I deleted Terrascan and installed it again, and I have no problems anymore.

brew uninstall terrascan
brew cleanup
rm -rf ~/.terrascan
brew install terrascan

@vhnguyenae

Thank you guys, after the init command it seems to be working fine for me :)
