
CVE-2020-8558: Node setting allows for neighboring hosts to bypass localhost boundary #92315

joelsmith opened this issue Jun 19, 2020 · 17 comments · Fixed by #92938

@joelsmith
Contributor

joelsmith commented Jun 19, 2020

CVSS Rating:

In typical clusters: medium (5.4) CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N

In clusters where the API server's insecure port has not been disabled: high (8.8) CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

A security issue was discovered in kube-proxy that allows adjacent hosts to reach TCP and UDP services bound to 127.0.0.1 running on the node or in the node's network namespace. For example, if a cluster administrator runs a TCP service on a node that listens on 127.0.0.1:1234, this bug could make that service reachable by other hosts on the same LAN as the node, or by containers running on the same node as the service. If the example service on port 1234 required no additional authentication (because it assumed that only other localhost processes could reach it), then it could be vulnerable to attacks that make use of this bug.
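
For illustration only (this example is not part of the original report): a localhost-only service of the kind described above can be as simple as the command below, and anything started this way that treats "reachable via localhost" as proof of trust is a potential target:

 # Hypothetical example: an unauthenticated HTTP server bound only to loopback on port 1234
 python3 -m http.server 1234 --bind 127.0.0.1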

The Kubernetes API Server's default insecure port setting causes the API server to listen on 127.0.0.1:8080 where it will accept requests without authentication. Many Kubernetes installers explicitly disable the API Server's insecure port, but in clusters where it is not disabled, an attacker with access to another system on the same LAN or with control of a container running on the master may be able to reach the API server and execute arbitrary API requests on the cluster. This port is deprecated, and will be removed in Kubernetes v1.20.

Am I vulnerable?

You may be vulnerable if:

  • You are running a vulnerable version (see below)

  • Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes

  • Your cluster allows untrusted pods to run containers with CAP_NET_RAW (the Kubernetes default is to allow this capability); a sketch of dropping the capability appears after this list.

  • Your nodes (or hostnetwork pods) run any localhost-only services which do not require any further authentication. To list services that are potentially affected, run the following commands on nodes:
      - lsof +c 15 -P -n -i4TCP@127.0.0.1 -sTCP:LISTEN
      - lsof +c 15 -P -n -i4UDP@127.0.0.1

    On a master node, an lsof entry like the following indicates that the API server may be listening on the insecure port:

COMMAND        PID  USER FD   TYPE DEVICE SIZE/OFF NODE NAME
kube-apiserver 123  root  7u  IPv4  26799      0t0  TCP 127.0.0.1:8080 (LISTEN)
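
Related to the CAP_NET_RAW condition above, here is a minimal sketch (the pod name and image are placeholders, not part of this advisory) of how a workload can explicitly drop the capability so that its containers cannot craft the raw packets this attack relies on:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-net-raw-example      # placeholder name
spec:
  containers:
  - name: app
    image: busybox              # placeholder image
    command: ["sleep", "3600"]
    securityContext:
      capabilities:
        drop: ["NET_RAW"]
EOF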

Affected Versions

  • kubelet/kube-proxy v1.18.0-1.18.3
  • kubelet/kube-proxy v1.17.0-1.17.6
  • kubelet/kube-proxy <=1.16.10

How do I mitigate this vulnerability?

Prior to upgrading, this vulnerability can be mitigated by manually adding an iptables rule on nodes. This rule will drop traffic addressed to 127.0.0.0/8 that does not originate on the node.

 iptables -I INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP

Additionally, if your cluster does not already have the API Server insecure port disabled, we strongly suggest that you disable it. Add the following flag to your Kubernetes API server command line: --insecure-port=0
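
A quick, hedged way to verify both mitigations on a node (this assumes the default insecure port of 8080; adjust if your configuration differs):

 # iptables -C exits 0 if the drop rule is already present
 iptables -C INPUT --dst 127.0.0.0/8 ! --src 127.0.0.0/8 -m conntrack ! --ctstate RELATED,ESTABLISHED,DNAT -j DROP && echo "drop rule present"

 # with --insecure-port=0 set, an unauthenticated request to the old insecure port should fail
 curl --max-time 2 http://127.0.0.1:8080/version || echo "insecure port closed"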

Detection

Packets on the wire with an IPv4 destination in the range 127.0.0.0/8 and a layer-2 destination MAC address of a node may indicate that an attack is targeting this vulnerability.
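
As an illustrative (not authoritative) way to watch for such traffic, a capture filter along the following lines could be run on a node's external interface (the interface name eth0 is an assumption):

 # capture packets arriving from the network that carry a loopback destination address
 tcpdump -ni eth0 'dst net 127.0.0.0/8 and not src net 127.0.0.0/8'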

Fixed Versions

Although the issue is caused by kube-proxy, the current fix for the issue is in kubelet (although future versions may have the fix in kube-proxy instead). We recommend updating both kubelet and kube-proxy to be sure the issue is addressed.

The following versions contain the fix:

  • kubelet/kube-proxy v1.18.4+
  • kubelet/kube-proxy v1.17.7+
  • kubelet/kube-proxy v1.16.11+

To upgrade, refer to the documentation: https://kubernetes.io/docs/tasks/administer-cluster/cluster-management/#upgrading-a-cluster

Additional Details

This issue was originally raised in issue #90259, which details how kube-proxy sets net.ipv4.conf.all.route_localnet=1, causing the system not to reject traffic to localhost that originates on other hosts.
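
As a quick diagnostic (not an official detection method), the current value of the setting can be read on a node with:

 # 1 means packets destined for 127.0.0.0/8 that arrive from the network are routed rather than dropped
 sysctl net.ipv4.conf.all.route_localnet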

IPv6-only services that bind to a localhost address are not affected. 

There may be additional attack vectors beyond those fixed by #91569 and its cherry-picks. For those attacks to succeed, the target service would need to be UDP, and the attacker could only send datagrams blindly, since no replies would be received. Finally, the target node would need to have reverse-path filtering disabled for an attack to have any effect. Work is ongoing to determine whether and how this issue should be fixed. See #91666 for up-to-date status on this issue.
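
To see whether reverse-path filtering is in effect on a node (per the kernel documentation, 0 = disabled, 1 = strict, 2 = loose), one can inspect the sysctls:

 sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter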

Acknowledgements

This vulnerability was reported by János Kövér of Ericsson, with additional impacts reported by Rory McCune of NCC Group, and by Yuval Avrahami and Ariel Zelivansky of Palo Alto Networks.

/area security
/kind bug
/committee product-security
/sig network
/sig node
/area kubelet

@joelsmith joelsmith added the kind/bug Categorizes issue or PR as related to a bug. label Jun 19, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 19, 2020
@k8s-ci-robot k8s-ci-robot added area/security committee/security-response Denotes an issue or PR intended to be handled by the product security committee. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node. area/kubelet and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jul 8, 2020
@joelsmith joelsmith changed the title Placeholder issue CVE-2020-8558: Node setting allows for neighboring hosts to bypass localhost boundary Jul 8, 2020
@tallclair
Member

For insecure port removal: #91506

@joelsmith
Contributor Author

The original announcement contained an error. The announcement makes it sound like upgrading kube-proxy will address the issue, but that is incorrect. I have edited the issue summary to add the following paragraph in the "Fixed Versions" section:

Although the issue is caused by kube-proxy, the current fix for the issue is in kubelet (although future versions may have the fix in kube-proxy instead). We recommend updating both kubelet and kube-proxy to be sure the issue is addressed.

Additionally, I have changed the components in listed versions to say kubelet/kube-proxy instead of kube-proxy.

@jlk

jlk commented Jul 8, 2020

Could you add numerical CVSS scores to these? "High" is too vague, frequently varies from vendor to vendor. I can calculate a score from the vector myself, but adding the score would save us some work...

@joelsmith
Contributor Author

Edited. Also, the CVSS vectors are clickable and will show the numerical scores too.

@cjellick
Contributor

cjellick commented Jul 8, 2020

To be clear, this is fixed in v1.18.4+, v1.17.7+, and v1.16.11+ correct?

@joelsmith
Contributor Author

Right. Sorry if the extra paragraph in "Fixed Versions" made that unclear.

@cjellick
Contributor

cjellick commented Jul 8, 2020

Got it. Thank you!

@mgalgs

mgalgs commented Jul 8, 2020

Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes

So just to be sure, this wouldn't be a problem in an EKS cluster running in its own VPC, right? (assuming no untrusted hosts on the same VPC)

@krmayankk
Contributor

Are all the conditions listed under "Am I vulnerable?" a logical AND, @joelsmith?

@champtar
Contributor

champtar commented Jul 9, 2020

Finally, the target node would need to have reverse-path filtering disabled

I think loose reverse-path filtering (rp_filter=2) is enough

Edit: after more testing, I think rp_filter=0 is indeed required

@champtar
Contributor

champtar commented Jul 9, 2020

Your cluster nodes run in an environment where untrusted hosts share the same layer 2 domain (i.e. same LAN) as nodes

So just to be sure, this wouldn't be a problem in an EKS cluster running in its own VPC, right? (assuming no untrusted hosts on the same VPC)

The attack can be done from any pod with CAP_NET_RAW (this is still the Kubernetes default), so you might not be safe.

@joelsmith a more accurate title: "Node setting allows to bypass localhost boundary"

@joelsmith
Contributor Author

We believe that of the four conditions, you might be affected if (1 and (2 or 3) and 4) are true for your cluster. Sorry that it's not super clear from the text. Trying and failing to convey info concisely and with precision makes me glad that I'm in software, not law.

@thockin
Member

thockin commented Jul 10, 2020

Isn't this broadly fixed by #91569 ? We're discussing better answers and options for deployment, too, but I am curious what the close-condition for this bug is?

@joelsmith
Contributor Author

@thockin I made a typo in the issue description saying that this is addressed by 81569 instead of 91569. I'll edit it now. Sorry about that. Yes, this is mostly fixed by #91569. I marked this as closed due to that PR fixing the majority of security issues caused by the net.ipv4.conf.all.route_localnet=1 setting. We can re-open if people think that we should address the remaining case that #91666 talks about.

@joelsmith
Contributor Author

Sorry, folks. I thought that this issue was already closed. Maybe that makes my response above to @thockin less confusing.

@cjellick
Contributor

Yes, this being open was one of the major points of confusion for us. Appreciate the clarification @thockin @joelsmith

@PushkarJ
Member

PushkarJ commented Dec 2, 2021

/label official-cve-feed

(Related to kubernetes/sig-security#1)
