Kubelet tries to use system-wide http_proxy setting for communicating with local pods #48792

Closed
ananace opened this issue Jul 12, 2017 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/network Categorizes an issue or PR as relevant to SIG Network. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

ananace commented Jul 12, 2017

/kind bug

What happened:
An additional proxy server was set up for our internal Kubernetes cluster, replacing one that had been running on an internet-accessible Kubernetes node.
This means that the new proxy no longer has access to the Kubernetes CIDRs, which the old one did.

At this point, liveness and readiness checks started failing for all pods, corresponding to entries like the following in the proxy access log:

1499844148.511    998 10.133.0.10 TCP_MISS_ABORTED/000 0 GET http://10.254.73.2:10254/healthz - HIER_DIRECT/10.254.73.2 -

What you expected to happen:
Kubelet ignores proxy settings when trying to communicate with pods hosted on the local machine.

How to reproduce it (as minimally and precisely as possible):

$ cat /etc/systemd/system.conf.d/01-proxy-environment.conf 
[Manager]
DefaultEnvironment="http_proxy=http://proxy-server.localdomain:3128"
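
One way to confirm that kubelet actually inherits this variable from the systemd manager (a quick sketch; the output shown is simply what the drop-in above would produce):

$ sudo cat /proc/$(pidof kubelet)/environ | tr '\0' '\n' | grep -i proxy
http_proxy=http://proxy-server.localdomain:3128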

Environment:

  • Kubernetes version (use kubectl version): v1.5.2
  • Cloud provider or hardware configuration: Bare-metal
  • OS (e.g. from /etc/os-release): CentOS Linux release 7.3.1611 (Core)
  • Kernel (e.g. uname -a): 3.10.0-514.26.2.el7.x86_64
@k8s-ci-robot k8s-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jul 12, 2017
@k8s-github-robot

@Ace13 There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
e.g., @kubernetes/sig-api-machinery-* for API Machinery
(2) specifying the label manually: /sig <label>
e.g., /sig scalability for sig/scalability

Note: method (1) will trigger a notification to the team. You can find the team list here and label list here

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 12, 2017
@xiangpengzhao
Contributor

/sig node
/sig network

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. sig/network Categorizes an issue or PR as relevant to SIG Network. labels Jul 12, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jul 12, 2017
@cmluciano

@kubernetes/sig-network-misc

@duan-yue
Contributor

For the livenessProbe at least, I find the proxy is set here; it only reads NO_PROXY and excludes the IPs listed in NO_PROXY. So we could exclude all hosts that fall within the same CIDR as the k8s cluster. If you think this solution is feasible, I would like to submit such a patch. @xiangpengzhao @cmluciano
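
Purely as an illustration of the approach described above (a sketch, not the actual kubelet code; the helper name cidrAwareProxy and the probe URL are made up for the example), a Go Transport proxy function could bypass the proxy for request hosts that fall inside any CIDR listed in NO_PROXY and otherwise defer to http.ProxyFromEnvironment:

package main

import (
	"fmt"
	"net"
	"net/http"
	"net/url"
	"os"
	"strings"
)

// cidrAwareProxy wraps http.ProxyFromEnvironment: it bypasses the proxy when
// the request host is an IP that falls inside any CIDR listed in NO_PROXY,
// and otherwise defers to the normal environment-based proxy selection.
func cidrAwareProxy(req *http.Request) (*url.URL, error) {
	if ip := net.ParseIP(req.URL.Hostname()); ip != nil {
		noProxy := os.Getenv("NO_PROXY")
		if noProxy == "" {
			noProxy = os.Getenv("no_proxy")
		}
		for _, entry := range strings.Split(noProxy, ",") {
			if _, cidr, err := net.ParseCIDR(strings.TrimSpace(entry)); err == nil && cidr.Contains(ip) {
				return nil, nil // nil, nil tells the Transport to connect directly
			}
		}
	}
	return http.ProxyFromEnvironment(req)
}

func main() {
	client := &http.Client{Transport: &http.Transport{Proxy: cidrAwareProxy}}
	// Probe a pod IP; the address is taken from the access-log line in this issue.
	resp, err := client.Get("http://10.254.73.2:10254/healthz")
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("probe status:", resp.Status)
}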

ananace commented Jul 17, 2017

If kubelet is able to append the cluster CIDR to its NO_PROXY variable itself, then that would definitely solve the issue for me.

Certainly more appropriate than my current workaround of running a tainted k8s node on the proxy server.

@duan-yue
Contributor

It seems that we cannot get the CIDR through the API, per issue #25533. Sad.

2rs2ts commented Nov 27, 2017

FYI it is not possible to use a CIDR in NO_PROXY. https://unix.stackexchange.com/questions/23452/set-a-network-range-in-the-no-proxy-environment-variable

2rs2ts commented Nov 27, 2017

Sorry, I am incorrect. It does work for kubelet, but not for curl. I figured my tests with curl were sufficient.

liggitt commented Jan 14, 2018

> If kubelet is able to append the cluster CIDR to its NO_PROXY variable itself, then that would definitely solve the issue for me.

using the configured proxy settings as-is is correct. kubelet cannot assume you don't want to go through the configured proxy in order to reach the pod network.

> FYI it is not possible to use a CIDR in NO_PROXY. https://unix.stackexchange.com/questions/23452/set-a-network-range-in-the-no-proxy-environment-variable
kubernetes components support CIDR in NO_PROXY. See #23003
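
For anyone hitting this with the same setup as in the report above, a minimal sketch of the reproduction drop-in extended with a no_proxy entry (the 10.254.0.0/16 CIDR is an assumption based on the pod IP in the log line; substitute your cluster's pod/service CIDRs). Kubelet understands the CIDR entry, but plain tools such as curl will not:

$ cat /etc/systemd/system.conf.d/01-proxy-environment.conf 
[Manager]
DefaultEnvironment="http_proxy=http://proxy-server.localdomain:3128" "no_proxy=10.254.0.0/16,localhost,127.0.0.1"

Kubelet needs to be restarted (and the manager configuration re-read, e.g. with systemctl daemon-reexec) before it sees the new variable.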

@liggitt liggitt closed this as completed Jan 14, 2018
jayunit100 commented Mar 11, 2019

> kubelet cannot assume you don't want to go through the configured proxy in order to reach the pod network.

@liggitt

  1. It makes sense that right now we cannot do this.

  2. But... in the future, could we make the proxy more configurable? An example of where it would be nice to have this bifurcated traffic: some proxies are slow. I recently saw some sputtering performance in a control plane and, although I never confirmed it, I noticed that this might be correlated with air-gapped Kubernetes clusters that have proxying turned on...
