Support private topology deployment behind http proxy #2481
It has been mentioned that supporting self-hosting of resources (#715) might help fill the same need. I agree that it partly does. In the context of an AWS VPC, in fact, I was able to get at the resources hosted on S3 by adding a VPC Endpoint for S3 and allowing access to the relevant buckets. Unfortunately, the only AWS service that provides a VPC Endpoint connection is S3 (presumably there will be others), but there is no predicting when an endpoint for API calls might be added. And I suppose in the general case, there may be other situations where the API provider needed by services is outside the private network.
Only one known sticking point in Kubernetes itself already had an issue open here: kubernetes/kubernetes#35186. This is the one where I needed the IP address no_proxy hack. Everything else seemed to use hostnames or just address the API server at one specific IP address, namely 100.64.0.1.
@DerekV ping me when you have a chance, this may be coming onto my plate. What about the cloud provider and such? Are you putting AWS API calls through a proxy?
We've gone through an exercise similar to @DerekV's.
AWS API calls have to go through the proxy. We haven't built a proper API in front of our solution (we use environment variables instead of kops configuration/flags), but we'll open a pull request to share our progress until it's finished.
Allow a kops cluster to operate behind a proxy by passing proxy configuration to addons and nodes. The API will change to be consistent with the rest of the project, but currently uses the environment variables `CLUSTER_HTTP_PROXY`, `CLUSTER_HTTPS_PROXY`, and `CLUSTER_NO_PROXY` available when and where kops is invoked. Relates to kubernetes#2481
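The commit message above names three environment variables read at the point where kops is invoked. A hypothetical invocation might look like the following (the variable names come from the commit message; the proxy address, exclusion list, and cluster name are illustrative placeholders):

```shell
# Proxy configuration picked up by kops at invocation time
# (addresses and cluster name below are made up for illustration).
export CLUSTER_HTTP_PROXY="http://proxy.internal:3128"
export CLUSTER_HTTPS_PROXY="http://proxy.internal:3128"
export CLUSTER_NO_PROXY="169.254.169.254,100.64.0.1,.internal"

kops update cluster --name my.cluster.example.com --yes
```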
@johnzeringue I've got a PR that I am working on.
Sorry, I had my head down trying to get our cluster into production. I should have responded to @chrislovecnm publicly.
Automatic merge from submit-queue

Add support for cluster using http forward proxy #2481

Adds support for running a cluster where access to external resources must be done through an http forward proxy. This adds a new element to the ClusterSpec, `EgressProxy`, and then sets up environment variables where appropriate. Access to API servers is additionally assumed to be done through the proxy; in particular this is necessary for AWS VPCs with private topology and egress by proxy (no NAT), at least until Amazon implements VPC Endpoints for the APIs. Additionally, see my notes in #2481.

TODOs
- [x] Consider editing files from nodeup rather than cloudup
- [x] Add support for RHEL
- [x] Validate on RHEL
- [x] ~Add support for CoreOS~ See #3032
- [x] ~Add support for vSphere~ See #3071
- [x] Minimize services affected
- [x] ~Support separate https_proxy configuration~ See #3069
- [x] ~Remove unvalidated proxy auth support (save for future PR)~ See #3070
- [x] Add documentation
- [x] Fill in some sensible default exclusions for the user, allow the user to extend this list
- [x] Address PR review comments
- [x] Either require port or handle nil
- [x] ~Do API validation (or file an issue for validation)~ See #3077
- [x] Add uppercase versions of proxy env vars to cover our bases
- [x] ~File an issue for unit tests~ 😬 See #3072
- [x] Validate cluster upgrades and updates
- [x] Remove ftp_proxy (nothing uses it)
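For reference, a cluster spec fragment using the `EgressProxy` element added by this PR might look like the following sketch. Field names follow the kops cluster spec documentation for HTTP forward proxy support; verify against your kops version, and note that the host, port, and excludes values here are illustrative:

```yaml
spec:
  egressProxy:
    httpProxy:
      host: proxy.internal
      port: 3128
    excludes: corp.local,internal.example.com
```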
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Support is in for this.
Currently, deploying to a private network without NAT, but with an available http proxy, will fail at a number of points.
I have coerced it to work with some light code modifications to thread environment variables and configuration files through to all the right places. I will link the code here once I have it cleaned up (hardcoded URLs removed, etc.) for reference and further review.
Meanwhile the question is, does kops want to take on the goal of supporting deployment behind http proxies? Even though we have run this way for a couple of years now with only minor annoyance, I realized a number of implications of this configuration while trying to get it to work with kops. Here's the rub: to really do it right, I believe some discipline will need to be maintained around ensuring proxy support is propagated correctly to everywhere HTTP calls are made (including, for example, AWS or other provider API calls).
Generally, the problem is that http_proxy support must be implemented by every http client, and there is no universal, standard way to inject http_proxy configuration. Go's net/http will respect the `http_proxy`, `https_proxy`, and `no_proxy` environment variables in more or less the same way that wget and curl interpret them, and this is how I was able to get it to work. The problem is that if you just set `http_proxy`, http clients will try to forward every single http request to the proxy, even requests for things located in the same private subnet as Kubernetes, on the overlay network itself, or at some other private IP address. It does this regardless of whether the http request uses a hostname or an IP address.
The `no_proxy` environment variable allows you to exclude hostnames and hostname suffixes. It matches on hostname suffix only, e.g. `shoes.example.com,.mycorp.com,.mynet.local` would exclude `privateserver.mynet.local`. So really the core issue behind the core issue, which for some reason never seemed weird to me until now, is that IP addresses are written most significant "domain" first, while hostnames write it last, and the `no_proxy` convention assumes only hostnames. It has no provision to handle IP addresses, where the prefix is the most significant part, nor does it understand ranges, globs, or CIDR notation. This holds for curl, wget, and by extension, Go. I'm guessing getting all three to change would not be an easy mountain to climb. I have been able to hack around this by excluding all IP address URLs, by including every possible final octet in `no_proxy`: `.0,.1,.2` ... `.255`. But this feels like a giant hack and is likely fragile (someone may need to address an external server by IP address at some point, which would then break this hack). So environment variables might not be the best answer ultimately. Perhaps maintaining a light wrapper around net/http would be a better approach.
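The every-final-octet hack described above can be generated mechanically. A sketch (the `.mynet.local` suffix is an illustrative placeholder):

```shell
# Build a no_proxy value that excludes every possible final octet, so
# URLs that address hosts by bare IP bypass the proxy -- the fragile
# hack described above.
NOPROXY=".mynet.local"
for i in $(seq 0 255); do
  NOPROXY="${NOPROXY},.${i}"
done
echo "${NOPROXY}"
```

This works because a suffix like `.5` matches any host whose name ends in `.5`, which for bare-IP URLs means any address with final octet 5; the fragility is that it also matches external hostnames ending in a number.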
Here is a rough list of places I modified in the process of getting it to work:
- `/etc/environment` (can be done in cloud-init)
- `/etc/default/docker` for the docker daemon (can be done in cloud-init)
- `echo "Acquire::http::Proxy \"http://${PROXY}\";" > /etc/apt/apt.conf.d/30proxy` (can be done in cloud-init)
- `echo DefaultEnvironment=http_proxy=${PROXY} https_proxy=${PROXY} ftp_proxy=${PROXY} no_proxy=${NOPROXY} >> /etc/systemd/system.conf` (can be done in cloud-init)
- `nodeup/pkg/model/protokube.go` to pass along environment variables (next to the s3 environment variables already being passed here)