http_proxy configured for AppRepository is not used during deployment #1962

Closed
absoludity opened this issue Aug 25, 2020 · 4 comments · Fixed by #1966
@absoludity
Contributor

absoludity commented Aug 25, 2020

Description:

Installing a chart from a proxied app repository fails.

When correctly configured (with a syncJobPodTemplate defining the http_proxy env var appropriately) the sync jobs succeed and populate the catalog correctly. But when deploying a chart from that catalog, the backend tries to fetch the chart and fails as though it is not using the proxy.
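
For concreteness, a correctly configured repository of this kind might look like the following AppRepository resource (a sketch: the repo URL matches the one used in this issue, while the name, namespace, and proxy address are example values):

```yaml
apiVersion: kubeapps.com/v1alpha1
kind: AppRepository
metadata:
  name: bitnami
  namespace: kubeapps
spec:
  url: https://charts.bitnami.com/bitnami
  # The sync job picks up this proxy, but (per this issue) the backend
  # does not use it when fetching the chart at deployment time.
  syncJobPodTemplate:
    spec:
      containers:
        - env:
            - name: http_proxy
              value: http://172.17.0.2:8888
```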

This was reported in an actual demo environment, but I managed to reproduce locally as follows:

Steps to reproduce the issue:

  1. Deploy Kubeapps in Kind.
  2. Add the following iptables rules to drop packets sent from docker to charts.bitnami.com:

Check which docker network kind is using and assign KIND_BRIDGE to the matching interface listed in ip link show (this may be docker0 on some setups), then:

KIND_BRIDGE=br-71c284006360
sudo iptables --insert DOCKER-USER --in-interface $KIND_BRIDGE --destination 13.33.67.10 --jump DROP
sudo iptables --insert DOCKER-USER --in-interface $KIND_BRIDGE --destination 13.33.67.73 --jump DROP
sudo iptables --insert DOCKER-USER --in-interface $KIND_BRIDGE --destination 13.33.67.88 --jump DROP
sudo iptables --insert DOCKER-USER --in-interface $KIND_BRIDGE --destination 13.33.67.102 --jump DROP

so that any packets coming from the kind bridge interface destined for IP addresses associated with charts.bitnami.com are dropped (check the current IP addresses with host charts.bitnami.com).

  3. Verify that you can hit charts.bitnami.com locally, but not from one of the pods:
curl -sI https://charts.bitnami.com/bitnami/index.yaml | head -n1
HTTP/2 200
kubectl -n kubeapps exec -ti deployment/kubeapps -- curl -I https://charts.bitnami.com/bitnami/index.yaml
...
curl: (7) Failed to connect to charts.bitnami.com port 443: Connection timed out
command terminated with exit code 7

Verify that you can't deploy Apache from the bitnami catalog and that an app repo refresh fails:

kubectl -n kubeapps logs apprepo-kubeapps-sync-bitnami-v2rgg-p4k8n
time="2020-08-25T02:30:52Z" level=error msg="error requesting repo index" error="Get https://charts.bitnami.com/bitnami/index.yaml: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" url="https://charts.bitnami.com/bitnami/index.yaml"
time="2020-08-25T02:30:52Z" level=fatal msg="Get https://charts.bitnami.com/bitnami/index.yaml: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  4. Add a proxy to the app repository configuration

Run tinyproxy on a different docker network:

docker run vimagick/tinyproxy

Check the IP address of the container using docker inspect <...> and then verify that the proxy works from your host, for example:

curl --proxy http://172.17.0.2:8888 -sI https://charts.bitnami.com/bitnami/index.yaml

You should see the request being proxied in the tinyproxy logs. Next we need to edit iptables again to allow traffic between the docker network on which the kind cluster is running and the network on which tinyproxy is running:

export DEFAULT_BRIDGE=docker0
sudo iptables --insert DOCKER-ISOLATION-STAGE-2 --in-interface $KIND_BRIDGE --out-interface $DEFAULT_BRIDGE   --jump ACCEPT
sudo iptables --insert DOCKER-ISOLATION-STAGE-2 --in-interface $DEFAULT_BRIDGE --out-interface $KIND_BRIDGE   --jump ACCEPT

Then verify that you can use the proxy from a kubeapps pod:

kubectl -n kubeapps exec -ti deployment/kubeapps -- curl --proxy http://172.17.0.2:8888 -I https://charts.bitnami.com/bitnami/index.yaml
  5. Add the proxy to the app repository settings

Edit the app repository in Kubeapps and add the following to the custom sync job template:

spec:
  containers:
    - env:
      - name: http_proxy
        value: http://172.17.0.2:8888

and submit (you need to wait for the validation to fail and then force the submission; this would be easier via a CLI).

  6. Verify that the sync job works
    Verify that the new sync job completed without issue (i.e. the proxy is used when syncing the charts for the repo) with kubectl -n kubeapps get po

  7. Deploy apache from the repo

Go to the catalog and select apache from the bitnami catalog and deploy.

Describe the results you received:

When the chart is fetched for deployment, the proxy is not used and so the deployment fails with:

[Screenshot: deploy-doesnt-use-proxy, showing the failed deployment]

Describe the results you expected:

If an http proxy is configured for the repo so that Kubeapps imports and displays these charts in the catalog, then the proxy should be used to fetch the chart when deploying also.

Additional information you deem important (e.g. issue happens only occasionally):

I suspected that this had never been implemented, but after meticulously reproducing it in my local env, I searched the Kubeapps issues and saw it had come up before at #696 (comment), with a patch added to tiller-proxy in #1097. That patch doesn't actually fix the issue, though, as it relies on manually setting the http_proxy env var of the backend pod (in that case, tiller-proxy). So I do indeed suspect this has never been implemented :/

Restore your iptables by deleting the rules added above; you can delete all of the inserted rules with:

sudo iptables -L DOCKER-USER # to check
sudo iptables --delete DOCKER-USER 4
sudo iptables --delete DOCKER-USER 3
sudo iptables --delete DOCKER-USER 2
sudo iptables --delete DOCKER-USER 1

sudo iptables -L DOCKER-ISOLATION-STAGE-2 # to check
sudo iptables --delete DOCKER-ISOLATION-STAGE-2 2
sudo iptables --delete DOCKER-ISOLATION-STAGE-2 1
@project-bot project-bot bot added this to Inbox in Kubeapps Aug 25, 2020
@andresmgot
Contributor

Awesome work reproducing the issue! Note that the same issue can happen with other settings like the NodeSelector: sync jobs will be created on the correct node, but kubeops will then try to download the chart anyway. People usually hit this issue needing to set the proxy/selector in just one place, so they modify the kubeops deployment; that's why it has not been implemented before.

IMO the cleanest solution would be to create a pod using the syncJobTemplate to download and install the chart, but that's quite a big change, so I am curious about what you will propose.

@absoludity
Contributor Author

> People usually hit this issue needing to set the proxy/selector in just one place, so they modify the kubeops deployment; that's why it has not been implemented before.

In this case it came up when people were trying to create a hands-on lab where Kubeapps is installed and configured by the user to use a specific app repository which, out of necessity, requires a proxy. So including instructions for users to manually edit the deployment and add three non-trivial env vars wouldn't be great for the experience.

> IMO the cleanest solution would be to create a pod using the syncJobTemplate to download and install the chart, but that's quite a big change, so I am curious about what you will propose.

I'm planning on updating kube.InitNetClient(), which already receives the app repo as an arg, so that it returns a client whose transport has the same proxy set as the app repo (i.e. the same proxy as used for the sync job). That means we don't need to worry about the no_proxy options, as the client is just used when parsing and getting the chart (need to double-check, but that's the approach in outline). Let me know if you see any issues, or want to have a go, or whatever, otherwise I'll be trying that tomorrow.

@andresmgot
Contributor

> I'm planning on updating kube.InitNetClient(), which already receives the app repo as an arg, so that it returns a client whose transport has the same proxy set as the app repo (i.e. the same proxy as used for the sync job). That means we don't need to worry about the no_proxy options, as the client is just used when parsing and getting the chart (need to double-check, but that's the approach in outline). Let me know if you see any issues, or want to have a go, or whatever, otherwise I'll be trying that tomorrow.

That will probably work if we have access to the whole app repo (even though it is a bit tricky to just look for http_proxy or https_proxy in the pod template). It won't work for the NodeSelector scenario, but I'm okay doing that as a simple solution.

@absoludity
Contributor Author

Yeah, I don't see the nodeSelector scenario as an issue right now. We already send the expected auth headers if they are configured for an app repository, so it makes sense to use the proxy too. I agree with your longer-term ideal: if users can specify that charts may only be synced/fetched from a specific node, then we should be fetching (not just syncing) from that same node in an async job or similar (which should be quick enough, and is only required if a nodeSelector is present, etc.).

@absoludity absoludity moved this from Inbox to Waiting For Review in Kubeapps Aug 26, 2020
Kubeapps automation moved this from Waiting For Review to Done Aug 26, 2020