http_proxy configured for AppRepository is not used during deployment #1962
Comments
Awesome work reproducing the issue! Note that the same issue can happen with other things too. IMO the cleanest solution would be to create a pod using the syncJobTemplate to download and install the chart, but that's quite a big change, so I'm curious what you will propose.
In this case it came up when people were trying to create a hands-on lab where Kubeapps is installed and configured by the user to use a specific app repository which, out of necessity, requires a proxy. So including instructions for users to manually edit the deployment and add 3 non-trivial env vars wouldn't be great for the experience.
I'm planning on updating …
That will probably work if we have access to the whole app repo (even though it is a bit tricky to just look for …)
Yeah, I don't see the nodeSelector scenario as an issue right now. We already send the expected auth headers if they are configured for an app repository, so it makes sense to use the proxy. I agree with your longer-term ideal: if users can specify that the charts can only be synced/fetched from a specific node, then we should be fetching (not just syncing) from the same node in an async job or similar (which should be quick enough, and only required if a nodeSelector is present, etc.).
Description:
Installing a chart from a proxied app repository fails.
When correctly configured (with a syncJobPodTemplate defining the `http_proxy` env var appropriately), the sync jobs succeed and populate the catalog correctly. But when deploying a chart from that catalog, the backend tries to fetch the chart and fails as though it is not using the proxy.

This was reported in an actual demo environment, but I managed to reproduce locally as follows:
Steps to reproduce the issue:
Check which docker network you are using and assign `KIND_BRIDGE` to the matching interface listed in `ip link show`, then add iptables rules (replacing `docker0` if necessary, as in the sketch below) so that any packets coming from the `docker0` interface destined for IP addresses associated with charts.bitnami.com are dropped (check the current IP addresses with `host charts.bitnami.com`). Verify that you can't deploy Apache from the bitnami catalog and that an app repo refresh fails.
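A minimal sketch of the blocking rules, assuming the kind cluster is attached to `docker0`:

```sh
# Bridge interface of the docker network the kind cluster uses
# (check `ip link show`; replace docker0 if kind uses a custom network).
KIND_BRIDGE=docker0

# Drop packets from the kind network destined for charts.bitnami.com,
# resolving the current IPv4 addresses with `host`.
for ip in $(host charts.bitnami.com | awk '/has address/ {print $4}'); do
  sudo iptables -I DOCKER-USER -i "$KIND_BRIDGE" -d "$ip" -j DROP
done
```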
Run tinyproxy on a different docker network:
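A sketch of one way to do this; the network name is arbitrary and the image name is an assumption (any tinyproxy image that listens on the default port 8888 and allows non-local clients will do):

```sh
# Create a separate docker network and run tinyproxy on it.
docker network create proxynet
docker run -d --name tinyproxy --network proxynet vimagick/tinyproxy
```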
Check the IP address of the container using `docker inspect <...>` and then verify that the proxy works from your host (see the sketch below); you should see logs in tinyproxy as the request gets proxied. Next, we need to edit iptables again to allow traffic between the docker network on which the kind cluster is running and the network on which tinyproxy is running (also in the sketch below):
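A sketch of both steps, assuming `docker inspect` showed the tinyproxy container at 172.19.0.2 and that the two networks use the `docker0` and a `br-…` bridge respectively (both interface names are assumptions; check `ip link show`):

```sh
# Verify the proxy from the host; tinyproxy's logs should show the request.
curl --proxy http://172.19.0.2:8888 https://charts.bitnami.com/bitnami/index.yaml | head

# Allow forwarding between the kind network and the tinyproxy network.
# DOCKER-USER is evaluated before Docker's inter-network isolation rules,
# so an ACCEPT here permits the cross-network traffic.
sudo iptables -I DOCKER-USER -i docker0 -o br-XXXXXXXX -j ACCEPT
sudo iptables -I DOCKER-USER -i br-XXXXXXXX -o docker0 -j ACCEPT
```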
Then verify that you can use the proxy from a kubeapps pod:
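A sketch, again assuming the example proxy address; `<some-kubeapps-pod>` is a placeholder for any pod in the kubeapps namespace that has `wget` available:

```sh
# Exec into a Kubeapps pod and fetch the repo index via the proxy.
kubectl -n kubeapps exec -it <some-kubeapps-pod> -- \
  sh -c 'https_proxy=http://172.19.0.2:8888 wget -qO- https://charts.bitnami.com/bitnami/index.yaml | head'
```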
Edit the app repository in Kubeapps and add the following to the custom sync job template:
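A minimal sketch of such a template, assuming the tinyproxy container IP from the earlier `docker inspect` (172.19.0.2 here is an example) and the usual `spec.containers[].env` shape of an AppRepository `syncJobPodTemplate`:

```yaml
spec:
  containers:
    - env:
        - name: http_proxy
          value: http://172.19.0.2:8888
        - name: https_proxy
          value: http://172.19.0.2:8888
        - name: no_proxy
          value: 127.0.0.1,localhost,.svc,.cluster.local
```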
Then submit (you need to wait for the validation to fail and then force it; this would be easier via a CLI).
Verify that the new sync job ran and completed without issue (i.e. the proxy is used when syncing the charts for the repo) with `kubectl -n kubeapps get po`.
Deploy apache from the repo: go to the catalog, select `apache` from the bitnami catalog, and deploy.

Describe the results you received:
When the chart is fetched for deployment, the proxy is not used and so the deployment fails with:
Describe the results you expected:
If an http proxy is configured for the repo so that Kubeapps imports and displays its charts in the catalog, then the proxy should also be used to fetch the chart when deploying.
Additional information you deem important (e.g. issue happens only occasionally):
I suspected that this had never been implemented, but after meticulously reproducing it in my local env, I searched Kubeapps issues and saw that it had come up before at #696 (comment), with a patch added to tiller-proxy in #1097. That doesn't actually fix the issue, though, as it relies on manually setting the `http_proxy` env var of the backend pod (in that case, tiller-proxy). So I do indeed suspect this has never been implemented :/

Restore your `iptables` by deleting the above rules, or delete all rules for the DOCKER-USER chain as in the sketch below.
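Assuming the rules were added to DOCKER-USER as above:

```sh
# Flush all rules from the DOCKER-USER chain and restore Docker's
# default single RETURN rule.
sudo iptables -F DOCKER-USER
sudo iptables -A DOCKER-USER -j RETURN
```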