
Can't open a shell on a pod from behind a corporate proxy #6687

Open
tetsuhaut opened this issue Dec 1, 2022 · 11 comments
Labels
area/terminal, bug (Something isn't working)

Comments

tetsuhaut commented Dec 1, 2022

Describe the bug
I am behind a corporate proxy.
When trying to open a shell on a pod from my system console (outside of Lens), it works:

>kubectl -n <some namespace> --context=<some context> exec -it <some pod name> -- bash
root@<some pod name>:/app#

When trying to connect a shell to a pod from Lens, it displays

Connecting...

Then it falls back to my system shell prompt. If I run the command above from that shell, the following error is displayed:

error: unable to upgrade connection: error dialing backend: dial tcp <some IP address>:443: i/o timeout

My kubeconfig

apiVersion: v1
kind: Config
preferences: {}
current-context: <some context name>
clusters:
- cluster:
    certificate-authority-data: <some certificate>
    server: <some server in the cloud>
  name: <some cluster name>
contexts:
- context:
    cluster: <some cluster name>
    namespace: <some namespace>
    user: <some user>
  name: <some profile name>
users:
- name: <some user>
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - --region
      - eu-west-3
      - eks
      - get-token
      - --cluster-name
      - <some cluster name>
      - --ca-bundle
      - /C/Users/<some user id>/certs/zscaler_root_ca.cer
      command: C:\softs\Amazon\AWSCLIV2\aws.exe
      env:
      - name: AWS_PROFILE
        value: <some aws profile name>
      interactiveMode: IfAvailable
      provideClusterInfo: false

Additional context
The kubectl commands run from my desktop shell work behind the corporate proxy because I have set the http_proxy, https_proxy, no_proxy and AWS_CA_BUNDLE environment variables.
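For reference, a sketch of how these variables are set before running kubectl/aws, assuming a POSIX-like shell such as Git Bash (the proxy host and port are placeholders; the CA bundle path is the one from the kubeconfig above):

# placeholders, adjust to your corporate setup
export http_proxy=http://<corporate proxy host>:<port>
export https_proxy=http://<corporate proxy host>:<port>
export no_proxy=localhost,127.0.0.1
export AWS_CA_BUNDLE=/C/Users/<some user id>/certs/zscaler_root_ca.cer

# with these set, the exec test above works from the desktop shell
kubectl -n <some namespace> --context=<some context> exec -it <some pod name> -- bash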

It seems that Lens does not use the same configuration as my local kubectl/aws command-line tools.
Thanks for your help!

Nokel81 added the bug and area/terminal labels on Dec 5, 2022

Nokel81 commented Dec 5, 2022

If you open up a local terminal within Lens and then run the env command, are all of your environment variables there as expected?
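For example, something along these lines in the Lens local terminal (a sketch, assuming a POSIX-like shell; the grep filter is just a convenience):

env | grep -iE 'proxy|kubeconfig|aws'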


tetsuhaut commented Dec 16, 2022

A few additional variables are added to the env by Lens: APP_HTTPS_PROXY (same value as my HTTPS_PROXY), CHROME_CRASHPAD_PIPE_NAME, KUBECONFIG, LENS_SESSION, ORIGINAL_XDG_CURRENT_DESKTOP, PTYPID, PTYSHELL, TERM_PROGRAM, TERM_PROGRAM_VERSION, WSLENV.

Looking at the value of KUBECONFIG, it points to a kubectl configuration file created by Lens, I guess, which changes the cluster server URL and removes the certificate-authority-data parameter. This custom config is probably the reason for the error: the same command that fails in the console opened by Lens succeeds in my own console, which uses my kubectl config file with the right server URL and certificate-authority-data value.
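A quick way to inspect the config Lens generated for the shell session (a sketch, assuming a POSIX-like shell in the Lens terminal; --raw makes kubectl config view print certificate data unredacted):

echo "$KUBECONFIG"
kubectl config view --kubeconfig "$KUBECONFIG" --raw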

@tetsuhaut (Author)

After adding the certificate-authority-data variable and its value to the Lens-specific kubectl config file, the connection to the pod's shell works perfectly. Therefore I suggest that Lens add the certificate-authority-data variable, when it is found in the local kubectl config file, to its generated config file.
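A sketch of the workaround, for clarity (placeholders; the certificate-authority-data value is copied verbatim from the original kubeconfig into the cluster entry of the Lens-generated file):

clusters:
- cluster:
    certificate-authority-data: <some certificate>
    server: <server URL written by Lens>
  name: <some cluster name>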


Nokel81 commented Dec 16, 2022

kubectl shouldn't be trying to go through a proxy at that point. We set the NO_PROXY env var to include localhost and 127.0.0.1. When you open a local shell session, what is the value of that env var?

@tetsuhaut (Author)

Yes, NO_PROXY is present and contains localhost, 127.0.0.1, and some more IPs related to my company.


Nokel81 commented Dec 16, 2022

If you run kubectl proxy -p 8080 in one terminal and then create a temp config with the following shape:

current-context: <some context name>
clusters:
- name: <some context name>
  cluster:
    server: http://localhost:8080
users:
- name: proxy
contexts:
- name: <some context name>
  context:
    cluster: <some context name>
    user: proxy

and then, with KUBECONFIG pointing at that temp config, run NO_PROXY=localhost,127.0.0.1,$NO_PROXY kubectl exec -it <some-pod-name> -- bash

What do you see?
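Put together, the experiment would look roughly like this (a sketch; /tmp/proxy-kubeconfig is a hypothetical path for the temp config above):

# terminal 1: proxy to the API server using the regular kubeconfig
kubectl proxy -p 8080

# terminal 2: exec through the local proxy using the temp config
NO_PROXY=localhost,127.0.0.1,$NO_PROXY KUBECONFIG=/tmp/proxy-kubeconfig kubectl exec -it <some-pod-name> -- bash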

@tetsuhaut (Author)

If the temp config contains the certificate-authority-data variable, a shell is opened on the pod.
If the temp config does not contain certificate-authority-data, I get the following error message:
Unable to connect to the server: x509: certificate signed by unknown authority

@tetsuhaut (Author)

My corporate proxy modifies the SSL certificate chain by replacing the root Certification Authority (CA)'s signature with its own CA certificate, which I provide to kubectl through the value of certificate-authority-data.
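One way to confirm this (a sketch; assumes openssl 1.1.0+ for the -proxy option, and the proxy host and port are placeholders): connect to the API server through the proxy and check the issuer of the presented chain, which should show the corporate (Zscaler) CA rather than the cloud provider's CA.

openssl s_client -connect <api server host>:443 -proxy <corporate proxy host>:<port> -showcerts </dev/null | grep -i issuer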


Nokel81 commented Dec 16, 2022

Interesting, thanks for doing that experiment. I wonder why we are even hitting your corporate proxy...

@tetsuhaut (Author)

Lens runs on my machine, which uses the proxy as a system proxy. Kubernetes runs in the cloud. The proxy is between the two :)


Nokel81 commented Dec 16, 2022

Yes, but I would have suspected that kubectl proxy would pick up the CA for that leg of the communication and the kubectl exec wouldn't need to know about it.
