Can't connect to DO cluster: http proxy error #699

Open
xavierfnk opened this issue Aug 17, 2020 · 22 comments
Labels
area/linux bug Something isn't working

Comments

@xavierfnk

xavierfnk commented Aug 17, 2020

Describe the bug
Hello everyone, I cannot connect to my DigitalOcean cluster. After selecting my kubeconfig and trying to connect to the cluster, I get the error message "http: proxy error: getting credentials: exec: exit status 1".

To Reproduce

  1. Start Lens
  2. Click on the cluster icon to try to connect

Expected behavior
As explained in various tutorials, no further configuration is needed, so it should just work.

Screenshots

Environment (please complete the following information):

  • Lens Version: 3.5.3
  • OS: Ubuntu 20.04
  • Installation method: snap

Logs:

error: Failed to connect to cluster do-fra1-k8s-dev: {"name":"StatusCodeError","statusCode":502,"message":"502 - \"getting credentials: exec: exit status 1\"","error":"getting credentials: exec: exit status 1","options":{"json":true,"timeout":10000,"headers":{"host":"3cd200fd-7272-4999-8507-34815abcce79.localhost:38215"},"uri":"http://127.0.0.1:38215/api-kube/version","simple":true,"resolveWithFullResponse":false,"transform2xxOnly":false},"response":{"statusCode":502,"body":"getting credentials: exec: exit status 1","headers":{"content-type":"text/plain","date":"Mon, 17 Aug 2020 09:43:18 GMT","connection":"close","transfer-encoding":"chunked"},"request":{"uri":{"protocol":"http:","slashes":true,"auth":null,"host":"127.0.0.1:38215","port":"38215","hostname":"127.0.0.1","hash":null,"search":null,"query":null,"pathname":"/api-kube/version","path":"/api-kube/version","href":"http://127.0.0.1:38215/api-kube/version"},"method":"GET","headers":{"host":"[hidden].localhost:38215","accept":"application/json"}}}}

Kubeconfig:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: [hidden]
    server: https://[hidden].k8s.ondigitalocean.com
  name: do-fra1-k8s-dev
contexts:
- context:
    cluster: do-fra1-k8s-dev
    user: do-fra1-k8s-dev-admin
  name: do-fra1-k8s-dev
current-context: do-fra1-k8s-dev
kind: Config
preferences: {}
users:
- name: do-fra1-k8s-dev-admin
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - [hidden]
      command: doctl
      env: null

Thanks in advance for your help!

@nevalla
Contributor

nevalla commented Aug 25, 2020

You can try using the full path to doctl in the kubeconfig.
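
For example, the exec section of the user entry could point at the snap binary (a rough sketch based on the kubeconfig above; /snap/bin/doctl assumes doctl was installed via snap):

    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      # args stay exactly as above; only the command changes to the full path
      command: /snap/bin/doctl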

@xavierfnk
Author

Using /snap/bin/doctl instead of just doctl is giving me the same error.

@nevalla
Contributor

nevalla commented Aug 25, 2020

@jakolehm do you know whether there are still some snap-related issues regarding this?

@xavierfnk
Author

I also tried running the command doctl kubernetes cluster kubeconfig exec-credential --version=v1beta1 --context=default ***-***-*** to see the output:

{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","spec":{},"status":{"expirationTimestamp":"2020-08-31T14:06:33Z","token":"****************"}}

Looks like there isn't any problem with this.

@jakolehm
Contributor

@nevalla no known issues. @Ridzu95 does it work if you start Lens from a terminal where kubectl works?

@xavierfnk
Author

@jakolehm The logs I mentioned in my first message are the ones I get as output when running from a terminal. So no, unfortunately it doesn't work.

@nevalla
Contributor

nevalla commented Aug 25, 2020

I got this reproduced and I think I found the reason too. When Lens is opened, snap sets the XDG_CONFIG_HOME env variable to point to Lens's snap sandbox:

declare -x XDG_CONFIG_HOME="/home/parallels/snap/kontena-lens/110/.config"

and doctl uses that env var to determine its default config file:

  -c, --config string         Specify a custom config file (default "/home/parallels/snap/kontena-lens/110/.config/doctl/config.yaml")

And the issue is that there is no access token in that config file.
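
You can also check this from the Lens built-in terminal (a quick sanity check, assuming that terminal inherits the snap environment):

$ echo $XDG_CONFIG_HOME   # points into the Lens snap sandbox, not ~/.config
$ doctl account get       # fails, because the sandboxed config has no access token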

I think the workaround is to add --config=/path/to/doctl/config.yaml to the kubeconfig file and re-add the cluster to Lens, roughly as sketched below. Alternatively, you can run doctl auth init in the Lens terminal if you have some working cluster available.
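
Something like this in the kubeconfig's exec args (a sketch; the path is an example, use the absolute path to your real doctl config, because kubectl passes the args literally and won't expand ~ or $USER):

      args:
      - kubernetes
      - cluster
      - kubeconfig
      - exec-credential
      - --version=v1beta1
      - --context=default
      - --config=/home/<your-user>/.config/doctl/config.yaml
      - [hidden]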

@jakolehm
Contributor

@nevalla could we override the XDG_CONFIG_HOME env var for kubectl when we detect a snap environment?

@nevalla
Contributor

nevalla commented Aug 25, 2020

Maybe. I think ~/.config would be the correct path.

@xavierfnk
Author

xavierfnk commented Aug 25, 2020

Update: I did not manage to make it work by adding a --config argument to my kubeconfig, but I can connect to my cluster by creating the doctl folder in Lens's snap sandbox and adding a symbolic link:

$ mkdir /home/$USER/snap/kontena-lens/110/.config/doctl
$ ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/110/.config/doctl/config.yaml

This solution is not ideal, but it works fine thanks to @nevalla's pointers.

@nevalla
Contributor

nevalla commented Aug 25, 2020

Right...I might have created that dir too during debugging.

@kveratis

@Ridzu95 that solution was a real lifesaver for me today, thank you.

@nikolaigut

Update: I did not manage to make it work by adding a --config argument to my kubeconfig, but I can connect to my cluster by creating the doctl folder in Lens's snap sandbox and adding a symbolic link:

$ mkdir /home/$USER/snap/kontena-lens/110/.config/doctl
$ ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/110/.config/doctl/config.yaml

This solution is not ideal, but it works fine thanks to @nevalla's pointers.

This solution works for me.

@nevalla is it possible to fix this XDG_CONFIG_HOME problem in a future release?

@antsankov

Just to add on, I was having similar issues, as well as http: proxy error: proxyconnect tcp: dial tcp [::1]:8001: connect: connection refused, and I solved it by opening Lens with sudo.

sudo DEBUG=true /Applications/Lens.app/Contents/MacOS/Lens

It turns out Lens, started by just clicking the Mac icon without sudo, didn't have the permissions to start the auth proxy it needs! It works perfectly now that I open it with sudo from the terminal.

@matzza

matzza commented Jan 31, 2021

@antsankov

@holms

holms commented Feb 2, 2021

Got the same problem :) Any chance someone could do a PR? :)

@dingdayu

Update: I did not manage to make it work by adding a --config argument to my kubeconfig, but I can connect to my cluster by creating the doctl folder in Lens's snap sandbox and adding a symbolic link:

$ mkdir /home/$USER/snap/kontena-lens/110/.config/doctl
$ ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/110/.config/doctl/config.yaml

This solution is not ideal, but it works fine thanks to @nevalla's pointers.

This solution works for me.

@nevalla is it possible to fix this XDG_CONFIG_HOME problem in a future release?

Yes, my problem is on AWS. I guess it is because aws-cli is needed to authenticate to the k8s cluster.

I wonder if there is a more elegant solution to this problem?
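
A similar symlink trick for the aws CLI config might be worth trying (an untested sketch; it assumes snap also remaps HOME for processes started by Lens, so aws would look for its credentials under the sandboxed home):

$ ln -s /home/$USER/.aws /home/$USER/snap/kontena-lens/current/.aws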

@rgutwein

Having issues with my team accessing the cluster connect feature in Lens. I am guessing this has to do with the kubeconfig file. I am currently using minikube on an M1, created a team space, and allowed access to specific users. They can see the team space but cannot access the cluster.

I think what happens (still doing some research) is that when you add a cluster, it pulls from your kubeconfig. When you create your space, I think you need a new config that is then shared throughout the space. When my team tries to access the clusters I shared, they get an SSL error which comes directly from the config.

When you go to your space, I think it pulls your config. What I think needs to happen is that you pull a new config in the space and promote it; then the users you invite to the space use the same config, and that resolves the proxy error.

Please advise

(screenshot attached)

@LinTechSo

Hi, any updates?

@hammaddaoud

@dingdayu commented on Oct 27, 2021

Did you find a solution for AWS? I'm getting the same thing: kubectl is able to connect to the server, while configuring Lens throws the error "Unable to locate credentials. You can configure credentials by running "aws configure"."

@marciojg

Same issue here.

kubectl is OK, but from Lens I get the same error message:

(screenshot of the error attached)

@william-bohannan

Just a note on the previous replies, as revision 110 is quite old now. If you use "current" instead of "110", the path will always point at the installed revision.

doctl auth init
doctl account get
mkdir -p /home/$USER/snap/kontena-lens/current/.config/doctl
ln -s /home/$USER/.config/doctl/config.yaml /home/$USER/snap/kontena-lens/current/.config/doctl/config.yaml
