
tunnel-client "Failed to connect to proxy" errors #7

Closed
juan-lee opened this issue Oct 6, 2019 · 6 comments

@juan-lee
Contributor

juan-lee commented Oct 6, 2019

While going through the blog post, I encountered errors where the tunnel client was unable to connect to the droplet (or any other tunnel server). The problem seems to be that the data-plane, control, and service ports are all the same.

Expected Behaviour

Tunnel client should be able to connect to the tunnel server to expose a private service via an inlet.

Current Behaviour

The tunnel client pod logs the following errors.

time="2019-10-06T16:12:26Z" level=info msg="Connecting to proxy" url="ws://<redacted>:80/tunnel"
time="2019-10-06T16:12:26Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"
time="2019-10-06T16:12:26Z" level=error msg="Failed to connect to proxy" error="websocket: bad handshake"

Possible Solution

Allow specifying a control-port other than 80.
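A sketch of how that could look (hedged: --control-port is the flag being proposed here, not one that exists in 2.4.1; --port, --token, --remote, and --upstream are existing inlets 2.x flags):

```sh
# Server: keep the data-plane on 80, move the control-plane websocket to 8080.
inlets server --port=80 --control-port=8080 --token="$AUTHTOKEN"

# Client: dial the control port for the websocket, not the data-plane port.
inlets client --remote="ws://<redacted>:8080" \
  --upstream="http://127.0.0.1:8000" \
  --token="$AUTHTOKEN"
```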

Steps to Reproduce (for bugs)

Follow the steps from here using the DigitalOcean provisioner.

Context

I was experimenting with implementing an Azure provisioner and encountered these problems. At first I thought it was a problem with my Azure setup, but then I tried the DigitalOcean provisioner and hit the same errors.

Your Environment

Reproduced using the same steps from the blog post, with KinD on Ubuntu 18.04.

  • inlets version (`inlets --version`): 2.4.1

  • Docker/Kubernetes version (`docker version` / `kubectl version`): v1.15.3

  • Operating System and version (e.g. Linux, Windows, MacOS): Ubuntu 18.04

  • Link to your project or a code example to reproduce the issue: reproduced on master

@alexellis
Member

alexellis commented Oct 6, 2019

Hi @juan-lee, thank you for getting in touch.

What Kubernetes distribution are you using?

I've used KinD, k3s, kubeadm, and k3d without issue.

If you suspect issues with the control-port, you can edit the server's systemd unit file to use --control-port 8080 and change the client's connection URL to point to that port.
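On the exit-node that edit would look roughly like this (a sketch: the unit name, file path, and existing ExecStart flags are assumptions about what the provisioning script writes, and --control-port is the proposed flag from above):

```sh
# Append the control-port flag to the inlets server's ExecStart line,
# then reload and restart the unit.
sudo sed -i 's|--port=80|--port=80 --control-port=8080|' \
  /etc/systemd/system/inlets.service
sudo systemctl daemon-reload
sudo systemctl restart inlets
```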

Want to give that a try?

Alex

@alexellis
Member

> Steps to Reproduce (for bugs)

Quoting the blog post for this isn't helpful because it clearly works for around half a dozen people already, and there's a video showing it too.

Could you give more info? i.e.


## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->

* inlets version, find via `kubectl get deploy inlets-operator -o wide`

* Kubernetes distribution i.e. minikube v0.29.0, KinD v0.5.1, Docker Desktop:

* Kubernetes version `kubectl version`:

* Operating System and version (e.g. Linux, Windows, MacOS):

* Cloud provisioner: (DigitalOcean.com / Packet.com)
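For reference, the version details the template asks for can be pulled straight from the cluster with the commands named in the fields above:

```sh
# Operator image/tag and cluster version, as requested in the template.
kubectl get deploy inlets-operator -o wide
kubectl version
```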

alexellis added a commit that referenced this issue Oct 6, 2019
Although this only occurs with KinD, I've changed the control port
from being the same as the data-plane port to an alternative port.
It appears to unblock KinD and fix an issue reported by a user.

Fixes: #7

Signed-off-by: Alex Ellis (OpenFaaS Ltd) <alexellis2@gmail.com>
@alexellis
Member

Can you try the add-gc branch and the custom build I made? I'm assuming you're using KinD.
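For anyone following along, trying that branch would look roughly like this (a sketch; the repository path is an assumption, only the add-gc branch name comes from this comment):

```sh
# Fetch the operator repo and switch to the branch with the fix.
git clone https://github.com/inlets/inlets-operator.git
cd inlets-operator
git checkout add-gc
```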

@juan-lee
Contributor Author

juan-lee commented Oct 7, 2019

@alexellis I confirmed the code in master now works as advertised. I actually got it working for myself while building a proof-of-concept Azure Container Instances provisioner. I just discovered inlets yesterday on @lachie83's recommendation. Thanks for sharing, and great work! I'd love to help out if you're open to it.

@alexellis
Member

It's merged into master now and ready to go, if you want to try again.

Happy to review your PR this week 👍

Delete is now implemented too.

Alex
