
Reverse proxy with random ports #11

Closed
micheldlebeau opened this issue Mar 3, 2022 · 8 comments
@micheldlebeau

Hi @joyrex2001, really enjoying kubedock so far!

We are trying to move away from using --port-forward and replace it with --reverse-proxy. Unfortunately, we have a bunch of TestContainers tests which need to communicate with the container via random ports.
We seem to be hitting a wall with --reverse-proxy: the TestContainers tests end up failing with timeouts, whereas everything works out of the box with --port-forward.

Do you have any suggestions for this use case? It might simply be that I don't fully understand how --reverse-proxy is supposed to work, as there isn't a lot of documentation on this flag, so feel free to correct me if it isn't designed for this.
Alternatively, what makes --port-forward unreliable, and is it addressable?

We would also like to host kubedock on our cluster while running the tests remotely on our CI platform. However, that requires an extra layer of proxying between kubedock and our CI with something like kubectl port-forward, which makes this problem even worse. Have you thought about this scenario as well?

@joyrex2001
Owner

The port-forward tunnels all traffic directed to the pod over HTTP via the Kubernetes API server. This overhead makes it slower, and sometimes the connection breaks (and isn't automatically restored).

If you're outside the cluster, this is (currently) the only way to connect to these created pods (because they are not exposed via e.g. an ingress).

When kubedock creates pods, it also creates a service resource. This service is accessible inside the cluster, which means that if kubedock runs in that same cluster/namespace, it can connect to the pods directly via the service. That's basically what the --reverse-proxy argument does: it proxies all requests it receives to these services inside the cluster.
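In essence it acts like a small TCP relay in front of the in-cluster services. A minimal sketch of the idea in Go (not kubedock's actual code; the listen port and service name are made-up examples):

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Listen on the random port allocated for the container (hypothetical).
	ln, err := net.Listen("tcp", ":49153")
	if err != nil {
		log.Fatal(err)
	}
	for {
		conn, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(client net.Conn) {
			defer client.Close()
			// Dial the in-cluster service created for the pod (hypothetical name).
			backend, err := net.Dial("tcp", "kubedock-abc123.kubedock.svc.cluster.local:8080")
			if err != nil {
				log.Println("dial backend:", err)
				return
			}
			defer backend.Close()
			// Pipe bytes in both directions until either side closes.
			go io.Copy(backend, client)
			io.Copy(client, backend)
		}(conn)
	}
}
```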

Does this make a bit more sense?

@micheldlebeau
Author

Yes that makes sense, thanks for the explanation.

As we are outside the cluster, that means we're stuck with port-forwarding for now. The lower speed isn't a dealbreaker, but the connection sometimes breaking is a problem.
Do you think it would be worth the effort to automatically re-establish the connection when it breaks during port forwarding?

@joyrex2001
Owner

I did some experimentation in the past, but wasn't very successful myself :-/ Most of it is handled in the kubernetes library, see here. I am open to suggestions for a solution, though.
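For example, something along these lines might work: wrapping the library's forwarder in a retry loop, so a dropped tunnel is re-dialled instead of staying broken. A rough, untested sketch (assuming the httpstream.Dialer is built via spdy.NewDialer against the pod's portforward subresource, as in client-go's own examples):

```go
import (
	"log"
	"os"
	"time"

	"k8s.io/apimachinery/pkg/util/httpstream"
	"k8s.io/client-go/tools/portforward"
)

// forwardWithRetry keeps a port-forward alive: whenever ForwardPorts
// returns an error (broken connection), wait briefly and dial again.
func forwardWithRetry(dialer httpstream.Dialer, ports []string) {
	for {
		stopCh := make(chan struct{})
		readyCh := make(chan struct{})
		fw, err := portforward.New(dialer, ports, stopCh, readyCh, os.Stdout, os.Stderr)
		if err != nil {
			log.Fatal(err) // could not even construct the forwarder
		}
		if err := fw.ForwardPorts(); err != nil {
			log.Printf("port-forward dropped: %v; reconnecting in 2s", err)
			time.Sleep(2 * time.Second)
			continue
		}
		return // ForwardPorts returned nil: clean shutdown via stopCh
	}
}
```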

@micheldlebeau
Author

I'd need to experiment, and I likely won't be able to in the near future, so it might be better to close this for now. I'll keep it in the back of my mind, though.

@joyrex2001
Owner

Ok, I will close the issue. Thanks so far! :-)

@renjfk

renjfk commented Nov 11, 2023

Sorry for raising the zombie thread, but I'm wondering if this could be resolved with a LoadBalancer service. 🤔

@joyrex2001
Owner

A LoadBalancer will indeed expose the port publicly; however, depending on your k8s deployment, this can take a while to become ready and can cost money. Alternatively, ingress solutions can offer similar functionality (or e.g. routes in OpenShift). That might be a better option when going in that direction (but it is hard to generalise).
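For completeness, creating such a LoadBalancer service with client-go would look roughly like this (a sketch only; kubedock doesn't do this itself, and the name, label and ports are hypothetical):

```go
package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}
	// A Service of type LoadBalancer targeting a (hypothetical) pod label.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "kubedock-lb-example"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"kubedock": "example"},
			Ports: []corev1.ServicePort{{
				Port:       8080,
				TargetPort: intstr.FromInt(8080),
			}},
		},
	}
	if _, err := client.CoreV1().Services("default").Create(
		context.TODO(), svc, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("waiting for the cloud provider to assign an external IP...")
}
```

Whether the resulting external IP is reachable (and what it costs) then depends entirely on the cluster's provider.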

@renjfk

renjfk commented Nov 13, 2023

In a truly public cloud, yes, but that wouldn't be the case in a private cloud environment, where the external IPs are not public but rather come from a VNET external IP pool.
