[Discussion] Exposing the forwarded ports to a different pod in K8s #245

isurulucky opened this issue Oct 25, 2022 · 0 comments
First of all, kubefwd is a really great tool!

The use case I'm trying out with kubefwd is similar to what is discussed in #214. I would like to run kubefwd in a pod and consume the exposed services from a different pod. The internal services are in a different cluster.

(attached diagram: tunneling-k8s-kubefwd-discuss)

The reasons for not running kubefwd directly in the same pod as the client are as follows:

  • There are scenarios where terminal access will be provided to users running the client. Since kubefwd needs to run as root, running it in a separate pod is better security-wise, IMO.
  • The client and kubefwd have different resource requirements (CPU, memory).
  • It allows using Kubernetes health probes to detect when a pod restarts (kubefwd does not pick up the new endpoints straight away; it takes around 5 minutes, as discussed in "Kubefwd is not properly reconnecting when new pod is created" #243).

As kubefwd does not support binding to IP addresses other than loopback addresses, I used an iptables rule to forward traffic arriving on the eth0 interface to the relevant loopback IP:

iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 9090 -j DNAT --to-destination 127.1.27.1:9090

Generically, this iptables rule needs to know the port exposed to the k8s endpoint, the relevant loopback IP, and the forwarded port. In the example above, both ports are the same (9090).
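To make that concrete, a parameterized form of the same rule might look like the sketch below; the three shell variables are hypothetical placeholders, not values kubefwd provides:

```sh
# INCOMING_PORT: port clients hit on eth0
# LOOPBACK_IP / FORWARD_PORT: loopback address and port kubefwd bound for the service
# All three are placeholders for illustration only.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport "$INCOMING_PORT" \
  -j DNAT --to-destination "${LOOPBACK_IP}:${FORWARD_PORT}"
```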
I am also thinking of using a K8s liveness probe with a simple telnet check to detect when a port-forwarded pod has restarted; the kubefwd pod would then be restarted by K8s, which refreshes the endpoint configuration.
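As a minimal sketch of such a probe, an exec liveness command could simply test that the forwarded port still accepts TCP connections (using bash's built-in /dev/tcp rather than telnet, and reusing the example address 127.1.27.1:9090 from above):

```sh
# Exit 0 only if a TCP connection to the forwarded port succeeds within 2 seconds;
# a non-zero exit makes the liveness probe fail, so K8s restarts the kubefwd pod.
timeout 2 bash -c 'exec 3<>/dev/tcp/127.1.27.1/9090'
```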
There could be multiple services port-forwarded and exposed on different loopback IP addresses. Therefore, for this approach to work, I would need to dynamically discover the correct IP address for each service. At the moment I do not have a good way of doing this, but a potentially hacky way would be to grep the modified /etc/hosts file for the relevant IP. I would, however, have to wait until the kubefwd process has modified /etc/hosts before doing the IP extraction and the iptables changes (maybe kubefwd could run in an init container that shares /etc/hosts with the main container, so the changes are already in place when the main container starts?).
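As a rough sketch of that idea (assuming a hypothetical service name my-service, and assuming kubefwd writes the service name as the first hostname on its /etc/hosts line), the discovery could look like:

```sh
#!/usr/bin/env sh
# Wait until kubefwd has written the service's entry into /etc/hosts,
# then extract the loopback IP it assigned. SERVICE is a hypothetical example.
SERVICE="my-service"

until LOOPBACK_IP=$(awk -v h="$SERVICE" '$2 == h {print $1; exit}' /etc/hosts) \
      && [ -n "$LOOPBACK_IP" ]; do
  sleep 1   # kubefwd has not modified /etc/hosts yet
done

echo "kubefwd bound $SERVICE to $LOOPBACK_IP"
# The discovered IP can then be plugged into the DNAT rule shown earlier.
```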

An alternative approach is to use a k8s ingress controller here to expose the internal services privately.

What do you all think about this workflow for kubefwd? I understand this is not exactly kubefwd's core use case, but I would greatly appreciate suggestions and ideas about drawbacks and possible pitfalls.
