
Provide a base port option #24

Closed
mrumpf opened this issue Nov 22, 2018 · 7 comments
Labels: enhancement (New feature or request), invalid (This doesn't seem right)

Comments

mrumpf commented Nov 22, 2018

When I want to forward services in multiple clusters or namespaces that use the same ports, e.g. port 80, kubefwd reports the following error:

w.ForwardPorts Error: unable to listen on any of the requested ports: [{80 80}]
Skipping failure.

An option such as "--baseport=30000" that maps all forwarded ports to consecutive local port numbers would be great; it would allow working with different clusters at the same time (a sketch in Go follows this list):

  • podport1 = baseport
  • podport2 = baseport + 1
  • podport3 = baseport + 2
  • ...
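
A minimal sketch of that mapping in Go (kubefwd's implementation language); the --baseport flag and these variable names are assumptions from this proposal, not existing kubefwd code:

package main

import "fmt"

func main() {
    basePort := 30000                 // value of the proposed --baseport flag
    podPorts := []int{80, 8602, 8713} // example pod ports discovered from services

    // Each discovered port gets the next consecutive local port.
    for i, podPort := range podPorts {
        localPort := basePort + i
        fmt.Printf("local :%d -> pod port %d\n", localPort, podPort)
    }
}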
@cjimti cjimti added enhancement New feature or request invalid This doesn't seem right labels Nov 22, 2018
cjimti (Member) commented Nov 22, 2018

There should not be a need to use ports other than the ones specified in the service. The idea is that kubefwd lets you connect to services with the same URL you would use inside the cluster.

This sounds like a bug, but I need steps to reproduce. My only experience with the error "unable to listen on any of the requested ports" is when there is no pod attached to the service or the pod is not functioning properly (crashing, etc.). Are you sure your pod is accessible from the service?

mrumpf (Author) commented Nov 22, 2018

When I stop the first instance and run kubefwd again, it succeeds without those messages:

sudo kubefwd services -n connectivity -c /Users/myuser/.bluemix/plugins/container-service/clusters/MyCluster/kube-config-fra02-MyCluster.yml

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
'cat /etc/hosts' to see all host entries.
Loading hosts file /etc/hosts
Original hosts backup already exists at /etc/hosts.original
No pods returned for service.
Fwd 127.1.27.1:80 as admin-server-cutover:80 to pod admin-server-cutover-deployment-594456cf94-z5hv5:8602
Fwd 127.1.27.2:80 as bar-foo-data-cutover:80 to pod bar-foo-data-cutover-deployment-85cbf68494-vqxxj:8713
Fwd 127.1.27.3:80 as foor-bar-data-cutover:80 to pod foo-bar-data-cutover-deployment-54c5bfb9f4-p4p9v:8705
Fwd 127.1.27.4:80 as registry-server-cutover:80 to pod registry-server-cutover-deployment-6cf66d4ccc-k7tvh:8600
Fwd 127.1.27.5:80 as simple-rest-domain-service-cutover:80 to pod simple-rest-domain-service-cutover-deployment-757b66b9c6-d8rf5:80
No pods returned for service.

When the other instance is running, I see the following output:

sudo kubefwd services -n connectivity -c /Users/myuser/.bluemix/plugins/container-service/clusters/MyCluster/kube-config-fra02-MyCluster.yml

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
'cat /etc/hosts' to see all host entries.
Loading hosts file /etc/hosts
Original hosts backup already exists at /etc/hosts.original
No pods returned for service.
Fwd 127.1.27.1:80 as admin-server-cutover:80 to pod admin-server-cutover-deployment-594456cf94-z5hv5:8602
Fwd 127.1.27.2:80 as bar-foo-data-cutover:80 to pod bar-foo-data-cutover-deployment-85cbf68494-vqxxj:8713
Fwd 127.1.27.3:80 as foor-bar-data-cutover:80 to pod foo-bar-data-cutover-deployment-54c5bfb9f4-p4p9v:8705
Fwd 127.1.27.4:80 as registry-server-cutover:80 to pod registry-server-cutover-deployment-6cf66d4ccc-k7tvh:8600
Fwd 127.1.27.5:80 as simple-rest-domain-service-cutover:80 to pod simple-rest-domain-service-cutover-deployment-757b66b9c6-d8rf5:80
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 8602}]
Skipping failure.
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 8713}]
Skipping failure.
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 8705}]
Skipping failure.
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 8600}]
Skipping failure.
No pods returned for service.

cjimti (Member) commented Nov 23, 2018

@mrumpf from your examples it looks like you are trying to run two instances of kubefwd at the same time, which is not really a use case I intended. Also, it seems like you are trying to forward the same services in the same namespace and cluster.

I think I need to better understand what you are trying to accomplish.

mrumpf (Author) commented Nov 23, 2018

The use case is simple: I'm working on a project where microservices are distributed over 4 different Kubernetes clusters, and I need to connect to multiple clusters at the same time.
In the example below, the namespace name is the same in both clusters.

I start kubefwd on the first cluster:

$ sudo kubefwd services -n connectivity -c /Users/user/.bluemix/plugins/container-service/clusters/Cluster1/kube-config-fra02-Cluster1.yml

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
'cat /etc/hosts' to see all host entries.
Loading hosts file /etc/hosts
Original hosts backup already exists at /etc/hosts.original
No pods returned for service.
Fwd 127.1.27.1:80 as admin-server-cutover:80 to pod admin-server-cutover-deployment-594456cf94-z5hv5:8602
Fwd 127.1.27.2:80 as ms1-cutover:80 to pod ms1-cutover-deployment-85cbf68494-vqxxj:8713
Fwd 127.1.27.3:9090 as gitblit-k8s:9090 to pod gitblit-k8s-test-5867856b7d-ptltx:9090
Fwd 127.1.27.4:80 as ms2-cutover:80 to pod ms2-cutover-deployment-54c5bfb9f4-p4p9v:8705
Fwd 127.1.27.5:80 as registry-server-cutover:80 to pod registry-server-cutover-deployment-6cf66d4ccc-cdgnt:8600
Fwd 127.1.27.6:80 as simple-rest-domain-service-cutover:80 to pod simple-rest-domain-service-cutover-deployment-757b66b9c6-d8rf5:80

Then I want to start the second kubefwd instance against another cluster:

$ sudo kubefwd services -n default -c /Users/user/.bluemix/plugins/container-service/clusters/Cluster2/kube-config-fra02-Cluster2.yml
Password:

 _          _           __             _
| | ___   _| |__   ___ / _|_      ____| |
| |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
|   <| |_| | |_) |  __/  _|\ V  V / (_| |
|_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|

Press [Ctrl-C] to stop forwarding.
'cat /etc/hosts' to see all host entries.
Loading hosts file /etc/hosts
Original hosts backup already exists at /etc/hosts.original
Fwd 127.1.27.1:80 as activemq-rabbitmq-bridge-service:80 to pod activemq-rabbitmq-bridge-54766b5fc4-trcxd:80
Fwd 127.1.27.2:80 as alb-target-cluster:80 to pod activemq-rabbitmq-bridge-54766b5fc4-trcxd:80
Fwd 127.1.27.3:80 as brawny-sheep-prometheus-alertmanager:80 to pod brawny-sheep-prometheus-alertmanager-7cf547cb49-ldjtf:9093
Fwd 127.1.27.4:80 as brawny-sheep-prometheus-kube-state-metrics:80 to pod brawny-sheep-prometheus-kube-state-metrics-796df48cc4-tb5cm:8080
Fwd 127.1.27.5:9100 as brawny-sheep-prometheus-node-exporter:9100 to pod brawny-sheep-prometheus-node-exporter-2h4wb:9100
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 80}]
Skipping failure.
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 80}]
Skipping failure.
fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 8080}]
Skipping failure.
Fwd 127.1.27.6:9091 as brawny-sheep-prometheus-pushgateway:9091 to pod brawny-sheep-prometheus-pushgateway-7ccddd64c9-f6qb9:9091
Fwd 127.1.27.7:80 as brawny-sheep-prometheus-server:80 to pod brawny-sheep-prometheus-server-7b94b4755c-vpk8g:9090
Fwd 127.1.27.8:80 as exciting-serval-grafana:80 to pod exciting-serval-grafana-6b57cffc4d-h2v8s:3000
Fwd 127.1.27.9:80 as forwarding-proxy-gen4-service-internal:80 to pod forwarding-proxy-gen4-haproxy-cfbc6ff9f-grpzb:80
Fwd 127.1.27.9:81 as forwarding-proxy-gen4-service-internal:81 to pod forwarding-proxy-gen4-haproxy-cfbc6ff9f-grpzb:81
Fwd 127.1.27.9:9117 as forwarding-proxy-gen4-service-internal:9117 to pod forwarding-proxy-gen4-haproxy-cfbc6ff9f-grpzb:9117
Fwd 127.1.27.10:443 as gen3-entry-point:443 to pod activemq-rabbitmq-bridge-54766b5fc4-trcxd:443
Fwd 127.1.27.11:443 as kubernetes:443 to pod activemq-rabbitmq-bridge-54766b5fc4-trcxd:32762
Fwd 127.1.27.12:443 as reverse-proxy-gen4-service:443 to pod reverse-proxy-gen4-haproxy-5d6c85c668-qkzg5:443
Fwd 127.1.27.13:81 as reverse-proxy-gen4-service-internal:81 to pod reverse-proxy-gen4-haproxy-5d6c85c668-qkzg5:81
Fwd 127.1.27.13:9117 as reverse-proxy-gen4-service-internal:9117 to pod reverse-proxy-gen4-haproxy-5d6c85c668-qkzg5:9117

This one fails because it tries to open port 80 a second time in the 127.1.27.x IP range. (By the way, it would be great if the message "fw.ForwardPorts Error: unable to listen on any of the requested ports: [{80 80}]" also showed the IP on which the port could not be opened.)
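
For illustration only (this is not kubefwd's actual code), the message could carry the bind address, for example:

package main

import "fmt"

func main() {
    // Hypothetical: report the exact IP:port pair that could not be bound,
    // instead of only the {localPort podPort} pair.
    ip, localPort, podPort := "127.1.27.1", 80, 80
    fmt.Printf("fw.ForwardPorts Error: unable to listen on %s:%d (pod port %d)\n",
        ip, localPort, podPort)
}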

One idea would be to use a different localhost network IP range for each instance. Instead of trying to determine whether another instance is running, an option such as

--localhostNet=127.1.27.0/24

would be great, so the second instance could simply be told to use other IPs:

--localhostNet=127.2.27.0/24

The modification of the /etc/hosts file should not be affected by multiple instances (see the sketch below).
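
A rough Go sketch of how such a flag might hand out loopback IPs from a user-supplied CIDR; --localhostNet and nextIP are assumptions from this proposal, not part of kubefwd:

package main

import (
    "fmt"
    "net"
)

// nextIP returns a copy of ip incremented by one (with carry).
func nextIP(ip net.IP) net.IP {
    out := make(net.IP, len(ip))
    copy(out, ip)
    for i := len(out) - 1; i >= 0; i-- {
        out[i]++
        if out[i] != 0 {
            break
        }
    }
    return out
}

func main() {
    // Parse the range given via the proposed --localhostNet flag.
    _, ipNet, err := net.ParseCIDR("127.2.27.0/24")
    if err != nil {
        panic(err)
    }
    ip := nextIP(ipNet.IP) // skip the network address, start at .1
    for i := 0; i < 3 && ipNet.Contains(ip); i++ {
        fmt.Println("next forwarding IP:", ip) // 127.2.27.1, .2, .3
        ip = nextIP(ip)
    }
}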

cjimti (Member) commented Nov 24, 2018

It would be easier for me to allow kubefwd to use multiple contexts; it would not work well to have multiple instances running, since there could easily be race conditions when managing IPs and hostnames. kubefwd does increment the IP addresses now, and it works well with multiple namespaces. I see too many problems in trying to support multiple instances running without conflicting with each other.

If multiple clusters are needed, then it would probably need to work like the multiple namespaces: the first cluster would work as it does now, and additional clusters could only be accessed through SERVICE.NAMESPACE.svc.CLUSTER.external or something like that, because I can't assume the service names would be unique to the cluster or namespace. I am new to federation, so I need to read up.

mrumpf (Author) commented Nov 24, 2018

You are right, passing multiple contexts would allow you to avoid any naming or port conflicts. And yes, something like <service>.<namespace>.<cluster> would be the pattern to avoid the naming conflicts. The referenced context files contain the cluster name and the selected namespace.
A short name (without dots) could be added to /etc/hosts when there is no conflict.

So the /etc/hosts could look like this (in my case from above):

127.1.27.1   ms1 ms1.nsA ms1.nsA.cluster1
127.1.27.2   ms2.nsB ms2.nsB.cluster1   # no short name because of a naming conflict between microservices in 2 different namespaces
127.1.27.3   ms3.nsC.cluster1           # no short name because of a conflict between the same namespace in different clusters
...

That means connecting to a single cluster would allow the same naming pattern as today, so there would be no change for existing users. Only when you specify multiple contexts would the naming rules differ, to avoid conflicts.
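
A small Go sketch of those naming rules (assumed behavior, not implemented in kubefwd): always register the fully qualified name, and add the bare short name only when it is unique across all forwarded services:

package main

import "fmt"

type svc struct{ name, ns, cluster string }

func main() {
    services := []svc{
        {"ms1", "nsA", "cluster1"},
        {"ms2", "nsA", "cluster1"},
        {"ms2", "nsB", "cluster1"}, // short name "ms2" conflicts
    }

    // Count how often each short name occurs.
    seen := map[string]int{}
    for _, s := range services {
        seen[s.name]++
    }

    // Register name.ns.cluster always; add the short name only if unique.
    for _, s := range services {
        hosts := []string{fmt.Sprintf("%s.%s.%s", s.name, s.ns, s.cluster)}
        if seen[s.name] == 1 {
            hosts = append(hosts, s.name)
        }
        fmt.Println(hosts)
    }
}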

cjimti (Member) commented Nov 28, 2018

I am closing this issue and referring to it in a new enhancement issue #28 for supporting multiple clusters.

cjimti closed this as completed Nov 28, 2018