
Named ports not supported #21

Open
cbeneke opened this issue May 19, 2019 · 2 comments
cbeneke commented May 19, 2019

Hi,

I tried to create a LoadBalancer Service with a named port in it:

apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  selector:
    app: nginx

But the DaemonSet then goes into a CrashLoop because the named port does not translate correctly into an iptables rule. Logs:

Setting up forwarding from port 80 to 10.106.7.233:http/TCP
iptables v1.6.2: Port `http' not valid

Try `iptables -h' or 'iptables --help' for more information.

As Kubernetes supports named targetPorts, this should IMHO also be supported by the LB provider :)

cbeneke commented May 19, 2019

This should be handled in https://github.com/kontena/akrobateo/blob/master/pkg/controller/service/service_controller.go#L232 ff., but it seems to me that it's not an easy fix. The current implementation takes one port and deploys it on all machines with the akrobateo-lb Docker image via iptables, but when using a named port the actual port number may differ between backends. Compare https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service

Perhaps more interesting is that targetPort can be a string, referring to the name of a port in the backend Pods. The actual port number assigned to that name can be different in each backend Pod.

@jnummelin (Contributor) commented

Yes, it's really not an easy fix at all.

To fully handle this, Akrobateo would also have to watch the owning resource(s) of the pods matched by the selector and map the name to a port number from there. This gets tricky fast, as in today's Kubernetes setups there can be lots of different operators and controllers creating pods. IMO it's not sufficient to check the name-to-port mappings from the pods alone, since at any given time there might be e.g. a Deployment rollout in progress and thus pods with different mappings.
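To illustrate the ambiguity, here is a minimal sketch of what a name-to-port resolver would have to deal with. The types and the `resolveNamedPort` helper are hypothetical stand-ins (the real pod types live in `k8s.io/api/core/v1`), not Akrobateo's actual code:

```go
package main

import "fmt"

// Minimal stand-ins for the Kubernetes container-port fields involved.
type ContainerPort struct {
	Name string
	Port int32
}

type Pod struct {
	Ports []ContainerPort
}

// resolveNamedPort looks up a named targetPort across the pods backing a
// Service. It returns ok=false when the name is missing or when pods
// disagree on the number (e.g. mid-rollout), in which case no single
// iptables forwarding rule can be correct.
func resolveNamedPort(pods []Pod, name string) (port int32, ok bool) {
	found := false
	for _, pod := range pods {
		for _, p := range pod.Ports {
			if p.Name != name {
				continue
			}
			if found && p.Port != port {
				return 0, false // pods disagree: ambiguous mapping
			}
			port, found = p.Port, true
		}
	}
	return port, found
}

func main() {
	steady := []Pod{{Ports: []ContainerPort{{Name: "http", Port: 8080}}}}
	rollout := append(steady, Pod{Ports: []ContainerPort{{Name: "http", Port: 9090}}})

	fmt.Println(resolveNamedPort(steady, "http"))  // 8080 true
	fmt.Println(resolveNamedPort(rollout, "http")) // 0 false
}
```

Even this simple version shows why watching pods alone is not enough: the "rollout" case above is transient and perfectly legal, yet leaves the LB with no single port to forward to.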

So as you probably found out, the workaround is to use direct port numbers in the service.
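For reference, the workaround applied to the manifest from the issue would look like this; the numeric `targetPort` value 8080 is an assumed containerPort, substitute whatever your pods actually expose:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-http
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 8080   # numeric, so it maps directly to an iptables rule
  selector:
    app: nginx
```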
