
handle headless service of statefulset pods #52

Closed
rjanovski opened this issue Mar 31, 2019 · 7 comments · Fixed by #55
Assignees
Labels
bug Something isn't working enhancement New feature or request

Comments

@rjanovski

Hey guys, thanks for this cool tool!

I have an issue where a headless service exposing a StatefulSet is not mapped correctly.
It would be great if you could add special-case handling for this scenario, as it seems everyone is struggling with it.

➜  ~ sudo kubefwd svc -l app=kafka
2019/03/31 17:51:04  _          _           __             _
2019/03/31 17:51:04 | | ___   _| |__   ___ / _|_      ____| |
2019/03/31 17:51:04 | |/ / | | | '_ \ / _ \ |_\ \ /\ / / _  |
2019/03/31 17:51:04 |   <| |_| | |_) |  __/  _|\ V  V / (_| |
2019/03/31 17:51:04 |_|\_\\__,_|_.__/ \___|_|   \_/\_/ \__,_|
2019/03/31 17:51:04
2019/03/31 17:51:04 Version 1.8.0
2019/03/31 17:51:04 https://github.com/txn2/kubefwd
2019/03/31 17:51:04
2019/03/31 17:51:04 Press [Ctrl-C] to stop forwarding.
2019/03/31 17:51:04 'cat /etc/hosts' to see all host entries.
2019/03/31 17:51:04 Loaded hosts file /etc/hosts
2019/03/31 17:51:04 Hostfile management: Original hosts backup already exists at /Users/ram/hosts.original
2019/03/31 17:51:07 Forwarding: kafka-main-headless:9092 to pod kafka-main-0:9092

results in:

127.1.27.2  kafka-main-headless kafka-main-headless.default kafka-main-headless.default.svc.cluster.local

but since there are 3 pods, what's really needed is:

127.1.27.2  kafka-main-0.kafka-main-headless.default
127.1.27.3  kafka-main-1.kafka-main-headless.default
127.1.27.4  kafka-main-2.kafka-main-headless.default

pods:

kafka-main-0  2/2  Running   0  12d   100.123.155.228   ip-10-20-64-220.ec2.internal   <none>
kafka-main-1  2/2  Running   0  12d   100.113.247.159   ip-10-20-65-209.ec2.internal   <none>
kafka-main-2  2/2  Running   0  12d   100.124.146.8     ip-10-20-71-138.ec2.internal   <none>

The issue is that there are pod names under the headless service, and these don't get mapped.
It seems straightforward to follow the labels, get the right pods (name and IP), and map them correctly.

Even just having a kubefwd pod command would give us something to work with.
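The per-pod entries requested above follow the standard StatefulSet DNS pattern of pod-hostname, then headless service name, then namespace. A minimal sketch of generating such hosts lines (hypothetical helper, not kubefwd's actual code; the loopback base IP mirrors the example in this issue):

```python
# Sketch: build /etc/hosts-style lines for StatefulSet pods behind a
# headless service, one loopback alias per pod. Names and IPs are
# illustrative, taken from the example in this issue.
def hosts_lines(pods, service, namespace, base_ip="127.1.27"):
    lines = []
    for i, pod in enumerate(pods, start=2):
        # Each pod gets its own loopback IP and the
        # <pod>.<service>.<namespace> name StatefulSet DNS would resolve.
        ip = f"{base_ip}.{i}"
        lines.append(f"{ip}  {pod}.{service}.{namespace}")
    return lines

print("\n".join(hosts_lines(
    ["kafka-main-0", "kafka-main-1", "kafka-main-2"],
    "kafka-main-headless", "default")))
# → 127.1.27.2  kafka-main-0.kafka-main-headless.default
#   127.1.27.3  kafka-main-1.kafka-main-headless.default
#   127.1.27.4  kafka-main-2.kafka-main-headless.default
```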

@cjimti cjimti self-assigned this Apr 1, 2019
@cjimti
Member

cjimti commented Apr 1, 2019

@rjanovski I agree. I should be able to take a look at this in the next few days. It's bitten me a few times.

@cjimti cjimti added bug Something isn't working enhancement New feature or request labels Apr 1, 2019
@cjimti
Member

cjimti commented Apr 1, 2019

@rjanovski can you post your service object here so I can be sure I understand what you mean by following the labels (as opposed to selectors)? I want to account for your situation and not make assumptions. Thanks.

I would assume the best solution would be kubefwd pods, since you're technically not accessing the pods through a service.

Also, I will need to ensure I respect subdomain if that is specified.

@rjanovski
Author

@cjimti I meant the selector labels, yes, but just querying the endpoints of the service seems to be enough (and should work in any namespace/subdomain; I'm using default).

BTW, I'm just using the kafka helm chart: https://github.com/bitnami/charts/tree/master/bitnami/kafka so replicating my scenario is quite simple (helm install bitnami/kafka)

headless service:
➜ kubectl get svc kafka-main-headless -o yaml

apiVersion: v1
kind: Service
metadata:
  creationTimestamp: "2019-03-17T10:56:25Z"
  labels:
    app: kafka
    chart: kafka-1.5.0
    heritage: Tiller
    release: kafka-main
  name: kafka-main-headless
  namespace: default
  resourceVersion: "14923563"
  selfLink: /api/v1/namespaces/default/services/kafka-main-headless
  uid: 4f419614-48a3-11e9-afb6-0e197f96751c
spec:
  clusterIP: None
  ports:
  - name: kafka
    port: 9092
    protocol: TCP
    targetPort: kafka
  selector:
    app: kafka
    release: kafka-main
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}

and the endpoints (each address carries a hostname that prefixes the headless service name):
kubectl get endpoints kafka-main-headless -o yaml

apiVersion: v1
kind: Endpoints
metadata:
  creationTimestamp: "2019-03-17T10:56:25Z"
  labels:
    app: kafka
    chart: kafka-1.5.0
    heritage: Tiller
    release: kafka-main
  name: kafka-main-headless
  namespace: default
  resourceVersion: "15481206"
  selfLink: /api/v1/namespaces/default/endpoints/kafka-main-headless
  uid: 4f42370a-48a3-11e9-afb6-0e197f96751c
subsets:
- addresses:
  - hostname: kafka-main-1
    ip: 100.113.247.159
    nodeName: ip-10-20-65-209.ec2.internal
    targetRef:
      kind: Pod
      name: kafka-main-1
      namespace: default
      resourceVersion: "15481019"
      uid: 76eb7e2d-4a26-11e9-afb6-0e197f96751c
  - hostname: kafka-main-0
    ip: 100.123.155.228
    nodeName: ip-10-20-64-220.ec2.internal
    targetRef:
      kind: Pod
      name: kafka-main-0
      namespace: default
      resourceVersion: "15481205"
      uid: 903ea34c-4a26-11e9-afb6-0e197f96751c
  - hostname: kafka-main-2
    ip: 100.124.146.8
    nodeName: ip-10-20-71-138.ec2.internal
    targetRef:
      kind: Pod
      name: kafka-main-2
      namespace: default
      resourceVersion: "15480861"
      uid: 5dc45d0d-4a26-11e9-afb6-0e197f96751c
  ports:
  - name: kafka
    port: 9092
    protocol: TCP

@cjimti
Member

cjimti commented Apr 8, 2019

@rjanovski please try 1.8.2 when you get a chance. @alloran contributed an update to resolve this. Thanks

@rjanovski
Author

Works great now, thanks!

@bcouetil

bcouetil commented Aug 19, 2020

A regression was introduced somewhere between 1.8.2 and 1.14.0; this is not working anymore 😢

EDIT: 1.13.2 is the culprit, 1.13.1 is the last working version.

@bcouetil

See #133
