Listener gets deleted on NLB #2844

Closed
DanielMcAssey opened this issue Oct 21, 2022 · 14 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@DanielMcAssey

Describe the bug
We deploy two services with different NodePorts, and after deploying the second service, the first service's listener is deleted from the NLB.
They are two different UDP ports targeting different nodes.

Steps to reproduce
  • Create a service with UDP port 5000 on Node 1
  • Create a service with UDP port 5001 on Node 2
  • Apply both
  • Only the 2nd service's listener is added

Expected outcome
I should have 2 listeners pointing to different target groups

Environment

  • AWS Load Balancer Controller version: 2.4.4
  • Kubernetes version: 1.23
  • Using EKS: yes, version 1.23

Additional context:
The target group is created for both services, but deploying the 2nd service removes the first service's listener, leaving that target group dangling without a listener.

@kishorj
Collaborator

kishorj commented Oct 24, 2022

@DanielMcAssey, what does your service spec look like?

@DanielMcAssey
Author

DanielMcAssey commented Oct 25, 2022

Sure, here are both services:

apiVersion: v1
kind: Service
metadata:
  namespace: scalable-cluster
  labels:
    service: service-0
  name: service-0
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: "primary-nlb"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-attributes: "deletion_protection.enabled=true,load_balancing.cross_zone.enabled=true"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXXXXXXXXXXXXX,eipalloc-XXXXXXXXXXXX"
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - protocol: UDP
      port: 50000
      targetPort: 50000
      nodePort: 50000
  selector:
    k8s-app: pod-0
---
apiVersion: v1
kind: Service
metadata:
  namespace: scalable-cluster
  labels:
    service: service-1
  name: service-1
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: "primary-nlb"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-attributes: "deletion_protection.enabled=true,load_balancing.cross_zone.enabled=true"
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "eipalloc-XXXXXXXXXXXXXX,eipalloc-XXXXXXXXXXXX"
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    - protocol: UDP
      port: 50001
      targetPort: 50001
      nodePort: 50001
  selector:
    k8s-app: pod-1

@M00nF1sh
Collaborator

M00nF1sh commented Oct 26, 2022

@DanielMcAssey
You used the same NLB name for both Services: 'primary-nlb'. Currently we don't support reusing the same NLB for two different Services.
You have to either use a single Service with two ports in its spec instead of two Services, or use two NLBs.

We can improve our code to error out if we detect that an NLB is used by multiple Services.
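
For reference, a minimal sketch (not from this thread) of the single-Service alternative described above, assuming both ports can be served by the same set of pods: a Service has only one selector, so the shared label app-group: scalable-pods below is hypothetical, and the annotations are abbreviated from the manifests posted earlier.

apiVersion: v1
kind: Service
metadata:
  namespace: scalable-cluster
  name: combined-service
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-name: "primary-nlb"
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
spec:
  type: NodePort
  externalTrafficPolicy: Local
  ports:
    # with more than one port, each entry needs a unique name
    - name: udp-50000
      protocol: UDP
      port: 50000
      targetPort: 50000
      nodePort: 50000
    - name: udp-50001
      protocol: UDP
      port: 50001
      targetPort: 50001
      nodePort: 50001
  selector:
    app-group: scalable-pods   # hypothetical shared label; one Service cannot route different ports to different pods

Note that this keeps both listeners on one NLB, but both target groups then point at the same pod set, so it does not reproduce the original setup where each port targets different pods.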

@DanielMcAssey
Author

Ah, any reason why? AWS NLB supports this use case.
Is there scope to change it? Would you accept a PR?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jan 24, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Feb 23, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned on Mar 26, 2023
@DanielMcAssey
Author

/reopen

Just want to address this again if possible, would a PR be accepted to change this behaviour? As AWS NLB supports this use case.

@k8s-ci-robot reopened this on Jul 16, 2023
@k8s-ci-robot
Contributor

@DanielMcAssey: Reopened this issue.

In response to this:

/reopen

Just want to address this again if possible, would a PR be accepted to change this behaviour? As AWS NLB supports this use case.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@DanielMcAssey
Author

Is this technically the same as #228 and #3247?

@shiyuhang0

shiyuhang0 commented Aug 1, 2023

I think it is the same as #3247.
I hit the same issue as yours: I create the target group myself and then add a listener to an existing NLB that was created by the controller. Then I find that the listener gets deleted.

Is anyone working on it?

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Jan 20, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
