Ability to get EC2 instance ID differently #3487

Closed
aceeric opened this issue Nov 16, 2023 · 6 comments


aceeric commented Nov 16, 2023

Is your feature request related to a problem?
This is best described in #3485 (which I already closed), but in summary:

The EC2 instances hosting the Kubernetes cluster might not have Node.Spec.ProviderID populated with the EC2 instance ID that the AWS LB Controller expects. As a result, the LB Controller fails to populate the Target Group.

Describe the solution you'd like
If the instance ID is not present in Node.Spec.ProviderID, it may be possible to determine it in a number of different ways. One way would be:

for each node
  get the internal IP address of the node
  query all EC2 instances with some tag
  for each instance
    if the instance internal IP matches the internal IP of the node
      add the instance ID to the target group
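
A minimal Go sketch of that matching step, assuming the AWS SDK for Go v2 and the cluster tag proposed in the Summary below; the helper name lookupInstanceIDByNodeIP and the tag key are illustrative, not existing controller code:

package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/ec2"
    "github.com/aws/aws-sdk-go-v2/service/ec2/types"
)

// lookupInstanceIDByNodeIP lists the instances carrying the (hypothetical)
// cluster tag and returns the ID of the one whose primary private IP matches
// the node's internal IP, mirroring the pseudocode above.
func lookupInstanceIDByNodeIP(ctx context.Context, client *ec2.Client, cluster, nodeIP string) (string, error) {
    input := &ec2.DescribeInstancesInput{
        Filters: []types.Filter{
            {Name: aws.String("tag:aws-load-balancer-cluster-name"), Values: []string{cluster}},
        },
    }
    p := ec2.NewDescribeInstancesPaginator(client, input)
    for p.HasMorePages() {
        page, err := p.NextPage(ctx)
        if err != nil {
            return "", err
        }
        for _, res := range page.Reservations {
            for _, inst := range res.Instances {
                if aws.ToString(inst.PrivateIpAddress) == nodeIP {
                    return aws.ToString(inst.InstanceId), nil
                }
            }
        }
    }
    return "", fmt.Errorf("no tagged instance with internal IP %s", nodeIP)
}

func main() {
    ctx := context.Background()
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        panic(err)
    }
    id, err := lookupInstanceIDByNodeIP(ctx, ec2.NewFromConfig(cfg), "my-cluster", "10.0.1.23")
    if err != nil {
        panic(err)
    }
    fmt.Println(id) // e.g. i-0123456789abcdef0
}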

Another way would be to use a defined node label, e.g.:

service.beta.kubernetes.io/aws-load-balancer-internal-ip: n.n.n.n

Then perform the same logic described above, except select the EC2 instance whose internal IP address matches the value of the node label.
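
If the label route were taken, the controller-side read is a one-liner; a sketch against a corev1.Node object (the label key is the one proposed above, not an existing convention, and the helper name is illustrative):

package sketch

import corev1 "k8s.io/api/core/v1"

// internalIPFromLabel returns the internal IP recorded on the node via the
// proposed label, or "" if the label is absent.
func internalIPFromLabel(node *corev1.Node) string {
    return node.Labels["service.beta.kubernetes.io/aws-load-balancer-internal-ip"]
}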

Describe alternatives you've considered

I could add a step to our RKE2 provisioner that patches Node.Spec.ProviderID with the EC2 instance ID.
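
As a sketch of that alternative, a one-time patch per node with client-go, assuming the standard aws:///<availability-zone>/<instance-id> providerID format; the node name and instance ID below are placeholders:

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/types"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // The apiserver only accepts this patch while spec.providerID is still empty.
    patch := []byte(`{"spec":{"providerID":"aws:///us-east-1a/i-0123456789abcdef0"}}`)
    node, err := clientset.CoreV1().Nodes().Patch(context.Background(), "my-node",
        types.StrategicMergePatchType, patch, metav1.PatchOptions{})
    if err != nil {
        panic(err)
    }
    fmt.Println(node.Spec.ProviderID)
}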

Summary

I would be happy to submit a PR, but before beginning work on this I would like to reach alignment on the exact approach. At this time I think the following might be simplest and clearest for the person configuring the controller:

  1. Require the EC2 instances to be tagged with a specific, hard-coded tag key. This enables the controller to efficiently filter the instances. E.g. --filters "Name=tag:aws-load-balancer-cluster-name,Values=my-cluster"
  2. Annotate the Service to match. E.g.:
    apiVersion: v1
    kind: Service
    metadata:
      name: foo
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-cluster-name: my-cluster
    spec:
    ...
    
  3. When the controller event fires in response to Service create/update, it gets the internal IP address from each Node
  4. For each Node internal IP address, find the matching instance based on the instance's internal IP address to obtain the instance ID
  5. Use those instance IDs to populate the Target Group attached to the NLB Listener
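
Step 3 is essentially a walk of Node.Status.Addresses; a sketch of how the controller could collect the internal IPs (the helper name is illustrative, not existing controller code):

package sketch

import corev1 "k8s.io/api/core/v1"

// nodeInternalIPs collects the InternalIP address of every node; these IPs
// would then be matched to EC2 instances as in steps 4 and 5.
func nodeInternalIPs(nodes []corev1.Node) []string {
    ips := make([]string, 0, len(nodes))
    for _, node := range nodes {
        for _, addr := range node.Status.Addresses {
            if addr.Type == corev1.NodeInternalIP {
                ips = append(ips, addr.Address)
            }
        }
    }
    return ips
}
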
@aceeric changed the title from "Ability to get EC@ instance name differently" to "Ability to get EC2 instance ID differently" on Nov 16, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Feb 14, 2024
@shraddhabang added the kind/feature and triage/needs-investigation labels on Feb 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Mar 29, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot closed this as not planned on Apr 29, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot closed this as not planned on Jun 21, 2024