Feature Request: Create a node's automatic node labels on its pods #62078

Open
josdotso opened this Issue Apr 3, 2018 · 14 comments

josdotso commented Apr 3, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

Uncomment only one, leave it on its own line:

/kind bug

/kind feature

What happened:
There is no way to tell a Kafka broker pod which failure domain it is in, which it needs for rack awareness.

What you expected to happen:
Pods would inherit these labels from the node:
https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#interlude-built-in-node-labels
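For context, this is roughly what a workload has to do today to discover its failure domain on its own. A minimal sketch, assuming a recent client-go, a NODE_NAME env var injected via the downward API (fieldRef: spec.nodeName), and RBAC permission to read nodes:

```go
// Minimal sketch: a pod looking up its own node to read the zone label,
// which a Kafka broker could then use as broker.rack. Without node-label
// inheritance (or downward API support), every workload has to do this itself.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// NODE_NAME is assumed to be injected via the downward API (spec.nodeName).
	node, err := client.CoreV1().Nodes().Get(context.Background(), os.Getenv("NODE_NAME"), metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	// One of the built-in node labels from the linked docs page.
	fmt.Println(node.Labels["failure-domain.beta.kubernetes.io/zone"])
}
```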

How to reproduce it (as minimally and precisely as possible):
N/A

Anything else we need to know?:
This approach would be a kind of alternative to these:

Also relevant:

Environment:

  • Kubernetes version (use kubectl version): v1.10.0
  • Cloud provider or hardware configuration: AWS
  • OS (e.g. from /etc/os-release):
  • Kernel (e.g. uname -a):
  • Install tools: Kops
  • Others:

josdotso commented Apr 3, 2018

/kind feature

josdotso commented Apr 3, 2018

/sig scheduling

(?)

josdotso commented Apr 4, 2018

/sig node

natronq commented Apr 4, 2018

Related: #61906

discordianfish commented Apr 6, 2018

@josdotso Any reason you think this is better than providing this in the downward API as #40610 suggested? Feels like the better way to me. I'll try to get #40610 revived.
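For readers unfamiliar with the downward API: today it can expose pod-level fields such as spec.nodeName to a container, but not the labels of that node; #40610 is about extending that mechanism. A minimal sketch of what is possible today (the shape of any node-label extension is not decided in this thread):

```go
// Sketch of today's downward API: a container can learn the name of the
// node it is scheduled on, but not that node's labels. #40610 discusses
// exposing node information through the same mechanism.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	nodeNameEnv := corev1.EnvVar{
		Name: "NODE_NAME",
		ValueFrom: &corev1.EnvVarSource{
			// Supported today; there is no equivalent fieldRef for node labels.
			FieldRef: &corev1.ObjectFieldSelector{FieldPath: "spec.nodeName"},
		},
	}
	fmt.Printf("%+v\n", nodeNameEnv)
}
```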

josdotso commented Apr 6, 2018

@discordianfish whatever way works is best for me :)

Getting access to the node labels natively works for me!

Thanks!

solsson commented Apr 8, 2018

In the meantime, maybe we could collaborate on a Docker image that implements this feature as an init container? Preferably using a k8s lib rather than bash. Env vars and/or args would specify which node labels to transfer to which pod labels/attributes. Init containers lend themselves nicely to composition.
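A rough sketch of what such an init container could look like with client-go. Everything here is an assumption for illustration: the NODE_NAME/POD_NAME/POD_NAMESPACE env vars would come from the downward API, NODE_LABELS is a made-up config knob, a recent client-go is assumed, and the pod's service account would need RBAC to get nodes and patch pods:

```go
// Rough sketch of an init container that copies selected node labels onto
// its own pod. All env var names are assumptions for illustration.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	nodeName := os.Getenv("NODE_NAME")  // downward API: spec.nodeName
	podName := os.Getenv("POD_NAME")    // downward API: metadata.name
	podNS := os.Getenv("POD_NAMESPACE") // downward API: metadata.namespace

	// Comma-separated list of node labels to copy, e.g.
	// "failure-domain.beta.kubernetes.io/zone,failure-domain.beta.kubernetes.io/region".
	wanted := strings.Split(os.Getenv("NODE_LABELS"), ",")

	node, err := client.CoreV1().Nodes().Get(ctx, nodeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	labels := map[string]string{}
	for _, key := range wanted {
		if value, ok := node.Labels[key]; ok {
			labels[key] = value
		}
	}

	// Merge the collected labels into the pod's own labels.
	patch, err := json.Marshal(map[string]interface{}{
		"metadata": map[string]interface{}{"labels": labels},
	})
	if err != nil {
		panic(err)
	}
	if _, err := client.CoreV1().Pods(podNS).Patch(ctx, podName, types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}
	fmt.Printf("copied %d node label(s) to pod %s/%s\n", len(labels), podNS, podName)
}
```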

fejta-bot commented Jul 7, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

discordianfish commented Jul 9, 2018

/remove-lifecycle stale
/lifecycle freeze

frittentheke commented Sep 4, 2018

Even Kubernetes' own cluster-autoscaler would benefit from a simple way to get, e.g., the AWS_REGION - kubernetes/autoscaler#1208

discordianfish commented Sep 4, 2018

@frittentheke Yes, that's what brought me here. Though I feel like this should be part of the downward API, so I would personally close this issue and focus on #40610 instead.

gmaslowski commented Sep 21, 2018

@solsson Hi, regarding your comment: I've recently stumbled upon the same issue. Here's how I've dealt with it: https://gist.github.com/gmaslowski/117f3535173d733e007d0c6c83564888

fejta-bot commented Dec 20, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

frittentheke commented Dec 21, 2018

/remove-lifecycle stale
