
Is Kubernetes cluster name available to chart templates? #2055

Closed
mcandre opened this issue Mar 2, 2017 · 37 comments

@mcandre

mcandre commented Mar 2, 2017

Is there a way for a chart to become aware of the Kubernetes cluster name, for tagging purposes?

@technosophos
Member

Good idea. I'm trying to find where we can get this from the Kubernetes API.

technosophos added this to the 2.4.0-Triage milestone Mar 3, 2017
thomastaylor312 added the feature and help wanted labels Apr 15, 2017
@tback
Contributor

tback commented Apr 26, 2017

Opened a feature request in kubernetes: kubernetes/kubernetes#44954

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@naphthalene

/remove-lifecycle rotten

This is something that's really nice to have when doing hostname-based ingress + external-dns with a single Route53 zone and <svc>.<namespace>.<cluster>.domain.com.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@ivandov

ivandov commented May 24, 2018

Any updates or workarounds here?

@bacongobbler
Member

bacongobbler commented May 24, 2018

This issue was labeled as a wontfix on kubernetes' side, so there will not be a way to identify the name of the kubernetes cluster through the API.

However, perhaps there may be a way to fetch the name of the current context we are using from ~/.kube/config to connect to the cluster and supply that to the template engine. It's not the same (since everyone can have different names for the same cluster in their own ~/.kube/config) but is that "close enough" to what users are asking for?

@bacongobbler
Member

a similar ticket with some background context: #2613

@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@nikhita

nikhita commented Aug 27, 2018

/remove-lifecycle rotten


@bacongobbler
Member

we've moved out of the kubernetes org so the stale bot is no longer active. :)

@nikhita

nikhita commented Aug 27, 2018

> we've moved out of the kubernetes org so the stale bot is no longer active. :)

@bacongobbler Haha, I figured. :)

Btw, a quick GitHub search shows that there are many issues with lifecycle/* labels. I think it would be helpful if the rotten or stale labels were removed, just to avoid confusion. :)

@odinsy

odinsy commented Nov 7, 2019

There is one way, with the yq utility:

kubectl -n kube-system get configmap kubeadm-config -o jsonpath={.data.ClusterConfiguration} | yq r - clusterName
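(yq r is yq v3 syntax; with yq v4 the equivalent would be roughly the following, still assuming a kubeadm-provisioned cluster.)

kubectl -n kube-system get configmap kubeadm-config -o jsonpath='{.data.ClusterConfiguration}' | yq e '.clusterName' -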

@bacongobbler
Member

bacongobbler commented Nov 7, 2019

I guess that's only possible with clusters spawned with kubeadm, right? That command looks at the cluster name provided when kubeadm created the cluster.

In any case, that should be helpful for a small subset of users. Nice trick. 👍

@sgandon

sgandon commented Nov 14, 2019

The coredns ConfigMap seems to hold the value:

>k get cm coredns -n kube-system -oyaml
apiVersion: v1
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
          pods insecure
          upstream
          fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"Corefile":".:53 {\n    errors\n    health\n    kubernetes cluster.local in-addr.arpa ip6.arpa {\n      pods insecure\n      upstream\n      fallthrough in-addr.arpa ip6.arpa\n    }\n    prometheus :9153\n    proxy . /etc/resolv.conf\n    cache 30\n    loop\n    reload\n    loadbalance\n}\n"},"kind":"ConfigMap","metadata":{"annotations":{},"labels":{"eks.amazonaws.com/component":"coredns","k8s-app":"kube-dns"},"name":"coredns","namespace":"kube-system"}}
  creationTimestamp: "2019-06-14T14:03:02Z"
  labels:
    eks.amazonaws.com/component: coredns
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
  resourceVersion: "47"
  selfLink: /api/v1/namespaces/kube-system/configmaps/coredns
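Note that what the Corefile contains is the cluster DNS domain (cluster.local above), not a cluster name. If the DNS domain is what you need, a sketch of a one-liner to extract it:

kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | awk '$1 == "kubernetes" {print $2; exit}'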

@josh9191

Any updates on the issue?

@katsew

katsew commented Mar 19, 2020

Any updates on this issue?

I made the feature request, but this issue is discussing the same thing, so my issue was closed.

My idea is the same as this comment.

bacongobbler removed this from the Upcoming - Minor milestone Aug 14, 2020
@github-actions

This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.

@tetchel

tetchel commented Apr 15, 2021

This was closed as stale, but the community still wants it. It was even added to a milestone.

Can it be re-opened?

I'd love to be able to reference the cluster domain in my ingress host rather than needing the user to pass it as a value.

@jmuleiro

Seconded. My use case is getting the cluster name for a Job that creates objects in buckets, to use as a path.

@dwerder

dwerder commented Nov 7, 2021

It would be nice if this could be implemented.

@MariaPaypoint

We are waiting for this

@bacongobbler
Member

bacongobbler commented Dec 1, 2021

Repeating what I mentioned earlier...

This issue was labeled as a wontfix on Kubernetes' side. To answer the original question:

Is Kubernetes cluster name available to chart templates?

No, because Kubernetes has no concept of a "cluster name". We cannot support something that does not exist.

Your best bet would be to provide a custom clusterName value in your templates, which you can choose to override by passing kubectl config current-context as input either via --set or --values.

Helm cannot provide the current context's name as part of helm install or helm upgrade's built-in objects because one user could set their current context with the name "myCluster" and another user could use "default", which means upgrades and installs are nondeterministic from one upgrade to the next. This also does not work with users who use an in-cluster service account to authenticate with the Kubernetes cluster because there's no concept of a "current context".

If you can convince the Kubernetes authors to implement a "cluster ID" or a "cluster name" that can uniquely identify a cluster, we'd accept PRs to add that to Helm's built-in objects.
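A minimal sketch of that suggestion (the clusterName value, release, and chart names below are placeholders, not a Helm convention):

# values.yaml
clusterName: ""

# anywhere in a template
cluster: {{ .Values.clusterName | quote }}

# supply the current kubectl context at install time
helm upgrade --install myrelease ./mychart --set clusterName="$(kubectl config current-context)"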

@GounGG

GounGG commented Jun 22, 2022

I believe everyone performs update operations from a similar deployment machine, whose .kube/config holds configuration for many clusters. I need the name of the context currently in use to decide whether to do a differential deployment. This should be a very common requirement.

@jeffWelling

Posting this in case it saves other folks some time.

One solution that I've seen work: if you're using Terraform to create the cluster, have Terraform also create a ConfigMap containing the name of the cluster, then read that ConfigMap value from Helm.
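On the Helm side, a sketch of reading that value, assuming Terraform wrote a ConfigMap named cluster-info with a clusterName key into kube-system (all of these names are hypothetical):

{{- /* lookup returns an empty map under plain `helm template`, hence the guard */ -}}
{{- $cm := lookup "v1" "ConfigMap" "kube-system" "cluster-info" }}
{{- $clusterName := "" }}
{{- if $cm }}
{{- $clusterName = index $cm.data "clusterName" }}
{{- end }}
cluster: {{ $clusterName | quote }}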

@imperialguy

How about the cluster ID? Can I read the cluster ID using Helm and pass it to (say) a pod?

@emmeowzing

I want this feature, also.

@joejulian
Contributor

If you really want to have some unique piece of info, use lookup and grab the uuid of the default namespace. This is the closest thing you're going to have to finding a unique id for a kubernetes cluster.
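For example, a minimal template sketch of that approach (lookup returns an empty map during a plain helm template run, hence the guard):

{{- $ns := lookup "v1" "Namespace" "" "default" }}
{{- if $ns }}
clusterId: {{ $ns.metadata.uid | quote }}
{{- end }}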

joejulian added the wont fix label and removed the help wanted and feature labels May 10, 2023
@maximveksler

> If you really want to have some unique piece of info, use lookup and grab the uuid of the default namespace. This is the closest thing you're going to have to finding a unique id for a kubernetes cluster.

That's an interesting workaround. ty.

@joeloplot

joeloplot commented Jul 10, 2023

kubectl config current-context | cut -f2 -d/ is the name of the cluster, surely?
Whether or not it's unique, I mean.
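If what you're after is the cluster name recorded in kubeconfig (rather than the context name, which only happens to contain it under some naming schemes), this avoids assuming a user/cluster format:

kubectl config view --minify -o jsonpath='{.clusters[0].name}'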

@joeloplot

joeloplot commented Jul 10, 2023 via email

@tback
Contributor

tback commented Jul 10, 2023

That depends on your use case.
Relying on kubeconfig makes the code dependent on which host it is executed on. This is a pattern that many people try to avoid.

@joeloplot

joeloplot commented Jul 10, 2023 via email

@pkazi

pkazi commented Apr 22, 2024

For AWS EKS specifically, we can get it via user data. Run the command below from a pod with EC2 instance metadata access available:

curl -s http://169.254.169.254/latest/user-data | grep /etc/eks/bootstrap.sh | awk '{print $2}'
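On nodes where IMDSv2 is enforced, the same query needs a session token first (a sketch):

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/user-data | grep /etc/eks/bootstrap.sh | awk '{print $2}'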
