
kops toolbox template: ignores config file for clusterName in templates #5454

Closed

ms4720 opened this issue Jul 18, 2018 · 19 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ms4720
Contributor

ms4720 commented Jul 18, 2018

------------- BUG REPORT TEMPLATE --------------------

1. What kops version are you running? The command kops version will display this information.
Version 1.10.0-alpha.1

2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.
v1.10.5

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?
kops toolbox template --values=test.yml --template=template.yml --format-yaml

test.yml
-----------
clusterName: bob

template.yml
---------
kops.k8s.io/cluster: {{ .clusterName }}

5. What happened after the commands executed?
Using cluster from kubectl context: kube.XXXXX.com

kops.k8s.io/cluster: kube.XXXXXX.com

With no ~/.kube/config:

kops toolbox template --values=config/test.yml --template=instance_groups/sample.yml --format-yaml
W0718 18:37:54.026010   44048 root.go:248] no context set in kubecfg
kops.k8s.io/cluster: null

I did not have a kube config file with multiple clusters in it, so I cannot test that behavior.

6. What did you expect to happen?
kops.k8s.io/cluster: bob

Template generation should use the supplied config file; pulling values out of the kube config can lead to nasty surprises.

7. Please provide your cluster manifest. Execute kops get --name my.example.com -o yaml to display your cluster manifest. You may want to remove your cluster name and other sensitive information.

N/A

8. Please run the commands with most verbose logging by adding the -v 10 flag. Paste the logs into this report, or in a gist and provide the gist link here.

N/A

9. Anything else do we need to know?

The workaround is to just use a different string for the key.
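
A concrete version of that workaround, sketched from the files in the report above (the key name myClusterName is illustrative; any key other than clusterName avoids the collision):

test.yml
-----------
myClusterName: bob

template.yml
---------
kops.k8s.io/cluster: {{ .myClusterName }}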

@tomdottom

Possible duplicate of #5015

@ms4720
Contributor Author

ms4720 commented Jul 20, 2018

Similar; if I remember correctly it died with no ~/.kube/config file. Related question: are there any other magic variables in the code base? When making templates I really do not want it reading ~/.kube/config for anything.

@ms4720
Contributor Author

ms4720 commented Jul 22, 2018

I wonder if there are any other magic variables floating around.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 20, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 19, 2018
@ms4720
Contributor Author

ms4720 commented Nov 20, 2018

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Nov 20, 2018
@rifelpet
Member

rifelpet commented Jan 2, 2019

It looks like this is the line in question:

context["clusterName"] = options.clusterName

If this were guarded with a conditional that checked whether clusterName was already defined in the context, then you would be able to set it in a values file and kops would no longer rely on the kubeconfig file.
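
A minimal sketch of that guard, assuming context is the map the --values files are merged into and that options.clusterName still carries the kubeconfig-derived name (only the conditional is new; nothing here is from the actual kops source):

// Fall back to the kubeconfig-derived cluster name only when the
// values files did not already define one.
if _, found := context["clusterName"]; !found {
	context["clusterName"] = options.clusterName
}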

@ms4720
Contributor Author

ms4720 commented Jan 2, 2019

@rifelpet thanks for looking into it
I just might get a micro commit out of this

@grebois

grebois commented Feb 27, 2019

Same issue with Version 1.11.0 (git-2c2042465).

@2tim

2tim commented Mar 21, 2019

Any update on this? It seems to render the template function unusable without a kubecfg. I was hoping to use this feature in a CI pipeline.

@ms4720
Contributor Author

ms4720 commented Mar 22, 2019

@2tim I do not think you need a kubeconfig; you just need a config variable not called "clusterName" ("myClusterName" works fine) that you reference from the template to insert the cluster name into the YAML file. kops will still put out a warning, though, if I remember correctly.
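
A concrete invocation under that workaround, assuming the myClusterName files sketched after the original report (kops may still print the "no context set in kubecfg" warning seen above):

kops toolbox template --values=test.yml --template=template.yml --format-yaml
kops.k8s.io/cluster: bob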

@2tim

2tim commented Mar 22, 2019 via email

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Jun 20, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Jul 20, 2019
@lcrisci
Contributor

lcrisci commented Jul 24, 2019

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Jul 24, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Oct 22, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Nov 21, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
