
Kubecfg overwritten in kops 1.19, without user flags specified #11021

Closed
MMeent opened this issue Mar 12, 2021 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@MMeent
Contributor

MMeent commented Mar 12, 2021

1. What kops version are you running? The command kops version will display this information.

1.19.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

1.19

3. What cloud provider are you using?
AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops update cluster --yes

5. What happened after the commands executed?

I0312 10:56:00.863174    1139 executor.go:111] Tasks: 0 done / 103 total; 54 can run
I0312 10:56:02.434043    1139 executor.go:111] Tasks: 54 done / 103 total; 17 can run
I0312 10:56:03.562220    1139 executor.go:111] Tasks: 71 done / 103 total; 26 can run
I0312 10:56:05.477061    1139 executor.go:111] Tasks: 97 done / 103 total; 6 can run
I0312 10:56:07.361688    1139 executor.go:111] Tasks: 103 done / 103 total; 0 can run
I0312 10:56:07.361751    1139 dns.go:156] Pre-creating DNS records
I0312 10:56:08.207608    1139 update_cluster.go:313] Exporting kubecfg for cluster
W0312 10:56:08.746877    1139 update_cluster.go:337] Exported kubecfg with no user authentication; use --admin, --user or --auth-plugin flags with `kops export kubecfg`

6. What did you expect to happen?

The kubecfg shouldn't have been updated, i.e. these lines should not have appeared:

I0312 10:56:08.207608    1139 update_cluster.go:313] Exporting kubecfg for cluster
W0312 10:56:08.746877    1139 update_cluster.go:337] Exported kubecfg with no user authentication; use --admin, --user or --auth-plugin flags with `kops export kubecfg`

7. Please provide your cluster manifest.
Not applicable.

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

9. Anything else we need to know?

I'm manually setting the server in my kubecfg to the internal name (api.internal.cluster-name). The API is only accessible from internal IPs, so defaulting to the public IPs is annoying, and having this configuration overwritten every time (while the release notes say that no longer happens unless specifically requested) is a chore to revert over and over again.
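
For reference, the manual edit described above can be done with kubectl config set-cluster rather than editing the file by hand; a rough sketch, with cluster-name as a placeholder for the actual cluster/context name in the kubeconfig:

> kubectl config set-cluster cluster-name --server=https://api.internal.cluster-name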

@MMeent
Contributor Author

MMeent commented Mar 12, 2021

Specifically, I'm annoyed that 1.19 still updates the kubecfg, even though the release notes explicitly state that this no longer happens by default ("kOps will no longer automatically export the kubernetes config on kops update cluster").

@hakman
Member

hakman commented Mar 18, 2021

Ref: #9990 #10105
/cc @justinsb

@michaelajr

michaelajr commented Mar 23, 2021

I confirmed this just now as well. It seems to be a result of --create-kube-config defaulting to true, which causes a user-less config to be written. Setting it to false gives the behavior outlined in the docs, where the existing config is not touched.
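
For anyone hitting the same thing, that workaround would look roughly like this (cluster-name is a placeholder):

> kops update cluster cluster-name --yes --create-kube-config=false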

@hakman
Member

hakman commented Mar 23, 2021

Thanks for your patience. We discussed this and @justinsb will look into it soon.

@justinsb
Member

Sorry about the behaviour here - we're trying to balance the UX of setting the current context (so that you don't have to specify it every time) against the security goal of not always exporting admin credentials.

There is a flag --internal BTW which should export the cluster's internal name - does that option mean you don't have to edit the kubeconfig file manually @MMeent?

I am looking into this; I can certainly clarify the docs to specify that it's the user config that we won't overwrite or export by default. I'm worried that if we don't overwrite the server config, it will cause a different class of problems for users when their configuration changes and they forget to export.

In addition, I'm looking to clean up the code, but I don't think that it's particularly easy to change the behaviour here.

My 2c is that we should work towards making it so that you don't have to edit the kubeconfig - i.e. exporting the internal API address by default (so it would be good to know whether the --internal flag works) - and also towards a secure configuration using the auth plugin, so we can once again configure admin credentials by default.

justinsb added a commit to justinsb/kops that referenced this issue Mar 27, 2021
Make a clearer distinction between exporting kubeconfig (including
server endpoints / certificates) vs exporting credentials.

Issue kubernetes#11021
@justinsb
Member

justinsb commented Apr 4, 2021

I was curious whether we had a bug with --internal; I didn't observe one. It might still not match user expectations, but it wasn't self-evidently broken:

> kops create cluster foo.example.com --zones us-east-2a
> kops update cluster foo.example.com --yes --internal

> kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://api.internal.foo.example.com
  name: foo.example.com
contexts:
- context:
    cluster: foo.example.com
    user: foo.example.com
  name: foo.example.com
current-context: foo.example.com
kind: Config
preferences: {}
users:
- name: foo.example.com
  user: {}

I checked and it seemed to work with kops export kubecfg --internal also.
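
For completeness, the export-only equivalent should look roughly like this, reusing the cluster name from the example above:

> kops export kubecfg foo.example.com --internal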

@hakman hakman removed the blocks-next label Apr 6, 2021
@MMeent
Contributor Author

MMeent commented Apr 8, 2021

Hmm, yes, using --internal works. I didn't know this option existed in this form; I probably noticed it but assumed it was a variant of the --internal flag of create cluster or something like that.

Regardless, the behaviour of overwriting existing cluster configurations is still unexpected, and I'd like it not to do that unless explicitly asked.

@justinsb
Member

justinsb commented Apr 9, 2021

@MMeent thanks for confirming. We do want to export the kubecfg when we're first creating the cluster; we do also want to export it if the endpoint has changed (e.g. if the cluster switches from DNS to a load balancer, although I'm not sure this is actually something that can be done!).

We do have the --create-kube-config=false flag. (And it's a bit of a hack, but you could always set KUBECONFIG=/tmp/stop-writing-my-real-kubeconfig).
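
Spelled out as a one-liner, that hack would look roughly like this (reusing the example cluster name from earlier in the thread):

> KUBECONFIG=/tmp/stop-writing-my-real-kubeconfig kops update cluster foo.example.com --yes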

One thing I'd like us to do more of is use our kops configuration file (~/.kops/config); currently that's limited to basically just configuring kops_state_store, but we could make create_kube_config configurable there. We'd probably also have to have different options for create vs update, but that seems doable...
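
To make that concrete, here is a purely hypothetical sketch of what ~/.kops/config could look like if create_kube_config were added; only kops_state_store is actually supported today, and the bucket name is a placeholder:

# ~/.kops/config
# Supported today:
kops_state_store: s3://example-kops-state-store
# Hypothetical, as proposed above - not implemented:
create_kube_config: false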

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 8, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 7, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to the /close in the triage comment above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
