
Use the correct hostname in kubelet's kubeconfig #23

Conversation

anton-johansson
Collaborator

@anton-johansson anton-johansson commented Jan 7, 2019

Closes #22

There was a mix of hostvars[item].inventory_hostname and inventory_hostname in a task that had run_once: True, which gave all nodes the hostname of the first node in the credentials part of their kubeconfig.
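For context, a minimal sketch of the kind of pattern that causes this (the task name, paths and loop below are illustrative assumptions, not the role's exact code). With run_once, inventory_hostname always refers to the single host the task actually ran for, while hostvars[item].inventory_hostname follows the loop item, so mixing them hands every node the first node's credentials name:

# Sketch only: run_once + delegation to localhost, looping over all nodes.
# inventory_hostname is fixed to the first host; hostvars[item] is per-node.
- name: Set kubelet credentials in kubeconfig
  command: >
    kubectl config set-credentials system:node:{{ inventory_hostname }}
    --kubeconfig=/var/tmp/kubeconfig-{{ hostvars[item].inventory_hostname }}
  with_items: "{{ groups['nodes'] }}"
  delegate_to: localhost
  run_once: true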

Here's an example kubeconfig of my k8s-worker-02 before this fix:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <redacted>
    server: https://k8s-master-01:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:k8s-worker-02
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: system:node:k8s-worker-01
  user:
    client-certificate-data: <redacted>
    client-key-data: <redacted>

@amimof
Owner

amimof commented Jan 7, 2019

The run_once statement was added because the kubectl config commands would sometimes fail due to a lock file already being present. kubectl only lets one process modify a kubeconfig at a time, and since the task is delegated to localhost, running it for multiple nodes at once would cause an error.

I suggest adding the kubectl role to the masters and nodes groups and letting each host create its own kubeconfig, rather than delegating it to localhost and copying the files over.
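A rough sketch of what that could look like in the site playbook (the group and role names here are assumptions based on this suggestion, not the repository's actual layout):

# Sketch: run the kubectl role on the cluster hosts themselves, so each
# host builds its own kubeconfig locally with no delegation to localhost.
- hosts: masters:nodes
  roles:
    - kubectl
    - kubelet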

@anton-johansson
Collaborator Author

anton-johansson commented Jan 7, 2019

Yeah, that sounds like a better idea, indeed!

It could feel a bit redundant to have kubectl on the nodes and masters, though?

@amimof
Owner

amimof commented Jan 7, 2019

That means installing kubectl on localhost isn't necessary anymore, which will require some additional instructions in the README. I'll make some changes and push to this branch to see how it looks.

@amimof
Owner

amimof commented Jan 7, 2019

I haven't tried the latest commit, but now when we don't have with_items anymore, shouldn't {{ hostvars[item].inventory_hostname }} be replaced with {{ inventory_hostname }}?

Yes, it should. I will change that :)
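So the relevant line changes roughly like this (a hypothetical template excerpt, since the exact file and surrounding fields differ):

# Before: loop-based, needs the loop item to find the right host
user: system:node:{{ hostvars[item].inventory_hostname }}
# After: each host renders its own kubeconfig, so there is no item to index into
user: system:node:{{ inventory_hostname }}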

@amimof amimof requested review from amimof and removed request for amimof January 7, 2019 21:29
@anton-johansson anton-johansson mentioned this pull request Jan 8, 2019
@amimof
Owner

amimof commented Jan 8, 2019

@anton-johansson Have you had the chance to test the latest commit?

@anton-johansson
Collaborator Author

No, I can do that now.

@anton-johansson
Collaborator Author

anton-johansson commented Jan 8, 2019

Other than my issue in #25 (which isn't really related to this PR), this works as expected. I really dig the change where we got rid of kubectl altogether and used templates instead. Feels better!
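The template-based approach boils down to rendering the kubeconfig per host, along these lines (the file name kubeconfig.j2 and the variable names other than inventory_hostname are assumptions; the fields mirror the example above):

# kubeconfig.j2 (illustrative excerpt)
contexts:
- context:
    cluster: kubernetes
    user: system:node:{{ inventory_hostname }}
  name: default
users:
- name: system:node:{{ inventory_hostname }}
  user:
    client-certificate-data: {{ kubelet_client_cert }}
    client-key-data: {{ kubelet_client_key }}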

One thing though. Is the kubectl role necessary at all anymore?

@amimof
Owner

amimof commented Jan 8, 2019

Not really, but it's necessary on localhost until I figure out how to replace it and update the README with instructions, since your default ~/.kube/config won't be ready for your new cluster after an install.
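For reference, what the localhost piece buys you is roughly this: pointing the operator's own kubectl at the new cluster. A sketch of the idea (the commands, group name and user are assumptions, not the role's exact tasks):

# Sketch: configure the local ~/.kube/config for the new cluster
# (set-credentials with the admin certificate omitted for brevity).
- name: Configure local kubeconfig
  command: "{{ item }}"
  delegate_to: localhost
  run_once: true
  with_items:
    - kubectl config set-cluster kubernetes --server=https://{{ groups['masters'][0] }}:6443
    - kubectl config set-context default --cluster=kubernetes --user=admin
    - kubectl config use-context default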

@amimof amimof merged commit a209a43 into amimof:master Jan 8, 2019
@anton-johansson
Collaborator Author

Ah, of course. But it might not be a bad idea to keep it on localhost then? It's quite nice to get a ready ~/.kube/config. :)

Successfully merging this pull request may close these issues.

All nodes but the first one gets "unauthorized"