
Adding to running cluster #24

Closed · klausenbusk opened this issue Sep 20, 2017 · 6 comments
@klausenbusk (Contributor)

Hello

I tried adding digitalocean-cloud-controller-manager last night, but it didn't work out.
The cluster was created with bootkube, and the manifests were updated to match v0.6.2.

Here are the steps I took (the flag changes are sketched below):

  1. Added --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname to the kube-apiserver.
  2. Deployed digitalocean-cloud-controller-manager.
  3. Added --cloud-provider=external to the kube-apiserver, kube-controller-manager, and the kubelets.
  4. Changed --hostname-override to --hostname-override=%H.
  5. Restarted all the kubelets.

After that, digitalocean-cloud-controller-manager didn't add labels such as the region label, and the node.cloudprovider.kubernetes.io/uninitialized taint wasn't removed either.
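A rough sketch of where those flags go, assuming a bootkube-style setup with static-pod manifests for the control plane and a systemd-managed kubelet (the exact paths and unit names here are assumptions, not taken from this cluster):

```sh
# kube-apiserver manifest: add
#   --cloud-provider=external
#   --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
#
# kube-controller-manager manifest: add
#   --cloud-provider=external
#
# kubelet (systemd unit or a drop-in): add/change
#   --cloud-provider=external
#   --hostname-override=%H   # %H is the systemd specifier for the machine hostname
#
# then, on each node in turn:
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```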

Did I miss something, or is updating a running cluster not supported?
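For what it's worth, the symptom can be confirmed directly from the node objects; a minimal check:

```sh
# list each node with its taint keys; an uninitialized node still carries
# node.cloudprovider.kubernetes.io/uninitialized
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints[*].key}{"\n"}{end}'
```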

The cluster ended up crashing (I restarted too many kubelets at the same time, and digitalocean-cloud-controller-manager then removed the node, so the pods were evicted). In the end I rolled back to the previous configuration, but only after trying to get digitalocean-cloud-controller-manager working for a few hours.
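In hindsight, restarting the kubelets one node at a time and waiting for each to report Ready again would likely have avoided the mass eviction. A minimal sketch, assuming SSH access to the nodes and that node names resolve as hostnames:

```sh
for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do
  ssh "$node" sudo systemctl restart kubelet
  # block until this node is Ready again before touching the next one
  until kubectl get node "$node" --no-headers | grep -qw Ready; do
    sleep 5
  done
done
```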

@andrewsykim (Contributor)

Sorry for not getting to this sooner. Can you show me some of the logs from digitalocean-cloud-controller-manager?

@klausenbusk (Contributor, Author)

> Can you show me some of the logs from digitalocean-cloud-controller-manager?

Sorry, I can't. Everything got a bit "chaotic", so saving logs wasn't the first priority.

I will try again on a test cluster sometime in the future, as automatic labeling is very useful.

But I do remember getting an error about connecting to 169.254.169.254 (the metadata service), which I solved by changing some firewall rules.
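For anyone hitting the same thing: a quick way to verify the droplet can reach the metadata service (the endpoint below is part of DigitalOcean's documented metadata API):

```sh
# run on the droplet; a hang or timeout here points at firewall rules
curl -s http://169.254.169.254/metadata/v1/region
# prints the droplet's region slug, e.g. "ams3"
```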

BTW: Is the region logic even working? As I understand the code, the region is pulled from the metadata service on startup, so the region label will be set to the region of the droplet that digitalocean-cloud-controller-manager is running on. That means the region label won't match if you run droplets in multiple regions.
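This is easy to see on a cluster, assuming the region label key is the beta failure-domain one Kubernetes uses around this version:

```sh
# with the metadata-at-startup logic, every node would show the same region:
# that of the droplet hosting the digitalocean-cloud-controller-manager pod
kubectl get nodes -L failure-domain.beta.kubernetes.io/region
```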

@odacremolbap (Collaborator) commented Sep 26, 2017

That's right, @klausenbusk.

I wouldn't expect a cluster to span multiple regions; the same is true for any other cloud provider. Issues due to widely varying latencies, plus the possibility of network partitioning, make multi-region deployments a better fit for Kubernetes federation.

@andrewsykim (Contributor)

@klausenbusk good find! Luckily, for v1.8 we're removing the dependency on metadata, as the logic is technically incorrect. #18 should address that.

@klausenbusk (Contributor, Author)

> I wouldn't expect a cluster to span multiple regions; the same is true for any other cloud provider. Issues due to widely varying latencies, plus the possibility of network partitioning, make multi-region deployments a better fit for Kubernetes federation.

For a small cluster, federation seems like overkill (and too much overhead). My plan is to run a master node in each of London, Amsterdam, and Frankfurt, and distribute the worker nodes evenly across the locations.
Ideally I can run the cluster autoscaler with DO at some point and let it add extra nodes if a location goes down.

> the possibility of network partitioning

I'm not really sure how big an issue that is. Everything is stateless, and nothing can write to the database without quorum (with a master in each of the three regions, losing one region still leaves a majority), so I just need a healthcheck service in every region, and that should do it.

@klausenbusk (Contributor, Author)

I think this was caused by kubernetes/kubernetes#55633, so I will have to live without the DO CCM until that issue gets fixed.

Closing the issue.
