First step to supporting multiple k8s clusters #6006
Conversation
Needs rebase
(Didn't mean to hit close)
f0f0c6f to 1a48420
There's actually more to do here, because instance discovery isn't "cluster-id" aware. This is the first step though, regardless. If we'd rather wait till this all works, LMK and I can mark as WIP or just close it until it is ready. (I hadn't realized there was more work to do!)
Thanks justinsb@.
Sorry I missed the community hangout - I didn't see this in time.

This is not trying to solve the ubernetes challenge! Much more mundane: currently on AWS, if you deploy two k8s clusters into the same region, they will interfere with each other. This is just trying to let you deploy two k8s clusters into the same region, by giving each a tag (KubernetesCluster) and making sure that everything filters by that tag.

It was straightforward to do this for the scripts, but the cloud provider for AWS still isn't aware of this tag. There is a case to be made for merging this as-is (because that will mean that people will have the tags on clusters they deploy now, even though the cloudprovider doesn't yet do the additional filtering), but there's more work to be done here for a complete solution.
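The tag-based isolation described above can be sketched without touching AWS at all. A minimal simulation - the instance IDs, tag values, and the tab-separated record format here are all hypothetical stand-ins for real `aws ec2 describe-instances` output:

```shell
#!/usr/bin/env bash
# Sketch of per-cluster filtering: keep only instances whose
# KubernetesCluster tag matches this deployment's CLUSTER_ID.
set -euo pipefail

CLUSTER_ID="cluster-a"

# Fake inventory, one record per line: "<instance-id><TAB><KubernetesCluster tag>"
instances=$'i-0aa1\tcluster-a\ni-0bb2\tcluster-b\ni-0cc3\tcluster-a'

# Same effect the real script gets from
# `--filters Name=tag:KubernetesCluster,Values=${CLUSTER_ID}`.
matching_ids=$(printf '%s\n' "${instances}" \
  | awk -F'\t' -v c="${CLUSTER_ID}" '$2 == c {print $1}')

printf '%s\n' "${matching_ids}"
```

With two clusters in the same region, each deployment would only ever see (and delete) its own tagged instances.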
I'm changing this away from WIP, because it fixes a very bad bug where kube-down tries to delete unrelated instances. For me, it was my jenkins instance running the tests that got deleted, which was annoying, but it could have been much worse. I will do part 2 of this separately (where I pass in the KubernetesCluster into e.g. the cloud controller manager, so that it filters intelligently).
All looks reasonable to me. I'd like brendandburns@ to cast an eye over it too, as it looks like he had some ideas around this stuff, based on the TODO comments in the code (and he's been complaining about not receiving enough code review requests - ha :-)
Thanks - I tracked down the accidental deletion to the fact that we're matching by name. I had an instance with the same name as my k8s master: my jenkins master node was named jenkins-master, and I had created a kubernetes cluster with an INSTANCE_PREFIX of jenkins, so my k8s master was also named jenkins-master. All very obvious in hindsight, but really we need the extra filtering this PR adds.
brendandburns@ (or erictune@ oncall) this is still waiting for a review and commit I believe. |
    --filters Name=vpc-id,Values=${vpc_id} \
              Name=tag:KubernetesCluster,Values=${CLUSTER_ID} \
    --query Reservations[].Instances[].InstanceId)
  if [[ -n ${instance_ids} ]]; then
quotes around "${instance_ids}"
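As a runnable illustration of that nit (variable name and values here are hypothetical): inside bash's `[[ ]]` the word is not split, so the quotes are a style/consistency fix rather than a behavior change.

```shell
#!/usr/bin/env bash
# The reviewer's suggested quoting: "${instance_ids}" rather than bare
# ${instance_ids}. Inside [[ ]] bash does not word-split, so behavior is
# identical, but quoting is the defensive convention used elsewhere.
instance_ids="i-0aa1 i-0bb2"   # hypothetical IDs
if [[ -n "${instance_ids}" ]]; then
  echo "would terminate: ${instance_ids}"
fi
```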
Sorry, saw this in my email and did a drive-by review. LGTM except nits.
@justinsb I've been following your work on AWS support. I'm hoping to run Kubernetes on AWS and it seems like there may be a few things that need to be worked out first? I'd love to help out if I can - any pointers or small areas where I might be able to help out? Sorry for commenting here, didn't know how else to get involved.
@iterion The big remaining feature is support for ELB. That's pretty complicated, but I think we might be close now - at least to having a good path forwards. I have summarized that here: #5225 (comment). Once that merges, we should be close to feature parity with GCE, and then I'll be focusing on things like this PR, which will make everything operationally more comfortable.

I think AWS head is broken right now, sadly, due to #7072. I was trying to do a pretty broad fix, but I think I should probably just do a minimal bugfix. Once that goes in, AWS should "just work" again, though without load-balancing, which right now means no realistic way to access services outside the cluster :-(

If you want to try things out, feel free. You can also email me (my email is very guessable). If you wanted to take on some code, you could take over this PR, but really I think the priority should be ELB and/or exposing services outside the cluster...
1a48420 to 9253ae6
Fixed the bash style problems - thanks & sorry for the delay.
Sorry for my delay again, too!
First step to supporting multiple k8s clusters
We add a KubernetesCluster tag, and then filter on it.