Bring up a kubernetes cluster using a coreos image as worker nodes #7445
Conversation
OS_DISTRIBUTION=${KUBE_OS_DISTRIBUTION:-debian}
MASTER_IMAGE=${KUBE_GCE_MASTER_IMAGE:-container-vm-v20150317}
MASTER_IMAGE_PROJECT=${KUBE_GCE_MASTER_PROJECT:-google-containers}
MINION_IMAGE=${KUBE_GCE_MINION_IMAGE:-container-vm-v20150317}
nit: s/MINION/NODE/g
cc @bakins
Looks reasonable to me. A comment about how to enable coreos would be useful.
Looks much better than the approach I was taking: creating a complete new provider.
Fixed an issue related to kube-proxy-token. Now the coreos cluster (worker nodes only) is up and running:
Scheduled a pod to a node (one node in my cluster):
@yifan-gu could you please send a PR to enable the rkt runtime for the kubelet? Once this is merged, I will disable docker and test rkt support throughout. Thanks!
@dchen1107 Thanks for this!! I just finished the basic implementation for the missing functions. I will clean it up for a review, and we still need to refactor the kubelet a bit to let the runtime provide a syncPod interface so that I can enable rkt.
I guess Travis doesn't run the tests if we only change cluster turnup scripts/config? Merging since @dchen1107 says e2e passed. |
Needs rebase, actually. |
@dchen1107 Or we can skip the refactor for now, and just hijack a
@@ -646,6 +577,7 @@ function kube-up {
for (( i=0; i<${#MINION_NAMES[@]}; i++)); do
create-route "${MINION_NAMES[$i]}" "${MINION_IP_RANGES[$i]}" &
add-instance-metadata "${MINION_NAMES[$i]}" "node-ip-range=${MINION_IP_RANGES[$i]}" &
add-instance-metadata "${MINION_NAMES[$i]}" "node-name=${MINION_NAMES[$i]}" &
This won't work; it's racy if you have a lot of nodes. You need to use the multi-KV form of add-instance-metadata that does it in one GCE command, otherwise the command may fail.
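To illustrate the suggestion, here is a hedged sketch (not the actual cluster-script helper) of batching several key/value pairs into a single `gcloud` call, so only one metadata write per instance contends on the server side. The function name `add_instance_metadata_multi` is hypothetical; `gcloud compute instances add-metadata --metadata` does accept comma-separated pairs.

```shell
# Sketch: set several metadata keys on one instance with a single GCE command,
# instead of launching one backgrounded add-instance-metadata per key.
add_instance_metadata_multi() {
  local instance="$1"; shift
  local kv joined=""
  for kv in "$@"; do
    # Join key=value pairs with commas: a=1,b=2,...
    joined="${joined:+${joined},}${kv}"
  done
  # One write, one fingerprint check, no self-inflicted race.
  gcloud compute instances add-metadata "${instance}" --metadata "${joined}"
}

# Usage: add_instance_metadata_multi "${MINION_NAMES[$i]}" \
#   "node-ip-range=${MINION_IP_RANGES[$i]}" "node-name=${MINION_NAMES[$i]}"
```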
(The instance metadata update uses an opportunistic-locking approach, so this just pounds the same metadata from two processes and it fails. It's easy to see with even a tiny node count.)
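For readers unfamiliar with that failure mode: GCE metadata writes carry a fingerprint of the last-seen state, and the server rejects a write whose fingerprint has gone stale, so a concurrent writer must re-read and retry. A minimal retry-loop sketch (the function name and backoff policy are illustrative, not from this PR):

```shell
# Sketch: retry a metadata write that can lose an optimistic-locking race.
# Each gcloud invocation re-reads the current fingerprint before writing.
update_metadata_with_retry() {
  local instance="$1" kv="$2" attempt
  for attempt in 1 2 3 4 5; do
    if gcloud compute instances add-metadata "${instance}" --metadata "${kv}"; then
      return 0
    fi
    # Back off before retrying after a fingerprint conflict.
    sleep "${attempt}"
  done
  return 1
}
```

Two processes hammering the same instance simply force each other around this loop, which is why batching into one command (as suggested above) is the better fix.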
Wait, where is node-name even used? I'm not seeing this new metadata even consumed.
Yes it is unused due to the issue you pointed out here. I removed it.
Please don't merge this as is. You're moving a file that is actually pushed as part of the release. (You could try to run
@zmerlynn If I move configure-vm.sh back to cluster/gce/, is the problem resolved? configure-vm.sh is not required for the coreos worker node now. We can clean it up later.
Yes, if it stays in place that's fine.
Ok, I did
Ran 36 of 41 Specs in 1124.775 seconds
I think we can merge it now.
Thanks @dchen1107! Merging. If anyone hits any issues due to this PR, please ping me and @dchen1107.
Bring up a kubernetes cluster using a coreos image as worker nodes
@andyzheng0831 here is what I had so far. Hope this can help you with your project.
@andyzheng0831 again :-)
By default, the gce provider uses the ContainerVM image. I ran the e2e tests against the default configuration, and all tests passed:
Ran 36 of 41 Specs in 1098.626 seconds
SUCCESS! -- 36 Passed | 0 Failed | 1 Pending | 4 Skipped
I0428 01:48:18.127752 15780 driver.go:96] All tests pass
To bring up a kubernetes cluster using a coreos image with rkt installed, export the following variables first, then call kube-up.sh:
export KUBE_OS_DISTRIBUTION=coreos
export KUBE_GCE_MINION_IMAGE=coreos-stable-633-1-0-v20150414
export KUBE_GCE_MINION_PROJECT=coreos-cloud
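The three exports above can be wrapped in a small helper; this is a hedged sketch (the function name is mine, the values are the ones quoted in this PR from April 2015, and newer CoreOS releases will have different image names):

```shell
# Sketch: bring up a cluster with coreos worker nodes by exporting the
# variables this PR introduces, then invoking the normal kube-up script.
bring_up_coreos_cluster() {
  # Consumed by the cluster/gce config scripts (values from this PR).
  export KUBE_OS_DISTRIBUTION=coreos
  export KUBE_GCE_MINION_IMAGE=coreos-stable-633-1-0-v20150414
  export KUBE_GCE_MINION_PROJECT=coreos-cloud
  # Default path is assumed; pass a different launcher to override.
  "${1:-cluster/kube-up.sh}"
}
```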
The new cloud provider (gce-coreos) worked until I rebased to the latest master. I believe the breakage was introduced by kube_proxy_token, which was merged yesterday afternoon. Please note that "works" means
The next step is integrating the kubelet with the rkt runtime, so that we can announce experimental support for rkt.
cc/ @bgrant0607 @vmarmol @yifan-gu