Register the kubelet on the master node with an apiserver. #12349
Conversation
/cc @dchen1107
GCE e2e build/test passed for commit 21628c6f4fc82746497279b5bc26837ac9e75bdd.
The shippable error is
which seems unrelated to this change. I've restarted shippable.
Running a more complete e2e suite locally:
found=$(cat "${MINIONS_FILE}" | sed '1d' | grep -c .) || true
ready=$(cat "${MINIONS_FILE}" | sed '1d' | awk '{print $NF}' | grep -c '^Ready') || true

-if (( ${found} == "${NUM_MINIONS}" )) && (( ${ready} == "${NUM_MINIONS}")); then
+if (( ${found} == "${EXPECTED_NUM_NODES}" )) && (( ${ready} == "${EXPECTED_NUM_NODES}")); then
nit: "${found}" and "${ready}"? (pre-existing).
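For reference, the quoted form the nit asks for would look something like this (a sketch only; bash strips the quotes inside `(( ))`, so this is a style-consistency fix to match the existing right-hand sides rather than a behavior change):

```bash
# Quoting ${found} and ${ready} to match the quoting on the right-hand sides.
if (( "${found}" == "${EXPECTED_NUM_NODES}" )) && (( "${ready}" == "${EXPECTED_NUM_NODES}" )); then
  echo "All ${EXPECTED_NUM_NODES} nodes found and Ready"
fi
```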
Fixed and pushed squashed changes.
LGTM modulo nit
Next shippable flake:
Force-pushed from 21628c6 to 8df33bc.
Shippable is running again on the slightly modified changes. Third time is the charm?
GCE e2e build/test passed for commit 8df33bc.
"metadata": {"name":"kube-scheduler"}, | ||
"metadata": { | ||
"name":"kube-scheduler", | ||
"namespace": "kube-system" |
Are we sure we want all the master components in the same namespace as our addons?
Not sure, no. But it seemed weird to have them in the default namespace, and kube-system is where we've been putting the system components. The master components are also system components, so that seemed like the logical place to move them to.
I guess I can live with it in kube-system for now.
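One way to sanity-check the move (a hypothetical invocation; assumes kubectl is already configured against the cluster):

```bash
# After this change, the master components should be listed in the same
# namespace as the addons.
kubectl get pods --namespace=kube-system
```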
LGTM.
Register the kubelet on the master node with an apiserver.
cc/ @vishh Now we can make sure heapster has master components' stats. :-)
Yay! Thanks for this PR!
Ok, so something in this PR broke the Vagrant setup. I'm guessing it's the removal of kubernetes_auth. @justinsb, are you still working fine on AWS? I'm not sure why that had to be removed as part of this PR, but I need more time to dig into what is actually happening here and why.
@@ -31,19 +31,6 @@
  - mode: 400
  - makedirs: true
Was there a problem keeping /var/lib/kubelet/kubernetes_auth in addition to /var/lib/kubelet/kubeconfig?
I was trying to clean it up since I thought everyone had moved off of it ages ago.
I would also like to rip it out of https://github.com/GoogleCloudPlatform/kubernetes/blob/master/cmd/kubelet/app/server.go FWIW.
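For context, a minimal kubeconfig of the kind that replaces kubernetes_auth might look roughly like this (a sketch; the server address, names, and empty credentials are assumptions, not necessarily what the salt templates generate):

```bash
# Hypothetical minimal kubeconfig written to the path discussed above.
cat > /var/lib/kubelet/kubeconfig <<'EOF'
apiVersion: v1
kind: Config
clusters:
- name: local
  cluster:
    server: http://127.0.0.1:8080
contexts:
- name: local
  context:
    cluster: local
    user: kubelet
current-context: local
users:
- name: kubelet
  user: {}
EOF
```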
Is it possible to run pods on the Kubernetes master on AWS (not GCE)? Or is it at least planned?
It's possible on any Kubernetes deployment. It's just a matter of which flags you pass to the kubelet running on the master. How are you creating your cluster on AWS? Are you using …
I'm using …
I don't think this flag has been plumbed through the AWS startup scripts (this PR only did it for the GCE ones, since that's all I can test).
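For anyone plumbing this through other providers, the shape of the change is roughly the following (a sketch; flag names are from the kubelet of that era and the values are assumptions — check `kubelet --help` on your version):

```bash
# On the master node: point the kubelet at the local apiserver and let it
# register itself, so the master's pods become visible to the cluster.
kubelet \
  --api-servers=http://127.0.0.1:8080 \
  --register-node=true \
  --config=/etc/kubernetes/manifests
```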
Is there any help someone can provide (like manually testing your PRs) to make this feature available for AWS users? :)
What are the downsides of scheduling pods on the master?
This option is separated from the apiserver running locally on the master node so that it can be optionally enabled or disabled as needed.

Also, fix the healthchecking configuration for the master components, which previously only worked by coincidence: if a kubelet doesn't register with a master, it never bothers to figure out what its local address is, in which case it ends up constructing a URL like http://:8080/healthz for the HTTP probe. This happens to work on the master because all of the pods are using host networking and explicitly binding to 127.0.0.1. Once the kubelet is registered with the master and determines the local node address, it tries to healthcheck an address where the pod isn't listening, and the kubelet periodically restarts each master component when the liveness probe fails.
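To make the failure mode concrete: with host networking and the components bound explicitly to 127.0.0.1, a probe only succeeds against loopback. A manual check might look like this (port 8080 as in the commit message; the node-address lookup is a hypothetical stand-in for what the kubelet does):

```bash
# Succeeds on the master: the component binds explicitly to loopback.
curl -s http://127.0.0.1:8080/healthz

# Fails once the kubelet has registered and substitutes the node address,
# because nothing is listening there.
node_address=$(hostname -I | awk '{print $1}')  # hypothetical lookup
curl -s "http://${node_address}:8080/healthz"
```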