Setup fluentd-ds-ready label in startup script not in kubelet #42931

Merged
merged 1 commit on Mar 16, 2017
7 changes: 7 additions & 0 deletions cluster/gce/config-default.sh
@@ -106,6 +106,13 @@ ENABLE_L7_LOADBALANCING="${KUBE_ENABLE_L7_LOADBALANCING:-glbc}"
# standalone - Heapster only. Metrics available via Heapster REST API.
ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}"

# Historically fluentd was a manifest pod and was later migrated to a DaemonSet.
# To avoid a situation during a cluster upgrade where two instances of fluentd
# run on the same node, the kubelet needs to mark every node on which fluentd is
# not running as a manifest pod with the appropriate label.
# TODO(piosz): remove this in 1.8
NODE_LABELS="${KUBE_NODE_LABELS:-alpha.kubernetes.io/fluentd-ds-ready=true}"

# Optional: Enable node logging.
ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}"
LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-gcp}" # options: elasticsearch, gcp
7 changes: 7 additions & 0 deletions cluster/gce/config-test.sh
@@ -137,6 +137,13 @@ CONTROLLER_MANAGER_TEST_ARGS="${CONTROLLER_MANAGER_TEST_ARGS:-} ${TEST_CLUSTER_R
SCHEDULER_TEST_ARGS="${SCHEDULER_TEST_ARGS:-} ${TEST_CLUSTER_API_CONTENT_TYPE}"
KUBEPROXY_TEST_ARGS="${KUBEPROXY_TEST_ARGS:-} ${TEST_CLUSTER_API_CONTENT_TYPE}"

# Historically fluentd was a manifest pod and was later migrated to a DaemonSet.
# To avoid a situation during a cluster upgrade where two instances of fluentd
# run on the same node, the kubelet needs to mark every node on which fluentd is
# not running as a manifest pod with the appropriate label.
# TODO(piosz): remove this in 1.8
NODE_LABELS="${KUBE_NODE_LABELS:-alpha.kubernetes.io/fluentd-ds-ready=true}"

# Optional: Enable node logging.
ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}"
LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-gcp}" # options: elasticsearch, gcp
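The NODE_LABELS value added above is a comma-separated KEY=VALUE list that the GCE startup scripts pass down to the kubelet (via its --node-labels flag), so upgraded nodes still receive the fluentd-ds-ready label; it now comes from cluster configuration instead of being hardcoded in the kubelet. Below is a minimal sketch of how such a list could be parsed into a label map; it is an illustration only, not the kubelet's actual flag-parsing code.

package main

import (
	"fmt"
	"strings"
)

// parseNodeLabels splits a comma-separated KEY=VALUE list, such as the
// NODE_LABELS default above, into a label map. Simplified sketch only.
func parseNodeLabels(raw string) (map[string]string, error) {
	labels := map[string]string{}
	if raw == "" {
		return labels, nil
	}
	for _, pair := range strings.Split(raw, ",") {
		kv := strings.SplitN(pair, "=", 2)
		if len(kv) != 2 {
			return nil, fmt.Errorf("malformed label %q, expected key=value", pair)
		}
		labels[strings.TrimSpace(kv[0])] = strings.TrimSpace(kv[1])
	}
	return labels, nil
}

func main() {
	labels, err := parseNodeLabels("alpha.kubernetes.io/fluentd-ds-ready=true")
	if err != nil {
		panic(err)
	}
	fmt.Println(labels) // map[alpha.kubernetes.io/fluentd-ds-ready:true]
}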
7 changes: 3 additions & 4 deletions pkg/kubelet/kubelet_node_status.go
@@ -192,10 +192,9 @@ func (kl *Kubelet) initialNode() (*v1.Node, error) {
 		ObjectMeta: metav1.ObjectMeta{
 			Name: string(kl.nodeName),
 			Labels: map[string]string{
-				metav1.LabelHostname:       kl.hostname,
-				metav1.LabelOS:             goruntime.GOOS,
-				metav1.LabelArch:           goruntime.GOARCH,
-				metav1.LabelFluentdDsReady: "true",
+				metav1.LabelHostname: kl.hostname,
+				metav1.LabelOS:       goruntime.GOOS,
+				metav1.LabelArch:     goruntime.GOARCH,
 			},
 		},
 		Spec: v1.NodeSpec{
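With the hardcoded metav1.LabelFluentdDsReady entry removed from initialNode, only nodes whose startup configuration supplies the label should match the node selector the fluentd DaemonSet uses. The sketch below illustrates the matching semantics that selection relies on; it is not the scheduler's implementation, and the node names are hypothetical examples.

package main

import "fmt"

// matchesNodeSelector reports whether a node's labels satisfy a pod's
// nodeSelector: every selector key must be present on the node with the
// same value. Illustrative sketch of the selection semantics only.
func matchesNodeSelector(nodeLabels, nodeSelector map[string]string) bool {
	for key, want := range nodeSelector {
		if got, ok := nodeLabels[key]; !ok || got != want {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"alpha.kubernetes.io/fluentd-ds-ready": "true"}

	upgradedNode := map[string]string{
		"kubernetes.io/hostname":               "node-1",
		"alpha.kubernetes.io/fluentd-ds-ready": "true",
	}
	legacyNode := map[string]string{
		// Still runs fluentd as a manifest pod, so it is not labeled.
		"kubernetes.io/hostname": "node-2",
	}

	fmt.Println(matchesNodeSelector(upgradedNode, selector)) // true  -> DaemonSet fluentd scheduled
	fmt.Println(matchesNodeSelector(legacyNode, selector))   // false -> no second fluentd instance
}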