Updating kubelet config doesn't mark nodes as NeedsUpdate #3076

Closed
marek-obuchowicz opened this issue Jul 28, 2017 · 4 comments
@marek-obuchowicz

I have edited the cluster configuration with `kops edit cluster` and added imageGC settings for the kubelet there. Afterwards the settings were updated in the config files in the S3 bucket. However, the settings aren't picked up by the already-running kubelets, and the nodes are not marked as NeedsUpdate:

Using cluster from kubectl context: p1.hidden.xxx

NAME			STATUS	NEEDUPDATE	READY	MIN	MAX	NODES
master-us-east-1a	Ready	0		1	1	1	1
master-us-east-1c	Ready	0		1	1	1	1
master-us-east-1e	Ready	0		1	1	1	1
nodes			Ready	0		3	3	3	3
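
For reference, the edit was along these lines (a rough sketch of the relevant part of the cluster spec; the threshold values here are illustrative placeholders, not the ones from my cluster):

```yaml
# Excerpt of the cluster spec as edited via `kops edit cluster`.
# The imageGC* values below are placeholders for illustration.
spec:
  kubelet:
    imageGCHighThresholdPercent: 80   # start image garbage collection at 80% disk usage
    imageGCLowThresholdPercent: 60    # collect down to 60% disk usage
```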

I had to manually change the IG settings of all instance groups to "force" the nodes to be marked as NeedsUpdate, which then allowed me to run a rolling-update.
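
The workaround looked roughly like this (a sketch; the cluster name and instance-group names are the placeholders from the output above, and a standard kops setup is assumed):

```sh
# Make an innocuous change to each instance group so kops sees a diff
# (e.g. tweak a label or another harmless field in the IG spec).
kops edit ig nodes --name p1.hidden.xxx
kops edit ig master-us-east-1a --name p1.hidden.xxx
# ...repeat for the remaining master instance groups...

# Apply the change and roll the nodes so the new kubelet settings are picked up.
kops update cluster p1.hidden.xxx --yes
kops rolling-update cluster p1.hidden.xxx --yes
```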

@gambol99
Contributor

Yeah ... I've noticed this as well. Changing pretty much any of the componentConfig settings doesn't appear to show that an update is required.

@gambol99
Contributor

gambol99 commented Aug 1, 2017

The problem is due to the fact that the 'edit' and 'replace' commands automatically update the bucket, so there isn't anything to diff between the current and updated versions; which is somewhat of a massive problem if you want to drive everything from CI ... I was thinking about a solution the other day; a quick workaround would be to ensure the instanceGroup spec, and perhaps some of the cluster spec, makes its way into the userdata, as that's the one thing that retains the previous state and is detectable by the 'update' stage .. @justinsb what do you reckon?

gambol99 added a commit to UKHomeOffice/kops that referenced this issue Aug 3, 2017
Some cluster changes such as component config modifications are not picked up when performing updates (nodes are not marked as NEEDUPDATE). This change introduces the ability to:

1. Include certain cluster specs within the node user data file (`enableClusterSpecInUserData: true`)
2. Encode the cluster spec string before placing it within the user data file (`enableClusterSpecInUserData: true`)

The above flags default to false, so they shouldn't cause any changes to existing clusters.
k8s-github-robot pushed a commit that referenced this issue Aug 11, 2017

Automatic merge from submit-queue

Add cluster spec to node user data so component config changes are detected

Related to #3076 

Some cluster changes such as component config modifications are not picked up when performing updates (nodes are not marked as `NEEDUPDATE`). This change introduces the ability to:
1. Include certain cluster specs within the node user data file ~(`enableClusterSpecInUserData: true`)~
2. ~Encode the cluster spec string before placing within the user data file (`enableClusterSpecInUserData: true`)~

~The above flags default to false so shouldn't cause any changes to existing clusters.~

Following feedback I've removed the optional API flags, so component config is included by default within the user data. This WILL cause all nodes to have a required update to their bootstrap scripts.
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 2, 2018
@KashifSaadat
Contributor

/assign
/close

This is now resolved in the latest kops releases (v1.8.0 and up).
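
For anyone verifying this on v1.8.0+, the flow should roughly be (a sketch, reusing the placeholder cluster name from the original report):

```sh
# Change a componentConfig value, e.g. a kubelet setting.
kops edit cluster p1.hidden.xxx

# Apply the change; the instance groups should now be reported as needing an update.
kops update cluster p1.hidden.xxx --yes
kops rolling-update cluster p1.hidden.xxx        # NEEDUPDATE column should be > 0

# Roll the nodes to pick up the new config.
kops rolling-update cluster p1.hidden.xxx --yes
```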
