Updating kubelet config doesn't mark nodes as NeedsUpdate #3076
Yeah ... I've noticed this as well. Pretty much none of the componentConfig settings appear to show that an update is required.
The problem is due to the fact that the 'edit' and 'replace' commands automatically update the bucket, so there isn't anything to diff between the current and updated versions; which is somewhat of a massive problem if you want to drive everything from CI ... I was thinking about a solution the other day; a quick workaround for this would be to ensure the instanceGroup spec, and perhaps some of the cluster spec, makes its way into the userdata, as it's the one thing that retains the previous state and is detectable by the 'update' stage .. @justinsb what do you reckon?
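To make the CI angle concrete, here is a minimal sketch of the kind of pipeline the comment has in mind, assuming the desired spec is versioned in a `cluster.yaml` file (the filename is hypothetical; the commands are standard kops CLI):

```sh
# Hypothetical CI step: push the versioned spec into the state store,
# then apply whatever diff kops detects.
kops replace -f cluster.yaml   # overwrites the spec in the S3 state store
kops update cluster --yes      # apply detected changes to cloud resources
kops rolling-update cluster    # dry run: on affected versions, componentConfig
                               # edits leave nodes "Ready" instead of "NeedsUpdate"
```

Because `replace` updates the bucket directly, there is no remaining diff for the later stages to act on, which is the failure mode described above.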
Automatic merge from submit-queue

Add cluster spec to node user data so component config changes are detected

Related to #3076. Some cluster changes such as component config modifications are not picked up when performing updates (nodes are not marked as `NEEDUPDATE`). This change introduces the ability to:

1. Include certain cluster specs within the node user data file ~(`enableClusterSpecInUserData: true`)~
2. ~Encode the cluster spec string before placing within the user data file (`enableClusterSpecInUserData: true`)~

~The above flags default to false so shouldn't cause any changes to existing clusters.~

Following feedback I've removed the optional API flags, so component config is included by default within the user data. This WILL cause all nodes to have a required update to their bootstrap scripts.
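Assuming the merged behavior described above, a rough sketch of how a component config change should now surface (standard kops CLI; the `NeedsUpdate` status is paraphrased from this thread):

```sh
# With the cluster spec embedded in the node user data, a component-config
# edit changes the launch configuration, which kops can diff.
kops edit cluster                  # e.g. tweak spec.kubelet settings
kops update cluster --yes          # user data now differs, so instance groups are updated
kops rolling-update cluster        # dry run: nodes should report NeedsUpdate
kops rolling-update cluster --yes  # roll nodes so kubelet picks up the new config
```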
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/assign

This is now resolved in the latest kops releases (v1.8.0 and up).
I have edited the cluster configuration with `kops edit cluster` - I've added imageGC settings for the kubelet there (something like the sketch below). Afterwards, the settings were updated in the config files in the S3 bucket. However, the settings aren't picked up by the already running kubelet, and nodes are not marked as `NeedsUpdate`. I had to manually change the IG settings of all instance groups to "force" mark the nodes as `NeedsUpdate`, which allowed me to run `rolling-update`.
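For reference, a minimal sketch of the kind of edit described above, assuming the kubelet imageGC fields exposed by the kops cluster spec (`imageGCHighThresholdPercent` / `imageGCLowThresholdPercent`); the reporter's exact settings aren't shown in the issue, so the values here are illustrative:

```yaml
# Cluster spec fragment as edited via `kops edit cluster`.
# Field names are from the kops kubelet config; values are illustrative.
spec:
  kubelet:
    imageGCHighThresholdPercent: 85  # start image GC when disk usage exceeds 85%
    imageGCLowThresholdPercent: 80   # GC images until disk usage drops to 80%
```

On the affected versions, saving an edit like this updates the state store but leaves nodes reporting `Ready` rather than `NeedsUpdate`, hence the manual workaround described above.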