[Bug] Saving unchanged cluster leads to update status and cluster changes #6881
Comments
Potentially a blocker, adding label to discuss.
Just reproduced it. Also noticed the same thing happens if I save the cluster from the edit config form without switching to YAML.
@rmweir When I go to edit a cluster, click Edit as YAML, and click Show Diff, I see these changes to the spec [diff screenshot not included]. And this is the diff for the initial save of a single-node cluster: [diff screenshot not included]. Should the UI wait to apply that change until after the user has changed something? Or is the problem that these values were not appropriately applied when the cluster was first provisioned? After saving the cluster that way, saving it again doesn't add more values or put the cluster into an updating state, because the values are only added if they don't already exist.
Gathered a little more information:
if ( !entry.pool.hostnamePrefix ) {
entry.pool.hostnamePrefix = `${ prefix }-`;
}
data() {
if ( isEmpty(this.value?.spec?.localClusterAuthEndpoint) ) {
set(this.value, 'spec.localClusterAuthEndpoint', {
enabled: false,
caCerts: '',
fqdn: '',
});
}
const DEFAULTS = {
deleteEmptyDirData: false, // Show; Kill pods using emptyDir volumes and lose the data
disableEviction: false, // Hide; false = evict pods, true = delete pods
enabled: false, // Show; true = Nodes must be drained before upgrade; false = YOLO
force: false, // Show; true = Delete standalone pods, false = fail if there are any
gracePeriod: -1, // Show; Pod shut down time, negative value uses pod default
ignoreDaemonSets: true, // Hide; true = work, false = never work because there's always daemonSets
ignoreErrors: false, // Hide; profit?
skipWaitForDeleteTimeoutSeconds: 0, // Hide; If the pod deletion time is older than this > 0, don't wait, for some reason
timeout: 120, // Show; Give up after this many seconds
};
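For illustration, a minimal sketch of the pattern those snippets follow (a hypothetical helper, not the actual dashboard code): each default is written into the spec only when the corresponding key is missing, which is why the first save of an older cluster adds fields and triggers an update, while subsequent saves do not.

// Hypothetical helper illustrating the "only set when absent" pattern above.
function applyDefaultsIfMissing(target, defaults) {
  for (const [key, value] of Object.entries(defaults)) {
    if (target[key] === undefined) {
      // Mutates the spec even though the user changed nothing,
      // so the saved object differs from what the backend stored.
      target[key] = value;
    }
  }

  return target;
}

// hypothetical usage: applyDefaultsIfMissing(spec.upgradeStrategy?.workerDrainOptions ?? {}, DEFAULTS);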
Taking.
@catherineluse after waiting a while the fields get overwritten, and saving does the same thing again. I'm not sure; we would have to compare against what the behavior was before this started happening. I think the UI should not change anything about the cluster; if there is an empty/nil value, it should be left as is.
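A minimal sketch of the behaviour being asked for here, assuming the component exposes an isCreate flag (an assumption, not something confirmed in the thread): seed the defaults only while provisioning a new cluster, and leave absent values untouched on edit.

// lodash stand-ins for the isEmpty/set utilities used in the original snippet above
import { isEmpty, set } from 'lodash';

export default {
  data() {
    // Assumed isCreate flag: seed defaults only when creating a new cluster.
    if ( this.isCreate && isEmpty(this.value?.spec?.localClusterAuthEndpoint) ) {
      set(this.value, 'spec.localClusterAuthEndpoint', {
        enabled: false,
        caCerts: '',
        fqdn:    '',
      });
    }

    // On edit, empty/nil values from the backend are left exactly as they are.
    return {};
  },
};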
@rmweir What's the Rancher version where you remember it working properly?
@catherineluse I don't know which Rancher version worked properly. I just know it's present in 2.6.6 and later.
@catherineluse indicated that the backend team couldn't reproduce this consistently, though they do see some issue. They've pushed their corresponding issue to 2.7.1, so we'll do the same.
Sure, I can look, @thaneunsoo; did you see this on RKE1, RKE2, and K3s?
@mantis-toboggan-md yes, for RKE1, RKE2, and K3s.
Interesting: RKE1 provisioning is done through the old UI, so either the same problem exists in both UIs, or there is a separate backend problem with editing without changing anything. I'm not seeing the behaviour you are on my own setup, but it is older; I'm making a fresh one and looking into this further now.
@thaneunsoo I was able to reproduce this behavior with RKE2 and K3s; a fix is now merged: #7218. What I saw specifically was that the RKE2 or K3s cluster would go into an updating state after being saved from a different view than it had previously been saved in. So: provision a cluster without hitting 'view as YAML', go to 'edit config', then click 'view as YAML' and save: the cluster updates despite nothing being changed. Go to 'edit config' again, click 'view as YAML', save: the cluster does not change state. Go to 'edit config', save without clicking 'view as YAML': the cluster goes into updating again, and so on. I'm having a harder time reproducing this consistently with RKE1. Could we file a separate issue for that, @thaneunsoo?
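For illustration only, not the actual change merged in #7218: one way to avoid the spurious update is to compare the edited spec against the copy loaded from the server and skip the PUT when they are deeply equal, so the save is a no-op unless the user actually changed something.

import { isEqual } from 'lodash';

// Hypothetical guard: only send the update if the spec really differs.
// Note: for this check to help, the UI must apply its injected defaults to
// both copies (or not inject them on edit at all), otherwise the defaults
// alone make the two specs unequal.
function shouldSave(serverSpec, editedSpec) {
  return !isEqual(serverSpec, editedSpec);
}

// hypothetical usage before saving:
// if ( shouldSave(originalCluster.spec, editedCluster.spec) ) { await editedCluster.save(); }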
Test Environment:
Rancher version: v2.7-head 9f1e043
Downstream cluster type: RKE2/K3s
Testing:
Results
@gaktive @nwmac, we will need this fixed in 2.6.10 as well. I've tried to create a backport, but it looks like the backport/forwardport bot is not available in this repo.
Setup
2.6.6 and 2.6.7 backend pointing at latest UI for dev.
Describe the bug
Saving unchanged cluster leads to update status and cluster changes.
To Reproduce
Edit an existing cluster and save it without changing anything.
Result
Fields are added and the cluster goes into updating.
Expected Result
The cluster should remain unchanged and should not go into updating.
Additional context