Upgrading from 1.15.3, 1.16.2 or v1.17.0-beta.2 to v1.18.0-alpha.3 doesn't downgrade etcd-manager from 3.0.20200429 to 3.0.20200307 #9159
@nullzone as explained in the advisory, 1.18.0-alpha.3 has the same etcd-manager version as 1.17.0-beta.2, so no downgrade will happen. The overloading fix should also be present in the 3.0.20200429 version. Isn't it working?
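To illustrate why no downgrade occurs: etcd-manager tags embed a build date (`3.0.YYYYMMDD`), so a simple field-by-field numeric comparison (a sketch for illustration, not actual kops code) shows that 3.0.20200429 is the newer of the two builds and is already in place:

```python
# Illustrative sketch, NOT kops source code: etcd-manager tags such as
# "3.0.20200429" end in a YYYYMMDD date, so comparing the dotted fields
# numerically tells which build is newer.
def newer_tag(tag_a: str, tag_b: str) -> str:
    parse = lambda tag: tuple(int(part) for part in tag.split("."))
    return tag_a if parse(tag_a) >= parse(tag_b) else tag_b

print(newer_tag("3.0.20200429", "3.0.20200307"))  # prints 3.0.20200429
```

Since 1.17.0-beta.2 and 1.18.0-alpha.3 both ship the newer tag, the upgrade is a no-op for etcd-manager.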
I can see the ENV variable ETCD_LOG_LEVEL defined and available in the etcd-manager containers. On the other hand, the amount of log entries generated by the etcd-managers remains exactly the same.
I am assuming that the example posted by Peter (#7859 (comment)) was right and that the ENV variable should override any other value passed as a parameter when running the containers.
OK, it seems the issue is related to the etcd-manager version being used in my cluster.
No worries, glad you found the reason. To help:

```yaml
spec:
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-a
      name: a
    memoryRequest: 100Mi
    name: main
    version: 3.4.3
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-a
      name: a
    memoryRequest: 100Mi
    name: events
    version: 3.4.3
```
Wonderful. Thank you!
1. What kops version are you running? The command `kops version` will display this information.
Version 1.18.0-alpha.3 (git-27aab12b2)
2. What Kubernetes version are you running? `kubectl version` will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
Client Version: v1.17.5
Server Version: v1.17.5
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
a) Upgraded the kops client from v1.17.0-beta.2 to v1.18.0-alpha.3
b) Edited the cluster configuration with 'kops edit cluster' to add the following new values, which were correctly accepted and added to the manifest:
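The issue text does not include the exact values that were added. A plausible sketch, assuming the `manager.env` field introduced by #8692 and the ETCD_LOG_LEVEL variable discussed above (the value `warn` here is a hypothetical choice, not taken from the issue):

```yaml
# Hypothetical reconstruction -- the exact values are not in the issue.
etcdClusters:
- name: main
  manager:
    env:
    - name: ETCD_LOG_LEVEL
      value: warn
```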
5. What happened after the commands executed?
Everything apparently went well after running 'kops upgrade cluster', but the etcd-manager wasn't downgraded from 3.0.20200429 to 3.0.20200307.
6. What did you expect to happen?
After a successful upgrade to kops v1.17.0-beta.2 some days ago, the etcd-manager was upgraded to version 3.0.20200429.
Now, upgrading kops to version 1.18.0-alpha.3 should have downgraded the etcd-manager to 3.0.20200307, as that is the version ready to start overloading ENV vars in the etcd-manager manifest (#8692).
On the other hand, I understand that the current release contains a critical update to etcd-manager (1 year after creation...), so a new etcd-manager release with both patches applied (#8692 + #9016) seems to be required.
7. Please provide your cluster manifest.
It is not necessary.
8. Please run the commands with the most verbose logging by adding the `-v 10` flag.
It is not necessary.
9. Anything else we need to know?
Following this upgrade path seems to require a new etcd-manager release with a tag higher than 3.0.20200429, to be used in the upcoming kops 1.18 release; otherwise, any kops upgrade coming from versions 1.15.3, 1.16.2 or v1.17.0-beta.2 will keep accepting unused values in the manifest.
Am I right?
It is important to mention that the current changelog for the kops release 1.18.0-alpha.3 has two different entries related to updating the etcd-manager container image:
Update etcd-manager to 3.0.20200307 @justinsb #8692
Update to etcd-manager 3.0.20200429 @justinsb #9016