update dependencies to latest version #258
Conversation
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: humblec The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/assign @yonatankahana
@humblec will this PR be merged anytime soon?
Force-pushed from b1c3fb1 to d7cff69
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
Signed-off-by: Humble Chirammal <hchiramm@redhat.com>
@yonatankahana can you please review this PR?
mostly looks good to me except the image tag change. I'll approve once amended. thanks
@humblec
Tested on Kubernetes 1.25.3 and it failed to start with the error:
F0411 00:34:32.671804 1 controller.go:878] Error creating lock: endpoints lock is removed, migrate to endpointsleases
It seems this is already fixed in sig-storage-lib-external-provisioner
(https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner/blob/cabcbb0869a16b267bd3c260c70d976495606526/controller/controller.go#L868) and released in v9.0.0; maybe we should even use v9.0.2 already.
Please test your commits before pushing them...
N.B.: we claim to support Kubernetes >= 1.9; we must verify whether that is still true after these changes before merging.
Signed-off-by: Humble Chirammal <humble.devassy@gmail.com>
@yonatankahana I have updated to v9.0.2 and also tested on a 1.26.3 cluster; the controller runs.
I don't see any point in supporting really old versions! These Kubernetes versions (v1.9 to v1.23) are not even supported by the community. Even if we consider some kind of extended support from a specific distro/vendor, a couple of versions could be added beyond n-3, but claiming support for v1.9 to v1.21 looks unwarranted!
@yonatankahana @jackielii can we get this in?
I agree, but we must understand what those changes mean in terms of backward compatibility; those are probably breaking changes...
Any updates on this?
@yonatankahana users are free to use the existing version of the provisioner if they want to stick with the old one, or if they face issues with the latest version of the provisioner on very old clusters. New clusters can definitely use the updated provisioner. Also, I don't think any users still exist on versions like Kube v1.9, or anything below v1.23 (i.e. four releases behind the latest). However, I have placed the version as
@@ -1,10 +1,10 @@
 apiVersion: v1
-appVersion: 4.0.2
+appVersion: 4.0.3
Only the chart version should be bumped; appVersion is the image tag (4.0.2).
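For context, a minimal sketch of how the two fields differ in a Helm Chart.yaml (the values below are hypothetical, not the ones proposed in this PR):

```yaml
# version is the chart's own release number: bump it whenever the chart changes.
version: 4.0.19
# appVersion tracks the application the chart deploys, conventionally the
# container image tag: bump it only when a new image is published.
appVersion: "4.0.2"
```

Helm uses version to identify the chart package itself; appVersion is informational and typically matches the image tag that the chart templates reference by default.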
It was deliberate, so that the image tag remains consistent for older deployments; the upgrade can then be a version and image update to actually land on the new versions. @yonatankahana
@humblec imo if it's compatible there is no reason to restrict installation on old clusters. I myself still maintain more than 20 clusters with the nfs-subdir-external-provisioner installed on Kubernetes v1.17, and I will be very happy to get those Trivy CVEs resolved. I agree that we don't need to actively support such old versions, but why change the version arbitrarily? Why not check it fully? Where are we rushing to? Nothing is blocked; such big changes require a deep look, and I can't just merge 1.3k-line patches. Look how many changes have been made since this PR was created. It takes time - we all volunteer in our free time 😃

Tbh, I will feel much safer merging #287 to solve the original issue, releasing a new version, and only then merging this PR.
 description: nfs-subdir-external-provisioner is an automatic provisioner that used your *already configured* NFS server, automatically creating Persistent Volumes.
 name: nfs-subdir-external-provisioner
 home: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner
-version: 4.0.18
+kubeVersion: ">=1.9.0-0"
+version: 4.0.20
Changing the version will cause a chart release; better to change the kubeVersion only when we actually release the chart with those restrictions. Currently it would be redundant, since the image and chart are the same.
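As a sketch of the restriction being discussed: kubeVersion in Chart.yaml makes Helm refuse to install or upgrade the chart on clusters outside the stated range (the constraint below is illustrative, not the value agreed in this thread):

```yaml
# Helm compares this SemVer constraint against the target cluster's
# version at install/upgrade time and aborts if it does not match.
kubeVersion: ">=1.24.0-0"
```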
A new chart release with a new image tag was the plan, as mentioned above, so that the versions remain different and users can upgrade when they want.
I am fine with merging #287 and releasing a new version. However, the justification for keeping this PR in the queue for such a long time does not look good to me :). Anyway, we will get this in on top of #287 and a new release. 👍
PR needs rebase. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Please rebase to resolve the conflict.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
/remove-lifecycle stale
This PR is a year old. Can someone open another PR that closes this one, so we can stop dragging this out?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /close
@k8s-triage-robot: Closed this PR. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Fix #206
Signed-off-by: Humble Chirammal hchiramm@redhat.com