
Remove volume.beta.kubernetes.io/storage-provisioner annotation #114804

Conversation

@mengjiao-liu (Member) commented Jan 4, 2023

What type of PR is this?

/kind cleanup

What this PR does / why we need it:

The volume.beta.kubernetes.io/storage-provisioner annotation has been deprecated since v1.23 in favor of volume.kubernetes.io/storage-provisioner.

Now we can remove the beta annotation because the deprecation period has ended (more than a year and three minor releases).
See https://kubernetes.io/docs/reference/using-api/deprecation-policy/#deprecating-parts-of-the-api for more details on the deprecation policy.
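The intended fallback behavior during such a transition can be sketched as follows. This is an illustrative helper, not the actual controller code; only the two annotation keys are taken from the Kubernetes source.

```go
package main

import "fmt"

// Annotation keys as defined in the Kubernetes source.
const (
	annStorageProvisioner     = "volume.kubernetes.io/storage-provisioner"
	annBetaStorageProvisioner = "volume.beta.kubernetes.io/storage-provisioner"
)

// provisionerFor returns the provisioner recorded in a PVC's annotations,
// preferring the GA key and falling back to the deprecated beta key.
// (Hypothetical helper for illustration.)
func provisionerFor(annotations map[string]string) (string, bool) {
	if v, ok := annotations[annStorageProvisioner]; ok {
		return v, true
	}
	if v, ok := annotations[annBetaStorageProvisioner]; ok {
		return v, true
	}
	return "", false
}

func main() {
	// A PVC that still carries only the deprecated beta annotation.
	anns := map[string]string{annBetaStorageProvisioner: "rancher.io/local-path"}
	p, _ := provisionerFor(anns)
	fmt.Println(p) // the beta key is still honored during the deprecation period
}
```

Removing the beta annotation, as this PR does, amounts to deleting the second lookup above, so any PVC carrying only the beta key stops being recognized.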

Which issue(s) this PR fixes:

Ref:

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Action required: Remove the `volume.beta.kubernetes.io/storage-provisioner` annotation, deprecated since v1.23, and use `volume.kubernetes.io/storage-provisioner` instead.

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


/sig storage

/cc @jsafrane @msau42 @Jiawei0227

@k8s-ci-robot k8s-ci-robot added the release-note-action-required Denotes a PR that introduces potentially breaking changes that require user action. label Jan 4, 2023
@k8s-ci-robot k8s-ci-robot added kind/cleanup Categorizes issue or PR as related to cleaning up code, process, or technical debt. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. sig/storage Categorizes an issue or PR as relevant to SIG Storage. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 4, 2023
@k8s-ci-robot (Contributor)

@mengjiao-liu: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. sig/apps Categorizes an issue or PR as relevant to SIG Apps. labels Jan 4, 2023
@mengjiao-liu (Member Author)

/retest

@k8s-ci-robot (Contributor)

@mengjiao-liu: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name Commit Details Required Rerun command
pull-kubernetes-e2e-gce 1af1584 link true /test pull-kubernetes-e2e-gce
pull-kubernetes-e2e-kind-ipv6 1af1584 link true /test pull-kubernetes-e2e-kind-ipv6
pull-kubernetes-e2e-kind 1af1584 link true /test pull-kubernetes-e2e-kind
pull-kubernetes-e2e-gce-100-performance 1af1584 link true /test pull-kubernetes-e2e-gce-100-performance
pull-kubernetes-e2e-gce-storage-slow 1af1584 link false /test pull-kubernetes-e2e-gce-storage-slow

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@jsafrane (Member) commented Jan 4, 2023

It looks like the e2e test failures are genuine.

  • Tests with [Driver: nfs]: the external provisioner used in the tests needs to be updated to support volume.kubernetes.io/storage-provisioner.
  • StatefulSet Basic StatefulSet functionality tests use the external provisioner rancher.io/local-path, which needs to be updated too. Or a different provisioner must be used.

@msau42 (Member) commented Jan 4, 2023

Can we remove support for beta annotations? The deprecation policy says that beta annotations are part of the API, and a stable API cannot be removed: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

@mengjiao-liu (Member Author) commented Jan 5, 2023

  • Tests with [Driver: nfs]: the external provisioner used in the tests needs to be updated to support volume.kubernetes.io/storage-provisioner.

[Driver: nfs]: the external provisioner uses the volume.beta.kubernetes.io/storage-provisioner annotation via the library kubernetes-sigs/sig-storage-lib-external-provisioner, which merged the PR updating to the volume.kubernetes.io/storage-provisioner annotation in version v8.0.0.

But the [Driver: nfs] external provisioner currently uses nfs-subdir-external-provisioner v6.0.0, so the https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner repo should update its kubernetes-sigs/sig-storage-lib-external-provisioner dependency to v8.0.0.

PR kubernetes-sigs/nfs-subdir-external-provisioner#206 aims to update the dependencies to the latest versions but hasn't been finished yet.

StatefulSet Basic StatefulSet functionality tests use the external provisioner rancher.io/local-path, which needs to be updated too. Or a different provisioner must be used.

The external provisioner rancher.io/local-path has the same problem for the same reason: it uses sigs.k8s.io/sig-storage-lib-external-provisioner v4.0.2-0.20200115000635-36885abbb2bd+incompatible rather than v8.0.0.

@mengjiao-liu (Member Author) commented Jan 5, 2023

Can we remove support for beta annotations? The deprecation policy says that beta annotations are part of the API, and a stable API cannot be removed: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

In my understanding, the beta annotation can be removed, per https://kubernetes.io/docs/reference/using-api/deprecation-policy/:

The following rules govern the deprecation of elements of the API. This includes:
- Annotations on REST resources, including "beta" annotations but not including "alpha" annotations.

#102357 (comment)

Kubernetes will need to set both beta and GA annotations for quite some time, beta annotations have the same deprecation policy as regular fields 

@msau42 (Member) commented Jan 5, 2023

The way I read the deprecation policy is that the only way to "remove" the annotation is to introduce a new API version with it removed:

Rule #1: API elements may only be removed by incrementing the version of the API group.

So since PVC and PV are in core/v1, we could only remove the annotation in a core/v2, or consider moving it to a storage/v2.

@jsafrane wdyt?

@mengjiao-liu (Member Author) commented Jan 6, 2023

Rule #1: API elements may only be removed by incrementing the version of the API group.

Once an API element has been added to an API group at a particular version, it can not be removed from that version or have its behavior significantly changed, regardless of track.
Rule #4a: API lifetime is determined by the API stability level

Beta API versions are deprecated no more than 9 months or 3 minor releases after introduction (whichever is longer), and are no longer served 9 months or 3 minor releases after deprecation (whichever is longer)

Indeed, it feels like rule #1 and rule #4a are a little confusing together.

@jsafrane (Member) commented Jan 6, 2023

A beta API version is something like apiVersion: v2beta1. Annotations are like fields in the v1 API. But we already dropped support for fields in the v1 API, e.g. pv.spec.flocker. Can we remove the beta annotation the same way?

Since even our e2e tests use the beta annotation, IMO the world is not yet ready for that. If we decide we can remove the annotation, can we throw a warning first (+ alert?), wait for a couple of releases, and then remove it?

@msau42 (Member) commented Jan 6, 2023

I think flocker is an extreme case because the entire project was discontinued.

I think we could definitely have warnings. I am still not sure we can remove it according to the policy.
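The warning discussed above might look something like this. A hypothetical sketch, not the actual Kubernetes implementation; only the annotation keys come from the source.

```go
package main

import (
	"fmt"
	"os"
)

// Annotation keys as defined in the Kubernetes source.
const (
	annStorageProvisioner     = "volume.kubernetes.io/storage-provisioner"
	annBetaStorageProvisioner = "volume.beta.kubernetes.io/storage-provisioner"
)

// warnIfBetaOnly logs a deprecation warning when a PVC carries only the
// deprecated beta annotation, and reports whether a warning was emitted.
// (Illustrative function; the real controller would use its own logging.)
func warnIfBetaOnly(pvcName string, anns map[string]string) bool {
	_, hasGA := anns[annStorageProvisioner]
	_, hasBeta := anns[annBetaStorageProvisioner]
	if hasBeta && !hasGA {
		fmt.Fprintf(os.Stderr,
			"Warning: PVC %q uses deprecated annotation %s; use %s instead\n",
			pvcName, annBetaStorageProvisioner, annStorageProvisioner)
		return true
	}
	return false
}

func main() {
	warnIfBetaOnly("data-0", map[string]string{annBetaStorageProvisioner: "rancher.io/local-path"})
}
```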

@mengjiao-liu (Member Author)

Since even our e2e tests use the beta annotation, IMO the world is not yet ready for that. If we decide we can remove the annotation, can we throw a warning first (+ alert?), wait for couple of releases and then remove it?

Yes, I agree with throwing a warning/alert first.

Beta API versions is like apiVersion: v2beta1. Annotations are like fields in v1 API. But we dropped support for fields in v1 API already, e.g. pv.spec.flocker. Can we remove the beta annotation this way?

@liggitt Could you please help confirm this problem? We're a little unsure when we can remove this annotation.

@liggitt (Member) commented Jan 9, 2023

The way I read the deprecation policy is that the only way to "remove" the annotation is to introduce a new API version with it removed:

Rule #1: API elements may only be removed by incrementing the version of the API group.

That rule refers to removing API fields, dropping existing data. In this case, existing data in annotations is still preserved, even if controllers stop actuating on that data. This PR isn't removing the ability to set, persist, and retrieve the annotation.

What happens if someone has an existing object with only the beta annotation and doesn't update it? Will the PVC sit, unprovisioned, until the GA annotation is added?

This seems in-bounds to drop from controller logic, but we should make sure we clean up as much usage as possible and communicate the steps clearly.

I would ask questions like:

  1. is the annotation still being relied on by kubernetes provisioners?
  2. is the annotation still being set by kubernetes projects?
  3. is the annotation still being relied on by non-kubernetes provisioners?
  4. is the annotation still being set by non-kubernetes projects?
  5. what actions is someone with existing objects with the beta annotation required to take to non-disruptively update to the GA version? is it possible (are all objects that require updating mutable)? are those steps documented and are people in that situation pointed to those docs anywhere?

Answering those, cleaning up our usage to ensure we consistently honor the GA version with higher precedence than the beta annotation, and only set the GA version, opening issues for out-of-project uses we find, seem like good first steps.

@mengjiao-liu (Member Author) commented Jan 31, 2023

is the annotation still being relied on by kubernetes provisioners?

No, but the kubernetes-sigs provisioner nfs-subdir-external-provisioner uses it. That provisioner has not updated its kubernetes-sigs/sig-storage-lib-external-provisioner dependency to v8.0.0+.

PR kubernetes-sigs/nfs-subdir-external-provisioner#258 is working on updating it.

is the annotation still being set by kubernetes projects?

No, but the kubernetes-sigs provisioner nfs-subdir-external-provisioner uses it. That provisioner has not updated its kubernetes-sigs/sig-storage-lib-external-provisioner dependency to v8.0.0+.

is the annotation still being relied on by non-kubernetes provisioners?

From the e2e tests, the rancher.io/local-path provisioner depends on it, because that provisioner has not updated its kubernetes-sigs/sig-storage-lib-external-provisioner dependency to v8.0.0+.

what actions is someone with existing objects with the beta annotation required to take to non-disruptively update to the GA version? is it possible (are all objects that require updating mutable)? are those steps documented and are people in that situation pointed to those docs anywhere?

I looked at this part of the Kubernetes code. This annotation is added to the PVC automatically, not manually. Therefore, if the provisioner uses the correct dependency library, it can be smoothly upgraded to the GA version. The dependency library https://github.com/kubernetes-sigs/sig-storage-lib-external-provisioner now supports both the beta and GA annotations (volume.beta.kubernetes.io/storage-provisioner and volume.kubernetes.io/storage-provisioner).
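The transition behavior described above can be sketched as a component that records the provisioner under both keys, so readers of either the beta or the GA annotation keep working during the deprecation window. The helper name is hypothetical; the annotation keys match the Kubernetes source.

```go
package main

import "fmt"

// Annotation keys as defined in the Kubernetes source.
const (
	annStorageProvisioner     = "volume.kubernetes.io/storage-provisioner"
	annBetaStorageProvisioner = "volume.beta.kubernetes.io/storage-provisioner"
)

// setProvisionerAnnotations records the provisioner under both the GA key
// and the deprecated beta key on a PVC's annotation map, allocating the map
// if needed. (Illustrative sketch of the dual-write transition strategy.)
func setProvisionerAnnotations(anns map[string]string, provisioner string) map[string]string {
	if anns == nil {
		anns = map[string]string{}
	}
	anns[annStorageProvisioner] = provisioner
	anns[annBetaStorageProvisioner] = provisioner
	return anns
}

func main() {
	anns := setProvisionerAnnotations(nil, "example.com/nfs-provisioner") // hypothetical provisioner name
	fmt.Println(len(anns))                                                // both keys are set
}
```

Once all consumers read the GA key, the beta write (and eventually the beta read) can be dropped, which is the cleanup this PR attempts.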

@divyenpatel @jsafrane help confirm.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Mar 15, 2023
@k8s-ci-robot (Contributor)

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle stale
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Mark this PR as fresh with /remove-lifecycle rotten
  • Close this PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 13, 2023

@mengjiao-liu (Member Author)

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 13, 2023
@dims dims added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 24, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@mengjiao-liu (Member Author)

Need to wait for the PR updating the third-party repo.
/reopen

@k8s-ci-robot k8s-ci-robot reopened this Feb 4, 2024
@k8s-ci-robot (Contributor)

@mengjiao-liu: Reopened this PR.

In response to this:

Need to wait for the PR updating the third-party repo.
/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: mengjiao-liu
Once this PR has been reviewed and has the lgtm label, please assign xing-yang for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closed this PR.

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:

  • Reopen this PR with /reopen
  • Mark this PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
