
Portworx file in-tree to CSI driver migration #2589

Open
2 of 8 tasks
trierra opened this issue Mar 31, 2021 · 50 comments
Labels
sig/storage Categorizes an issue or PR as relevant to SIG Storage. stage/beta Denotes an issue tracking an enhancement targeted for Beta status tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team
Milestone

Comments

@trierra
Contributor

trierra commented Mar 31, 2021

Enhancement Description

Parent enhancement: #625

Please keep this description up to date. This will help the Enhancement Team to track the evolution of the enhancement efficiently.

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Mar 31, 2021
@trierra
Contributor Author

trierra commented Mar 31, 2021

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 31, 2021
@fejta-bot

fejta-bot commented Jun 29, 2021

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2021
@trierra
Contributor Author

trierra commented Jun 29, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 29, 2021
@SergeyKanzhelev
Member

SergeyKanzhelev commented Jul 8, 2021

Was this enhancement approved for 1.22?

@trierra
Contributor Author

trierra commented Jul 8, 2021

@SergeyKanzhelev
Member

SergeyKanzhelev commented Jul 8, 2021

I cannot find the KEP here: https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage and this issue is not in the milestone. I'm confused.

@trierra
Contributor Author

trierra commented Jul 8, 2021

@trierra
Contributor Author

trierra commented Jul 8, 2021

Oh, I see what you mean. Apparently it's not approved.

@SergeyKanzhelev
Member

SergeyKanzhelev commented Jul 9, 2021

I was told KEP is not needed,

It may not be. The change seems quite straightforward and is likely already covered by another KEP. Either way, the code freeze for 1.22 is in 40 minutes, and there is still some work left on the PR.

@salaxander

salaxander commented Aug 31, 2021

/milestone v1.23

@k8s-ci-robot k8s-ci-robot added this to the v1.23 milestone Aug 31, 2021
@salaxander salaxander added stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Aug 31, 2021
@lauralorenz
Contributor

lauralorenz commented Sep 7, 2021

Hi @trierra! 1.23 Enhancements team here. Just checking in as we approach enhancements freeze at 11:59pm PST on Thursday 09/09.

It looks like the proposal was written using the old design proposal format, but as of release 1.14, enhancements must use the KEP format, which among other things includes a metadata file used by release automation. In particular, as of 1.19, an approval process called PRR (Production Readiness Review) is also required and must be requested for each KEP separately.

This is all to say that though I've included the formal checklist used by enhancements team below, I think in reality you will need to migrate this into the KEP format and request a PRR reviewer for us to accept it into a milestone.

I also notice from the OP and prior comments that there is a parent enhancement which is in KEP format. You may wish to close this enhancements issue in favor of that one, *IF* that KEP is the way that your SIG is targeting milestones and communicating with enhancements team for this feature. (I didn't read the proposals in detail so apologies if that is wildly incorrect.)

Thanks!!

--

Here's where this enhancement currently stands:

  • Updated KEP file using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable
  • KEP has a test plan section filled out.
  • KEP has up to date graduation criteria.
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

@msau42
Member

msau42 commented Sep 8, 2021

@lauralorenz This is the KEP that we're using to track all csi migration implementations: https://github.com/kubernetes/enhancements/tree/master/keps/sig-storage/625-csi-migration

The design and test plan are the same for each cloud provider; the only difference is that each cloud provider has its own feature gate, its own implementation, and its own release timeline.
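As a concrete illustration of the per-provider gates: enabling the Portworx migration means turning on both the umbrella gate and the driver-specific gate on the relevant components (kube-controller-manager and kubelet). The gate name below matches the one linked later in this thread; treat the exact flag placement as a sketch, not authoritative guidance:

```
--feature-gates=CSIMigration=true,CSIMigrationPortworx=true
```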

@Jiawei0227
Contributor

Jiawei0227 commented Sep 8, 2021

A dedicated KEP for this is out for review: #2964

@lauralorenz
Contributor

lauralorenz commented Sep 9, 2021

Thanks everyone! Between comments here, comments on the new KEP PR, and the discussion in Slack in #sig-release, I'm confirming from my side that the new PR #2964 meets the criteria for enhancements freeze (it uses the latest KEP template, is set as implementable, inherits its parent's filled-out test plan section, inherits its parent's up-to-date graduation criteria, and has a PRR review file artifact matching this enhancement issue number). As long as it is merged by the deadline tonight at 11:59pm PST, we are good to go!

@salaxander

salaxander commented Sep 10, 2021

Hi, 1.23 Enhancements Lead here 👋. With enhancements freeze now in effect, this enhancement has not met the criteria for the freeze and has been removed from the milestone.

As a reminder, the criteria for enhancements freeze is:

  • KEP is merged into k/enhancements repo with up to date latest milestone and stage.
  • KEP status is marked as implementable.
  • KEP has a test plan section filled out.
  • KEP has up to date graduation criteria.
  • KEP has a production readiness review for the correct stage that has been completed and merged into k/enhancements.

Feel free to file an exception to add this back to the release. If you plan to do so, please file this as early as possible.

Thanks!
/milestone clear

@k8s-ci-robot k8s-ci-robot removed this from the v1.23 milestone Sep 10, 2021
@salaxander salaxander added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Sep 10, 2021
@xing-yang
Contributor

xing-yang commented Sep 15, 2021

Thanks @salaxander! We'll be filing an exception soon.
CC @Jiawei0227 @msau42

@ramrodo
Member

ramrodo commented Sep 22, 2021

Hi @trierra 👋 1.23 Docs shadow here.

This enhancement is marked as 'Needs Docs' for the 1.23 release.

Please follow the steps detailed in the documentation to open a PR against the dev-1.23 branch in the k/website repo. This PR can be just a placeholder at this time and must be created before Thu November 18, 11:59 PM PDT.

Also, if needed take a look at Documenting for a release to familiarize yourself with the docs requirement for the release.

Thanks!

@trierra
Contributor Author

trierra commented Apr 1, 2022

  1. Provisioned a PVC with the in-tree driver.

  2. Installed an app and ran some workload.

  3. Deleted the app; the PVC remained provisioned.

  4. Enabled migration.

  5. Created a new PVC using the old StorageClass and checked that the CSI driver picked up the volume provisioning.

  6. Installed the app again using the same PVC and checked that it works with no issues.

oksana@dev-onaumov ~/g/s/g/t/deployment (master)> k get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
px-sv4test-pvc     Bound    pvc-cbc69f40-bab5-46c7-b4c4-a9bd0a536bce   1Gi        RWX            px-sv4test-sc   77m
px-sv4test-pvc-1   Bound    pvc-6911f24f-94b2-447b-b8e9-61f56cb59167   1Gi        RWX            px-sv4test-sc   92s
oksana@dev-onaumov ~/g/s/g/t/deployment (master)> k get storageclass -oyaml
apiVersion: v1
items:
- allowVolumeExpansion: true
  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"allowVolumeExpansion":true,"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"px-sv4test-sc"},"parameters":{"repl":"3","sharedv4":"true","sharedv4_svc_type":"ClusterIP"},"provisioner":"kubernetes.io/portworx-volume"}
    creationTimestamp: "2022-03-30T21:01:48Z"
    name: px-sv4test-sc
    resourceVersion: "17339"
    uid: 84de1962-a8e4-4e3a-93b5-f337119d470e
  parameters:
    repl: "3"
    sharedv4: "true"
    sharedv4_svc_type: ClusterIP
  provisioner: kubernetes.io/portworx-volume
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
oksana@dev-onaumov ~/g/s/g/t/deployment (master)>

CSI provisioner logs

I0330 20:56:01.411030       1 controller.go:859] Started provisioner controller pxd.portworx.com_px-csi-ext-67c98559b9-5b72p_95831aab-e96b-4eb7-932b-993dfd30dcba!
I0330 22:18:02.489797       1 controller.go:1279] provision "default/px-sv4test-pvc-1" class "px-sv4test-sc": started
I0330 22:18:02.495544       1 controller.go:520] translating storage class for in-tree plugin kubernetes.io/portworx-volume to CSI
I0330 22:18:02.504926       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-1", UID:"6911f24f-94b2-447b-b8e9-61f56cb59167", APIVersion:"v1", ResourceVersion:"27283", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/px-sv4test-pvc-1"

I0330 22:18:04.352921       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-1", UID:"6911f24f-94b2-447b-b8e9-61f56cb59167", APIVersion:"v1", ResourceVersion:"27283", FieldPath:""}): type: 'Warning' reason: 'ProvisioningFailed' failed to provision volume with StorageClass "px-sv4test-sc": rpc error: code = Internal desc = Failed to create volume: portworx volume with same name: pvc-6911f24f-94b2-447b-b8e9-61f56cb59167 already exists. The volume has different properties
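The "translating storage class" line in the provisioner log above is csi-translation-lib rewriting the in-tree provisioner name to the matching CSI driver before provisioning. A toy Python sketch of that routing decision (illustrative names only, not the actual controller or library code; only the Portworx mapping is shown):

```python
# Toy model of the CSI-migration routing decision (illustrative only).
# The provisioner-to-driver mapping mirrors what csi-translation-lib
# does for Portworx; real code covers every migratable in-tree plugin.
CSI_DRIVER_FOR = {"kubernetes.io/portworx-volume": "pxd.portworx.com"}

def responsible_driver(provisioner: str, migration_enabled: bool) -> str:
    """Return which driver should handle a claim for this provisioner."""
    if migration_enabled and provisioner in CSI_DRIVER_FOR:
        return CSI_DRIVER_FOR[provisioner]  # routed to the CSI driver
    return provisioner                      # still handled by the in-tree plugin

print(responsible_driver("kubernetes.io/portworx-volume", True))   # pxd.portworx.com
print(responsible_driver("kubernetes.io/portworx-volume", False))  # kubernetes.io/portworx-volume
```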

@trierra
Contributor Author

trierra commented Apr 1, 2022

PVC resize testing:

oksana@dev-onaumov ~/g/s/g/t/deployment (master)> k get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
px-sv4test-pvc     Bound    pvc-cbc69f40-bab5-46c7-b4c4-a9bd0a536bce   1Gi        RWX            px-sv4test-sc   2d1h
px-sv4test-pvc-2   Bound    pvc-851a7135-9c98-4ff5-b925-d29adb2e3a1b   3Gi        RWX            px-sv4test-sc   2m53s
oksana@dev-onaumov ~/g/s/g/t/deployment (master)> k edit pvc px-sv4test-pvc-2
persistentvolumeclaim/px-sv4test-pvc-2 edited
oksana@dev-onaumov ~/g/s/g/t/deployment (master)> k get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS    AGE
px-sv4test-pvc     Bound    pvc-cbc69f40-bab5-46c7-b4c4-a9bd0a536bce   1Gi        RWX            px-sv4test-sc   2d1h
px-sv4test-pvc-2   Bound    pvc-851a7135-9c98-4ff5-b925-d29adb2e3a1b   5Gi        RWX            px-sv4test-sc   3m23s

External resizer logs:

oksana@dev-onaumov ~/g/s/g/t/deployment (master) [SIGINT]> klog --tail=5 px-csi-ext-74b4c8ccc4-rl4wc csi-resizer -n kube-system
I0401 22:02:45.475463       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-2", UID:"851a7135-9c98-4ff5-b925-d29adb2e3a1b", APIVersion:"v1", ResourceVersion:"395626", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-851a7135-9c98-4ff5-b925-d29adb2e3a1b
I0401 22:02:45.615308       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-2", UID:"851a7135-9c98-4ff5-b925-d29adb2e3a1b", APIVersion:"v1", ResourceVersion:"395626", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
I0401 22:04:26.686509       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-2", UID:"851a7135-9c98-4ff5-b925-d29adb2e3a1b", APIVersion:"v1", ResourceVersion:"395846", FieldPath:""}): type: 'Normal' reason: 'Resizing' External resizer is resizing volume pvc-851a7135-9c98-4ff5-b925-d29adb2e3a1b
I0401 22:04:26.752260       1 event.go:285] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"px-sv4test-pvc-2", UID:"851a7135-9c98-4ff5-b925-d29adb2e3a1b", APIVersion:"v1", ResourceVersion:"395846", FieldPath:""}): type: 'Normal' reason: 'VolumeResizeSuccessful' Resize volume succeeded
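The resize above was done interactively with k edit; the equivalent non-interactive call sends a JSON merge patch with the new storage request (e.g. via kubectl patch --type merge). A small Python sketch that builds such a patch body (hypothetical helper name, shown only to make the patch shape explicit):

```python
import json

def pvc_resize_patch(new_size: str) -> str:
    """Build the JSON merge-patch body that bumps a PVC's storage request."""
    return json.dumps(
        {"spec": {"resources": {"requests": {"storage": new_size}}}},
        separators=(",", ":"),
    )

# Could be passed to: kubectl patch pvc px-sv4test-pvc-2 --type merge -p '<patch>'
print(pvc_resize_patch("5Gi"))  # {"spec":{"resources":{"requests":{"storage":"5Gi"}}}}
```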

@trierra
Contributor Author

trierra commented Apr 1, 2022

Storageclass

oksana@dev-onaumov ~/g/s/g/t/deployment (master) [SIGINT]> k get storageclass
NAME                    PROVISIONER                     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
portworx-sc (default)   kubernetes.io/portworx-volume   Delete          Immediate           true                   58m
px-sv4test-sc           kubernetes.io/portworx-volume   Delete          Immediate           true                   2d1h
oksana@dev-onaumov ~/g/s/g/t/deployment (master)>

@trierra
Contributor Author

trierra commented Apr 1, 2022

@msau42 above are the test results for review

@xing-yang
Contributor

xing-yang commented Apr 1, 2022

@trierra This feature is still Alpha:
https://github.com/kubernetes/kubernetes/blob/v1.24.0-beta.0/pkg/features/kube_features.go#L927

I have not seen a code PR that moves the feature gate to beta, and it has missed the code freeze deadline. Please target Beta (off by default) in the 1.25 release. Please provide the test results as part of your code PR, not here in the enhancement issue; we also need results from e2e tests. Thanks.

@xing-yang xing-yang modified the milestones: v1.24, v1.25 Apr 1, 2022
@trierra
Contributor Author

trierra commented Apr 4, 2022

@xing-yang thanks for the feedback!

@Priyankasaggu11929
Member

Priyankasaggu11929 commented Jun 9, 2022

Hello @trierra 👋, 1.25 Enhancements team here.

Just checking in as we approach enhancements freeze on 18:00 PST on Thursday June 16, 2022.

For note, this enhancement is targeting stage beta for 1.25 (correct me if otherwise).

Here's where this enhancement currently stands: (updated on June 9, 2022)

  • KEP file using the latest template has been merged into the k/enhancements repo.
  • KEP status is marked as implementable
  • KEP has an updated, detailed test plan section filled out
  • KEP has up to date graduation criteria
  • KEP has a production readiness review that has been completed and merged into k/enhancements.

Looks like for this one, we just need to update the open PR #3345 to add an updated, detailed test plan section and have it merged.

For note, the status of this enhancement is marked as at risk. Thank you for keeping the issue description up-to-date!

@Priyankasaggu11929
Member

Priyankasaggu11929 commented Jun 17, 2022

With KEP PR #3345 merged, the enhancement is ready for the upcoming enhancements freeze. 🚀 Thank you!

@Priyankasaggu11929
Member

Priyankasaggu11929 commented Jun 19, 2022

Apologies for the noise ^

Hello @trierra 👋, we would require mentioning the unit-tests in the format specified in the updated test plan section template.

Could you please update the Test Plan section in the merged KEP as per the format required?

Please plan to get it finished by the enhancements freeze on Thursday, June 23rd, 2022. Thank you so much!

@trierra
Contributor Author

trierra commented Jun 29, 2022

@Priyankasaggu11929 see the PR here #3428

@Priyankasaggu11929
Member

Priyankasaggu11929 commented Jun 30, 2022

Thanks so much for the update, @trierra. 🙂

@krol3

krol3 commented Jul 6, 2022

Hello @trierra 👋, 1.25 Release Docs shadow here.
This enhancement is marked as ‘Needs Docs’ for 1.25 release.

Please follow the steps detailed in the documentation to open a PR against dev-1.25 branch in the k/website repo. This PR can be just a placeholder at this time, and must be created by August 4.
Also, take a look at Documenting for a release to familiarize yourself with the docs requirements for the release.

Thank you!

@trierra
Contributor Author

trierra commented Jul 12, 2022

hi @krol3, should I branch out from master or dev-1.24? I did from master and ended up with this conflict kubernetes/website#34886

@marosset
Contributor

marosset commented Jul 25, 2022

Hi @trierra 👋

1.25 enhancements team here. Checking in once more as we approach 1.25 code freeze at 01:00 UTC on Wednesday, 3rd August 2022.

It looks like with kubernetes/kubernetes#110411 this enhancement is on track for graduation to beta with the 1.25 release.

If there are any other related PRs, can you please link them to this issue and also update the initial issue description with links to all applicable PRs?

As always, we are here to help should questions come up.

Thanks!!

@krol3

krol3 commented Jul 27, 2022

hi @krol3, should I branch out from master or dev-1.24? I did from master and ended up with this conflict kubernetes/website#34886

Hi @trierra, how are you? Do you mean dev-1.25? We update the dev-1.25 branch weekly from the main branch. If you have any doubts, please contact me.

@trierra
Contributor Author

trierra commented Jul 27, 2022

Hi @marosset. It seems like all my PRs are merged, except this one kubernetes/website#34886 (comment)

@marosset
Contributor

marosset commented Jul 27, 2022

Hi @marosset. It seems like all my PRs are merged, except this one kubernetes/website#34886 (comment)

Great!
Website updates aren't subject to the upcoming code-freeze deadline so this enhancement is still on track.
Thanks for updating the Issue description too!

@xing-yang xing-yang added stage/beta Denotes an issue tracking an enhancement targeted for Beta status and removed stage/alpha Denotes an issue tracking an enhancement targeted for Alpha status labels Aug 2, 2022
@rhockenbury rhockenbury added tracked/no Denotes an enhancement issue is NOT actively being tracked by the Release Team and removed tracked/yes Denotes an enhancement issue is actively being tracked by the Release Team labels Sep 11, 2022