Added support for NVME Hyperdisks on Windows Nodes #2114


Open · ivanLeont wants to merge 1 commit into master

Conversation

ivanLeont

What type of PR is this?

/kind feature

What this PR does / why we need it:

This PR enables the use of Hyperdisk/NVMe volumes on Windows nodes. It works by having the CSI node plugin query Google's API for the Hyperdisk's serial number and match it against the index/serial of the corresponding disk mounted on Windows.
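To illustrate the idea, here is a minimal sketch of the matching step (not the PR's actual code: listWindowsDiskSerials is a placeholder for a csi-proxy ListDiskIDs-style call, and the expected serial is whatever Google's API reports for the Hyperdisk):

```go
package main

import "fmt"

// listWindowsDiskSerials is a stand-in for a csi-proxy ListDiskIDs call:
// it maps each Windows disk number to the serial that disk reports.
func listWindowsDiskSerials() (map[uint32]string, error) {
	// In the real driver this would come from the csi-proxy Disk API;
	// the values here are made up for the sketch.
	return map[uint32]string{0: "boot-disk-serial", 1: "hyperdisk-serial"}, nil
}

// findDiskNumberBySerial returns the Windows disk number whose reported
// serial matches the one Google's API returned for the attached Hyperdisk.
func findDiskNumberBySerial(expectedSerial string) (uint32, error) {
	serials, err := listWindowsDiskSerials()
	if err != nil {
		return 0, err
	}
	for number, serial := range serials {
		if serial == expectedSerial {
			return number, nil
		}
	}
	return 0, fmt.Errorf("no attached disk reports serial %q", expectedSerial)
}

func main() {
	number, err := findDiskNumberBySerial("hyperdisk-serial")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Printf("hyperdisk is Windows disk #%d\n", number)
}
```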

Which issue(s) this PR fixes:

Fixes # N/A

Special notes for your reviewer:

Does this PR introduce a user-facing change?:

NONE

@k8s-ci-robot added the release-note-none and kind/feature labels on Jun 17, 2025

linux-foundation-easycla bot commented Jun 17, 2025

CLA Signed

The committers are authorized under a signed CLA.

@k8s-ci-robot added the cncf-cla: no label on Jun 17, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: ivanLeont
Once this PR has been reviewed and has the lgtm label, please assign pwschuurman for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @ivanLeont!

It looks like this is your first PR to kubernetes-sigs/gcp-compute-persistent-disk-csi-driver 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/gcp-compute-persistent-disk-csi-driver has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

Hi @ivanLeont. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot added the needs-ok-to-test, size/L, and cncf-cla: yes labels and removed the cncf-cla: no label on Jun 17, 2025
@ivanLeont
Author

Adding a note here: this still works even if the GKE cluster has Workload Identity enabled. The DaemonSet kube-system/gke-metadata-server, which manages metadata access, appears to allow the SA pdcsi-node-sa (used by the pdcsi-node-windows DaemonSet) to bypass the metadata restrictions that a user-created SA is subject to. I confirmed this by creating a pod that uses the SA pdcsi-node-sa and verifying that it could access the necessary disk data.
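For reference, a minimal probe along the lines of the check I ran (a sketch, assuming it runs in a pod bound to the SA in question; the disks/ subtree is one of the metadata entries that filtering normally hides):

```go
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Query the instance's disks/ subtree from the metadata server.
	// With Workload Identity filtering, a user-created SA gets a 404 here,
	// while a passthrough SA such as pdcsi-node-sa sees the real disk data.
	req, err := http.NewRequest("GET",
		"http://metadata.google.internal/computeMetadata/v1/instance/disks/?recursive=true", nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Metadata-Flavor", "Google")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```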

@mattcary
Contributor

> Adding a note here: this still works even if the GKE cluster has Workload Identity enabled. The DaemonSet kube-system/gke-metadata-server, which manages metadata access, appears to allow the SA pdcsi-node-sa (used by the pdcsi-node-windows DaemonSet) to bypass the metadata restrictions that a user-created SA is subject to. I confirmed this by creating a pod that uses the SA pdcsi-node-sa and verifying that it could access the necessary disk data.

Hmm, super interesting, this isn't expected. Can you give details of your setup? gcloud compute instances describe as well as kubectl get -o yaml for both the node and the pd csi daemonset deployment you use with this PR?

Thx.

@mattcary
Contributor

/ok-to-test

@k8s-ci-robot added the ok-to-test label and removed the needs-ok-to-test label on Jun 20, 2025
@k8s-ci-robot
Contributor

@ivanLeont: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-gcp-compute-persistent-disk-csi-driver-e2e-windows-2022
Commit: a9a2d90
Required: false
Rerun command: /test pull-gcp-compute-persistent-disk-csi-driver-e2e-windows-2022

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@ivanLeont
Author

ivanLeont commented Jun 20, 2025

> > Adding a note here: this still works even if the GKE cluster has Workload Identity enabled. The DaemonSet kube-system/gke-metadata-server, which manages metadata access, appears to allow the SA pdcsi-node-sa (used by the pdcsi-node-windows DaemonSet) to bypass the metadata restrictions that a user-created SA is subject to. I confirmed this by creating a pod that uses the SA pdcsi-node-sa and verifying that it could access the necessary disk data.
>
> Hmm, super interesting, this isn't expected. Can you give details of your setup? gcloud compute instances describe as well as kubectl get -o yaml for both the node and the pd csi daemonset deployment you use with this PR?
>
> Thx.

Here are the details of the setup:

gke-metadata-server.txt
cd4-node.txt
cluster-info.txt
csi-gce-pd-node-win.yaml.txt
pdcsi-node.yaml.txt

When deploying a cluster with Workload Identity enabled, I noticed that after running ./deploy-driver.sh, the Windows node plugin pods deployed by the DS csi-gce-pd-node-win were crashing with the following error message:

I0620 20:06:02.254719    5792 main.go:122] Operating compute environment set to: production and computeEndpoint is set to: <nil>
I0620 20:06:02.255761    5792 main.go:131] Sys info: NumCPU: 4 MAXPROC: 1
I0620 20:06:02.255761    5792 main.go:136] Driver vendor version a9a2d90593f36050b197c7576b2970e8420b99e9
I0620 20:06:02.257324    5792 safe-mounter_windows.go:66] using CSIProxyMounterV1, API Versions Disk: v1, Filesystem: v1, Volume: v1
F0620 20:06:02.267481    5792 main.go:271] Failed to set up metadata service: failed to get machine-type: metadata: GCE metadata "instance/machine-type" not defined

https://github.com/kubernetes-sigs/gcp-compute-persistent-disk-csi-driver/blob/master/cmd/gce-pd-csi-driver/main.go#L271
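For context, the fatal line corresponds roughly to a startup lookup like this (a sketch using the cloud.google.com/go/compute/metadata package, not the driver's exact code):

```go
package main

import (
	"log"

	"cloud.google.com/go/compute/metadata"
)

func main() {
	// The driver's metadata service needs the instance's machine type at
	// startup; when gke-metadata-server filters instance/machine-type for
	// the pod's SA, this lookup fails and the driver exits, as in the log.
	machineType, err := metadata.Get("instance/machine-type")
	if err != nil {
		log.Fatalf("failed to set up metadata service: failed to get machine-type: %v", err)
	}
	log.Printf("machine type: %s", machineType)
}
```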

I then enabled the default CSI driver and noticed that its Windows pods were not crashing, so I created a test pod in each of the two namespaces, each using a different SA.

The first pod, metadata-test, deployed in the ns gce-pd-csi-driver, uses the SA csi-gce-pd-node-sa-win, which is created by ./deploy-driver.sh (I added the required bindings and annotations to work with Workload Identity):

PS C:\> Invoke-RestMethod -Uri 'http://metadata.google.internal/computeMetadata/v1/instance' -Headers @{'Metadata-Flavor'='Google'}
attributes/
hostname
id
name
service-accounts/
zone


PS C:\> Invoke-RestMethod -Uri 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts' -Headers @{'Metadata-Flavor'='Google'}
default/
sa_number@project_id.iam.gserviceaccount.com/


PS C:\> 

The second pod, also named metadata-test but deployed in the ns kube-system, uses the SA pdcsi-node-sa, which comes in by default when the default CSI driver is enabled:

PS C:\> Invoke-RestMethod -Uri 'http://metadata.google.internal/computeMetadata/v1/instance/' -Headers @{'Metadata-Flavor'='Google'}
attributes/
cpu-platform
credentials/
description
disks/
gce-workload-certificates/
guest-attributes/
hostname
id
image
licenses/
machine-type
maintenance-event
name
network-interfaces/
partner-attributes/
preempted
remaining-cpu-time
scheduling/
service-accounts/
shutdown-details/
tags
virtual-clock/
zone

PS C:\> Invoke-RestMethod -Uri 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts' -Headers @{'Metadata-Flavor'='Google'}
240471850237-compute@developer.gserviceaccount.com/
default/

PS C:\> 

So when I used a webhook to swap the original image in the DS kube-system/pdcsi-node-windows, my custom image worked fine. I also observed that the gke-metadata-server DS has a flag, --passthrough-ksa-list, that includes kube-system:pdcsi-node-sa as one of the SAs allowed to pass through.

Let me know if you need any more details! 😃

@mattcary
Contributor

> I also observed that the gke-metadata-server DS has a flag, --passthrough-ksa-list, that includes kube-system:pdcsi-node-sa as one of the SAs allowed to pass through.

Ahhh, interesting. We didn't know the security team did that :-P

Let us figure out if this is the right way to go, then.
