PersistentVolumeLabel admission blocks PVs on Azure + vSphere #124504

Open
jsafrane opened this issue Apr 24, 2024 · 3 comments · May be fixed by #124505
Labels
- kind/bug: Categorizes issue or PR as related to a bug.
- needs-triage: Indicates an issue or PR lacks a `triage/foo` label and requires one.
- priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
- sig/storage: Categorizes an issue or PR as relevant to SIG Storage.

Comments

@jsafrane
Member

jsafrane commented Apr 24, 2024

What happened?

In 1.30, the PersistentVolumeLabel admission plugin blocks creation of in-tree AzureDisk and vSphere PVs:

persistentvolumes "pvc-2d9b3999-f60d-4797-923f-210f1f75025d" is forbidden:
error querying AzureDisk volume pvc-2d9b3999-f60d-4797-923f-210f1f75025d:
unable to build Azure cloud provider for AzureDisk

The reason is that the in-tree cloud providers were removed (#122857), but the admission plugin still calls GetCloudProvider, which now fails:

cloudProvider, err := cloudprovider.GetCloudProvider("azure", cloudConfigReader)
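
The failure mode and one possible fix can be sketched in isolation: when the provider is no longer compiled in, the lookup fails, and the plugin turns that error into an admission rejection. Below is a minimal, self-contained Go sketch of the graceful-degradation idea (skip labeling instead of rejecting). All names here (`getCloudProvider`, `admitPV`, `errNoProvider`) are hypothetical stand-ins, not the actual k8s.io/cloud-provider API, and this is not the code in #124505:

```go
package main

import (
	"errors"
	"fmt"
)

// errNoProvider mimics the failure seen in 1.30 once the in-tree
// Azure/vSphere providers were removed from the binary.
var errNoProvider = errors.New("unable to build Azure cloud provider for AzureDisk")

// getCloudProvider is a stand-in for the real provider registry lookup;
// in this sketch it always fails, as it does on a 1.30 cluster.
func getCloudProvider(name string) (map[string]string, error) {
	return nil, errNoProvider
}

// admitPV models the graceful-degradation fix: if the cloud provider
// cannot be built, admit the PV without zone/region labels instead of
// rejecting it outright.
func admitPV(pvName string) (labels map[string]string, err error) {
	provider, err := getCloudProvider("azure")
	if err != nil {
		// Pre-fix behavior would be:
		//   return nil, fmt.Errorf("error querying AzureDisk volume %s: %v", pvName, err)
		// Post-fix behavior: skip labeling and admit the PV.
		fmt.Printf("skipping labels for %s: %v\n", pvName, err)
		return nil, nil
	}
	return provider, nil
}

func main() {
	labels, err := admitPV("pvc-2d9b3999-f60d-4797-923f-210f1f75025d")
	fmt.Println(labels == nil, err == nil)
}
```

The design choice sketched here is that topology labeling is best-effort metadata, so a missing provider should degrade to "no labels" rather than block PV creation.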

What did you expect to happen?

The PV should be admitted and created.

How can we reproduce it (as minimally and precisely as possible)?

Enable PersistentVolumeLabel admission plugin. For example, when running local-up-cluster.sh:

ALLOW_PRIVILEGED=true ENABLE_ADMISSION_PLUGINS="NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,Priority,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction,PersistentVolumeLabel" bash -x hack/local-up-cluster.sh

Create an AzureDisk PV on any 1.30 cluster; you don't need an actual Azure cloud:

kubectl create -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvol
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 5Gi
  azureDisk:
    diskName: test2.vhd
    diskURI: https://someaccount.blob.core.windows.net/vhds/test2.vhd
EOF

Anything else we need to know?

/sig storage
/priority important-soon

Kubernetes version

Today's master

$ kubectl version
Server Version: v1.31.0-alpha.0.24+9791f0d1f39f3f

Cloud provider

Azure

OS version

No response

Install tools

local-up-cluster.sh

Container runtime (CRI) and version (if applicable)

No response

Related plugins (CNI, CSI, ...) and versions (if applicable)

No response

@jsafrane jsafrane added the kind/bug Categorizes issue or PR as related to a bug. label Apr 24, 2024
@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 24, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jsafrane jsafrane linked a pull request Apr 24, 2024 that will close this issue
@jsafrane
Member Author

jsafrane commented Apr 24, 2024

I am fixing the worst part of this bug in #124505 and I plan to backport the fix to 1.30.

However, we should consider removing the whole PersistentVolumeLabel admission plugin. Most of the in-tree cloud providers have been removed; only GCE remains, and IMO even that one will be removed sooner or later. The admission plugin then has no way to get a volume's zone and region, and is therefore useless.

The admission plugin's code comment said (in 1.26, when we still had all in-tree cloud providers):

// DEPRECATED: in a future release, we will use mutating admission webhooks to apply PV labels.
// Once the mutating admission webhook is used for AWS, Azure, and GCE,
// this admission controller will be removed.

Such a webhook never materialized for AWS, Azure, or vSphere, and I am not aware of any plans for GCE.

@jsafrane
Member Author

jsafrane commented May 7, 2024

> Such webhook never materialized for AWS, Azure nor vSphere and I am not aware of any plans for GCE.

It did materialize in https://github.com/kubernetes-sigs/cloud-pv-admission-labeler

So removing it from k/k is the best way forward.
