
Should we deprecate and remove the in-tree volume plugin hostpath dynamic provisioning feature? #123804

Open
carlory opened this issue Mar 7, 2024 · 15 comments
Assignees
Labels
kind/bug Categorizes issue or PR as related to a bug. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/storage Categorizes an issue or PR as relevant to SIG Storage.

Comments

@carlory
Member

carlory commented Mar 7, 2024

The in-tree volume plugin hostpath supports dynamically provisioning a volume for a claim when the kube-controller-manager starts with --enable-hostpath-provisioner=true.

It creates a local directory (by default /tmp/hostpath_pv/&lt;pv-name&gt;) and exposes it as a new PersistentVolume. It is meant for development and testing only and WILL NOT WORK in a multi-node cluster.

There are three problems I want to discuss:

  1. The e2e tests don't have a test case for hostpath dynamic provisioning.
    type hostPathDriver struct {
    var _ storageframework.TestDriver = &hostPathDriver{}
    var _ storageframework.PreprovisionedVolumeTestDriver = &hostPathDriver{}
    var _ storageframework.InlineVolumeTestDriver = &hostPathDriver{}
    hostPathDriver doesn't implement the DynamicPVTestDriver interface, so the e2e tests have no case for in-tree hostpath dynamic provisioning.
  2. The cluster created by kubeadm doesn't make the feature work as expected, because the kube-controller-manager pod doesn't have a hostPath volume mounted at /tmp/hostpath_pv.
    // HostPath volumes for the controller manager
    // Read-only mount for the certificates directory
    // TODO: Always mount the K8s Certificates directory to a static path inside of the container
    mounts.NewHostPathMount(kubeadmconstants.KubeControllerManager, kubeadmconstants.KubeCertificatesVolumeName, cfg.CertificatesDir, cfg.CertificatesDir, true, &hostPathDirectoryOrCreate)
    // Read-only mount for the ca certs (/etc/ssl/certs) directory
    mounts.NewHostPathMount(kubeadmconstants.KubeControllerManager, caCertsVolumeName, caCertsVolumePath, caCertsVolumePath, true, &hostPathDirectoryOrCreate)
    // Read-only mount for the controller manager kubeconfig file
    controllerManagerKubeConfigFile := filepath.Join(kubeadmconstants.KubernetesDir, kubeadmconstants.ControllerManagerKubeConfigFileName)
    mounts.NewHostPathMount(kubeadmconstants.KubeControllerManager, kubeadmconstants.KubeConfigVolumeName, controllerManagerKubeConfigFile, controllerManagerKubeConfigFile, true, &hostPathFileOrCreate)
    // Mount for the flexvolume directory (/usr/libexec/kubernetes/kubelet-plugins/volume/exec by default)
    // Flexvolume dir must NOT be readonly as it is used for third-party plugins to integrate with their storage backends via unix domain socket.
    flexvolumeDirVolumePath, idx := kubeadmapi.GetArgValue(cfg.ControllerManager.ExtraArgs, "flex-volume-plugin-dir", -1)
    if idx == -1 {
      flexvolumeDirVolumePath = defaultFlexvolumeDirVolumePath
    }
    mounts.NewHostPathMount(kubeadmconstants.KubeControllerManager, flexvolumeDirVolumeName, flexvolumeDirVolumePath, flexvolumeDirVolumePath, false, &hostPathDirectoryOrCreate)
  3. Many projects pass --enable-hostpath-provisioner=true; see https://cs.k8s.io/?q=enable-hostpath-provisioner&i=nope&files=&excludeFiles=&repos=. But they use kubeadm to create their clusters, so in-tree hostpath dynamic provisioning is never actually used by those projects. I don't know why they enable the flag.
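For problem 1, if the feature were to be kept, hostPathDriver would need to implement DynamicPVTestDriver. A rough sketch follows, using simplified, hypothetical stand-ins for the storageframework types; the real interface lives in test/e2e/storage/framework and its method takes additional parameters such as a TestConfig:

```go
package main

import "fmt"

// Simplified, hypothetical stand-ins for the storageframework types.
type StorageClass struct {
	Name        string
	Provisioner string
}

type DynamicPVTestDriver interface {
	GetDynamicProvisionStorageClass(fsType string) *StorageClass
}

type hostPathDriver struct{}

// Roughly what the driver would need so the generic dynamic-provisioning
// suites pick it up. The StorageClass name is arbitrary;
// "kubernetes.io/host-path" is the in-tree plugin's provisioner name.
func (h *hostPathDriver) GetDynamicProvisionStorageClass(fsType string) *StorageClass {
	return &StorageClass{
		Name:        "hostpath-dynamic-sc",
		Provisioner: "kubernetes.io/host-path",
	}
}

// Compile-time check mirroring the existing assertions in the driver file.
var _ DynamicPVTestDriver = &hostPathDriver{}

func main() {
	sc := (&hostPathDriver{}).GetDynamicProvisionStorageClass("")
	fmt.Println(sc.Provisioner)
}
```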

Three in-tree plugins support dynamic provisioning: hostpath, rbd (removed in 1.31), and portworxVolume (to be removed in a future release once its CSI migration is complete).

Now that the community has a wide range of CSI drivers users can rely on for dynamic provisioning, let's deprecate and remove the in-tree hostpath dynamic provisioning feature if it is indeed never used.

What do you think?

/cc @xing-yang @jsafrane @pacoxu @neolit123

/sig storage
/kind bug
/area kubeadm

related-to: add some e2e test for in-tree volume plugin to verify HonorPVReclaimPolicy


@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. kind/bug Categorizes issue or PR as related to a bug. area/kubeadm needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Mar 7, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@carlory
Member Author

carlory commented Mar 7, 2024

@neolit123
Member

The cluster created by kubeadm doesn't make the feature work as expected, because the kube-controller-manager pod doesn't have a hostPath volume mounted at /tmp/hostpath_pv.

yes, but kubeadm allows you to pass clusterconfiguration.controllerManager.extraVolumes
so i don't think there is a need for a kubeadm change here to comply with the feature.

it's also important to note that not everyone is using kubeadm.
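For illustration, a ClusterConfiguration along these lines should make the flag usable under kubeadm without any kubeadm code change. The volume name is arbitrary; the field names follow the kubeadm v1beta3 API:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
  extraVolumes:
  - name: hostpath-pv            # name chosen for this example
    hostPath: /tmp/hostpath_pv
    mountPath: /tmp/hostpath_pv
    readOnly: false
    pathType: DirectoryOrCreate
```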

@neolit123
Member

neolit123 commented Mar 8, 2024

what i'm interested in doing is removing the flex volume support from kubeadm. seems to me the feature in k8s core is stale and there was a plan to remove it completely.

// Mount for the flexvolume directory (/usr/libexec/kubernetes/kubelet-plugins/volume/exec by default)
// Flexvolume dir must NOT be readonly as it is used for third-party plugins to integrate with their storage backends via unix domain socket.
flexvolumeDirVolumePath, idx := kubeadmapi.GetArgValue(cfg.ControllerManager.ExtraArgs, "flex-volume-plugin-dir", -1)
if idx == -1 {
  flexvolumeDirVolumePath = defaultFlexvolumeDirVolumePath
}
mounts.NewHostPathMount(kubeadmconstants.KubeControllerManager, flexvolumeDirVolumeName, flexvolumeDirVolumePath, flexvolumeDirVolumePath, false, &hostPathDirectoryOrCreate)

@carlory
Member Author

carlory commented Mar 8, 2024

FlexVolume is deprecated but still available, and there is no plan to remove support, because some users may still use the volume plugin. Some discussion can be found in #122071 (comment), and the volume doc was updated by kubernetes/website#44657.

@carlory
Member Author

carlory commented Mar 8, 2024

yes, but kubeadm allows you to pass clusterconfiguration.controllerManager.extraVolumes
so i don't think there is a need for a kubeadm change here to comply with the feature.

You're right. I hadn't seen that before. Unfortunately, the manifests in https://cs.k8s.io/?q=enable-hostpath-provisioner&i=nope&files=&excludeFiles=&repos= don't add extra volumes when --enable-hostpath-provisioner=true.

An example is https://github.com/kubernetes/kubeadm/blob/main/kinder/pkg/kubeadm/config.go#L136-L143 @neolit123

@neolit123
Member

neolit123 commented Mar 8, 2024

You're right. I haven't seen that before. Unfortunately, those manifests in https://cs.k8s.io/?q=enable-hostpath-provisioner&i=nope&files=&excludeFiles=&repos= don't have extra volumes when --enable-hostpath-provisioner=true.

external tools (or users) that use kubeadm can add the extravolumes if they need to.

An example is https://github.com/kubernetes/kubeadm/blob/main/kinder/pkg/kubeadm/config.go#L136-L143 @neolit123

this is something kinder copied from kind:
https://github.com/kubernetes-sigs/kind/blob/c83316d25e1c40f1c66982b637edf3ee34a5e0d4/pkg/cluster/internal/kubeadm/config.go#L204

i think it can be removed from kinder because we don't need it.

it was added to kind here:
kubernetes-sigs/kind#397

kind creates a default host-path StorageClass but cannot create
PersistentVolumes because the enable-hostpath-provisioner flag is not
set on kube-controller-manager.

@joejulian @BenTheElder

e2e tests don't have a test case for hostpath dynamic provisioning.

if the feature should be kept a test must be added for it.
up to sig storage.

@neolit123
Member

/remove-area kubeadm

@carlory
Member Author

carlory commented Mar 8, 2024

/cc @msau42

@jsafrane
Member

jsafrane commented Mar 8, 2024

I am for removing the hostpath provisioner.

For the record, it can still be used in hack/local-up-cluster.sh, so we would need to remove it from there too:

ENABLE_HOSTPATH_PROVISIONER=${ENABLE_HOSTPATH_PROVISIONER:-"false"}

Officially, we would need to mark the --enable-hostpath-provisioner option in KCM as deprecated with a release note, send deprecation notices and wait for a few (2?) releases.

@carlory
Member Author

carlory commented Mar 10, 2024

@jsafrane Can we merge #123841 in 1.30?

@BenTheElder
Member

kind's default provisioner these days "implements PVs" using https://github.com/rancher/local-path-provisioner, so I think we can drop this. We can version-gate disabling it in kind on some future k8s release to let any remaining users transition.

For the record, it can be still used in hack/local-up-cluster.sh, so we would need to remove it from there too:

cc @dims

I don't think we should just deprecate out a feature like this without:

  1. A KEP, to capture consensus from the wider project
  2. Actually searching for usage and exploring impact

@BenTheElder
Member

Consider for example non-kubernetes-repo users: https://github.com/search?q=enable-hostpath-provisioner&type=code

@carlory
Member Author

carlory commented Mar 14, 2024

A KEP, to capture consensus from the wider project

@BenTheElder @jsafrane I added a KEP in kubernetes/enhancements#4548. Can you help review it?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 12, 2024

9 participants