
local blockdevice #83265

Closed
kfox1111 opened this issue Sep 28, 2019 · 16 comments · Fixed by #100894
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/bug: Categorizes issue or PR as related to a bug.
sig/storage: Categorizes an issue or PR as relevant to SIG Storage.

Comments

@kfox1111

I tried to use a local volume block device in minikube. It's trying to call losetup with the -j option and failing, because the losetup in minikube does not provide this option. This may affect other distros as well. The end result is that local block devices don't work on the platform.

@kfox1111 kfox1111 added the kind/bug Categorizes issue or PR as related to a bug. label Sep 28, 2019
@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Sep 28, 2019
@kfox1111
Author

/sig storage

@k8s-ci-robot k8s-ci-robot added sig/storage Categorizes an issue or PR as relevant to SIG Storage. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 28, 2019
@pohly
Contributor

pohly commented Oct 30, 2019

Probably caused by this code:

// GetLoopDevice returns the full path to the loop device associated with the given path.
func (v VolumePathHandler) GetLoopDevice(path string) (string, error) {
	_, err := os.Stat(path)
	if os.IsNotExist(err) {
		return "", errors.New(ErrDeviceNotFound)
	}
	if err != nil {
		return "", fmt.Errorf("not attachable: %v", err)
	}
	args := []string{"-j", path}

@kfox1111
Author

Did an strace on losetup -j. It looks like the same info can be had from:
cat /sys/block/loop0/loop/backing_file
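
For reference, a minimal Go sketch of how that lookup could be done by scanning sysfs instead of shelling out to losetup -j. The function name and error handling here are illustrative only, not the actual Kubernetes implementation:

// Hypothetical sketch: find the loop device whose backing file matches the
// given path by reading /sys/block/loopN/loop/backing_file.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func getLoopDeviceFromSysfs(path string) (string, error) {
	// Each active loop device exposes its backing file in sysfs.
	backingFiles, err := filepath.Glob("/sys/block/loop*/loop/backing_file")
	if err != nil {
		return "", err
	}
	for _, bf := range backingFiles {
		data, err := os.ReadFile(bf)
		if err != nil {
			continue // device may have been detached in the meantime
		}
		if strings.TrimSpace(string(data)) == path {
			// /sys/block/loopN/loop/backing_file -> /dev/loopN
			dev := filepath.Base(filepath.Dir(filepath.Dir(bf)))
			return "/dev/" + dev, nil
		}
	}
	return "", fmt.Errorf("no loop device found for %s", path)
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: loopdev <backing-file-path>")
		os.Exit(1)
	}
	dev, err := getLoopDeviceFromSysfs(os.Args[1])
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println(dev)
}

An approach like this would avoid depending on the host's losetup supporting the -j flag, which the busybox variant shipped with minikube does not.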

@irizzant

Just had the same problem with Ceph cluster creation in minikube; the osd-prepare pod doesn't start because of this:

Events:
  Type     Reason           Age                 From               Message
  ----     ------           ----                ----               -------
  Normal   Scheduled        2m2s                default-scheduler  Successfully assigned rook-ceph/rook-ceph-osd-prepare-set1-0-data-sgpq7-j8tf9 to minikube
  Warning  FailedMapVolume  58s (x8 over 2m2s)  kubelet, minikube  MapVolume.MapBlockVolume failed for volume "local-pv-c84df416" : blkUtil.AttachFileDevice failed. globalMapPath:/var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pv-c84df416, podUID: a477949a-a571-47ef-8409-6f8a3404dc64: GetLoopDevice failed for path /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pv-c84df416/a477949a-a571-47ef-8409-6f8a3404dc64: losetup -j /var/lib/kubelet/plugins/kubernetes.io~local-volume/volumeDevices/local-pv-c84df416/a477949a-a571-47ef-8409-6f8a3404dc64 failed: exit status 1

@mwennrich

FYI: https://github.com/kubernetes-csi/csi-driver-host-path has the same problem in minikube. Replacing the /sbin/losetup busybox link with a "real" losetup helped for now, but of course that isn't a proper fix.

majst01 pushed a commit to metal-stack/csi-lvm that referenced this issue Feb 8, 2020
…ostPathVolumeSource (#8)

* return a PersistentVolume of type LocalVolumeSource instead of type HostPathVolumeSource
https://pkg.go.dev/k8s.io/api/core/v1?tab=doc#LocalVolumeSource

* adding a hint on how to work around a losetup issue with minikube (see kubernetes/kubernetes#83265 )

Fixes #3
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 30, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 30, 2020
@irizzant

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 30, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 28, 2020
@irizzant

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 29, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 27, 2020
@irizzant

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 27, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2021
@irizzant

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 25, 2021
@pohly
Contributor

pohly commented Feb 26, 2021

/help

@k8s-ci-robot
Contributor

@pohly:
This request has been marked as needing help from a contributor.

Please ensure the request meets the requirements listed here.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Feb 26, 2021