
Version 1.26 no longer auto-mounts VirtualBox shared folders #14465

Closed
ghost opened this issue Jun 29, 2022 · 35 comments
Labels
area/guest-vm General configuration issues with the minikube guest VM co/virtualbox kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@ghost

ghost commented Jun 29, 2022

What Happened?

After upgrading from 1.25.2 to 1.26.0, shared folders from the VirtualBox driver are no longer auto-mounted. If this is expected behavior or requires new configs, I have not found any documentation to that effect.

Using:

  • Mac OSX Monterey 12.4
  • VirtualBox
  • minikube upgraded by brew

In versions prior to 1.26, any "Shared Folders" set up in VirtualBox with "automount" enabled would be auto-mounted under / when the guest VM booted. In 1.26 this no longer happens. minikube mount does work, but it is not convenient, and I would prefer to use the native vboxsf driver rather than 9p anyway.

I tried a full minikube delete and a fresh minikube start, with no improvement.

The default setup for VirtualBox is to share the user's home directory. In the past, minikube would automount this to /Users.

Adding more shared directories with "automount" would also result in them being automatically mounted in the root directory of the guest VM.

None of this happens in 1.26.

Attach the log file

log.txt

Note log entries such as:

I0629 01:38:28.203993 32538 main.go:134] libmachine: COMMAND: /usr/local/bin/VBoxManage sharedfolder add minikube --name Users --hostpath /Users --automount
I0629 01:38:28.134845 32538 main.go:134] libmachine: COMMAND: /usr/local/bin/VBoxManage guestproperty set minikube /VirtualBox/GuestAdd/SharedFolders/MountPrefix /
I0629 01:38:28.168837 32538 main.go:134] libmachine: COMMAND: /usr/local/bin/VBoxManage guestproperty set minikube /VirtualBox/GuestAdd/SharedFolders/MountDir /
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"

These are the expected commands and are supposed to result in auto-mounting, but they do not.
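
For reference, a manual sketch of how to check the share from the host and mount it inside the guest (assuming the default Users share shown in the log above; adjust the share name and mount point to your own setup):

% VBoxManage showvminfo minikube --machinereadable | grep SharedFolder
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
% minikube ssh 'sudo mkdir -p /Users && sudo mount -t vboxsf Users /Users'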

Operating System

macOS (Default)

Driver

VirtualBox

@ghost
Author

ghost commented Jun 29, 2022

Downgrading back to 1.25.2 definitely fixes this problem.

@neffets

neffets commented Jun 30, 2022

Same here on Ubuntu 22.04 with minikube 1.26.

It only works on 1.24.0 and 1.25.2.

@nwbt

nwbt commented Jul 5, 2022

Same experience

OS: Mac OSX Monterey 12.3
VirtualBox: 6.1.34 r150636 (Qt5.6.3)

Downgrading to 1.25.2 worked as described (thanks).

@afbjorklund
Collaborator

afbjorklund commented Jul 5, 2022

There were no changes to libmachine or the virtualbox driver, but perhaps it is something with the kernel.

Try modprobe vboxsf

The removal of the vbox-guest.service also sounds related (the service also lowered the default 20-minute timesync interval):

-ExecStartPre=-/usr/sbin/modprobe vboxsf
-# Normally, VirtualBox only syncs every 20 minutes. This syncs on start, and
-# forces an immediate sync if VM time is over 5 seconds off.
-ExecStart=/usr/sbin/VBoxService -f --disable-automount --timesync-set-start --timesync-set-threshold 5000
-

commit 770d41f

CONFIG_* options related to vbox were added to linux_x86_64_defconfig and, as a consequence, vbox-related packages were removed since the vbox modules are available in the upstream kernel.
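
A quick check sketch for whether the kernel side is still in place in the 1.26 ISO, so that only the userspace pieces (VBoxService and/or mount.vboxsf) would be missing; if the built-in modules are there, lsmod should show something like:

$ minikube ssh 'sudo modprobe vboxsf'
$ minikube ssh 'lsmod | grep vbox'
vboxsf                 36864  0
vboxguest              40960  1 vboxsf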

@afbjorklund afbjorklund added kind/bug Categorizes issue or PR as related to a bug. co/virtualbox area/guest-vm General configuration issues with the minikube guest VM labels Jul 5, 2022
@vroetman

vroetman commented Jul 7, 2022

Same problem on RHEL 8.6.

minikube ssh 'sudo mkdir -p /hosthome; sudo mount -t vboxsf hosthome /hosthome'

works, so the vboxsf drivers are working fine. It just doesn't mount automatically on start. The missing VBoxService may well be the problem.

@vroetman

vroetman commented Aug 8, 2022

@eiffel-fl, are you aware of this issue? It seems likely to be a regression from 770d41f.

@eiffel-fl
Contributor

Hi.

Sorry, I am not aware of it, as I am not really familiar with VirtualBox; I use qemu when I need to emulate/virtualize something.
But it is highly possible that I was too aggressive in removing VirtualBox-related things in my contribution...

Did you try adding back the above-mentioned service?
If not, can you try?
Otherwise, just give me a complete reproducer and I will try to solve it.

Best regards.

@eiffel-fl
Contributor

minikube ssh 'sudo mkdir -p /hosthome; sudo mount -t vboxsf hosthome /hosthome'

I am trying to add back only the things that are necessary; for example, we no longer need to build the kernel modules since they are now built with the kernel.
I think we only need VBoxService and mount.vboxsf.
I tried the command you wrote and I got the following:

$ minikube ssh 'sudo mkdir -p /hosthome; sudo mount -t vboxsf hosthome /hosthome'
mount: /hosthome: unknown filesystem type 'vboxsf'.
ssh: Process exited with status 32
$ minikube ssh 'lsmod | grep vbox'
vboxsf                 36864  0
vboxguest              40960  1 vboxsf

I think this is because we are lacking mount.vboxsf; can you please confirm, @vroetman?
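
A small check sketch to tell the two apart from inside the guest (assuming the image ships which and has /proc mounted, which the Buildroot image normally does): the first command looks for the userspace mount helper, the second shows whether the vboxsf filesystem is registered with the kernel at all. With a util-linux style mount, the helper is optional; if it is absent, mount falls back to the plain mount(2) syscall, which works as long as the filesystem is registered.

$ minikube ssh 'which mount.vboxsf'
$ minikube ssh 'grep vboxsf /proc/filesystems'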

@afbjorklund
Collaborator

afbjorklund commented Aug 11, 2022

It was in that vbox-guest package you deleted

@eiffel-fl
Contributor

It was in that vbox-guest package you deleted

That is my guess too, but this comment stated the command works, which does not seem to be the case here.
I am building VirtualBox to understand the build process, and I will write something to add back what is needed.

@vroetman

I was working on building the ISO image yesterday so I could test this, but it took a while to figure out how to build it.
I just pulled down minikube v1.26.1 again to test. I can run the above command without issues. I am on RHEL-8 running VirtualBox 6.1.36r152435.

You can see here that it is mounted:

% minikube ssh mount | grep vboxsf
hosthome on /hosthome type vboxsf (rw,relatime)

But I don't see mount.vboxsf on the system.

%  minikube ssh cat /etc/os-release
NAME=Buildroot
VERSION=2021.02.12-1-g379d09f-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"

@eiffel-fl
Contributor

% minikube ssh mount | grep vboxsf
hosthome on /hosthome type vboxsf (rw,relatime)

I am using the same version as you, but I cannot use vboxsf:

$ minikube ssh
...
$ sudo mkdir -p /hosthome; sudo mount -t vboxsf hosthome /hosthome
mount: /hosthome: unknown filesystem type 'vboxsf'.
$ cat /etc/os-release 
NAME=Buildroot
VERSION=2021.02.12-1-g379d09f-dirty
ID=buildroot
VERSION_ID=2021.02.12
PRETTY_NAME="Buildroot 2021.02.12"

I am on RHEL-8 running VirtualBox 6.1.36r152435.

Can you please provide a complete reproducer (i.e. how you run minikube)?
I will build an image and try to solve this problem.

@vroetman

vroetman commented Aug 11, 2022

These are the steps I used:

% minikube version
minikube version: v1.26.1
commit: 1a5bb3e6850ca685fde7d4b07213e922680e9d2c-dirty

% minikube -p test start minikube --driver=virtualbox
😄  [test] minikube v1.26.1 on Redhat 8.6
✨  Using the virtualbox driver based on user configuration
👍  Starting control plane node test in cluster test
🔥  Creating virtualbox VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.24.3 on Docker 20.10.17 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🔎  Verifying Kubernetes components...
🌟  Enabled addons: default-storageclass, storage-provisioner
    ▪ Want kubectl v1.24.3? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "test" cluster and "default" namespace by default

% minikube profile test
✅  minikube profile was successfully set to test

% minikube ssh ls /hosthome
ls: cannot access '/hosthome': No such file or directory
ssh: Process exited with status 2

% minikube ssh sudo mkdir /hosthome
% minikube ssh 'sudo mount -t vboxsf hosthome /hosthome'
% minikube ssh mount | grep vboxsf
hosthome on /hosthome type vboxsf (rw,relatime)

@eiffel-fl
Contributor

I will install VirtualBox and try to reproduce.

commit: 1a5bb3e6850ca685fde7d4b07213e922680e9d2c-dirty

What does this commit reference?

@vroetman

vroetman commented Aug 11, 2022

commit: 1a5bb3e6850ca685fde7d4b07213e922680e9d2c-dirty

What does this commit reference?

Oh, I got minikube from conda-forge, so that is from the minikube-feedstock repo.
conda-forge/minikube-feedstock@1a5bb3e

% curl -sL https://github.com/kubernetes/minikube/archive/v1.26.1.tar.gz | sha256sum
71e56130ffaf6fd1c4b6777dc0b88935b8f2cbdb83fd8d2904e81e7cc1c48a60  -

@eiffel-fl
Contributor

For some strange reason, I cannot boot minikube using virtualbox as the driver:

I0811 17:26:47.098320  159337 main.go:134] libmachine: COMMAND: /usr/bin/VBoxManage modifyvm test --natpf1 delete ssh
I0811 17:26:47.133853  159337 main.go:134] libmachine: STDOUT:
{
}
I0811 17:26:47.133880  159337 main.go:134] libmachine: STDERR:
{
VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1936 of file VBoxManageModifyVM.cpp

Anyway, I am polishing the buildroot recipe; I think it should do the trick.

@vroetman

That is strange. Were you using the virtualbox driver when you were testing earlier, or some other driver? vboxsf is the shared filesystem driver for VirtualBox, so it would make sense that it does not work with another driver.

@eiffel-fl
Contributor

That is strange. Were you using the virtualbox driver when you were testing earlier, or some other driver? vboxsf is the shared filesystem driver for VirtualBox, so it would make sense that it does not work with another driver.

I never used virtualbox, I only use qemu/kvm.
I began to write a new recipe, but I will go with adding back the old one first and then polish it to make it cleaner (in a separate PR).
I will open a PR tomorrow which should fix the problem.

@afbjorklund
Collaborator

afbjorklund commented Aug 11, 2022

I never used virtualbox, I only use qemu/kvm.

Then why modify virtualbox? It seems to be erroring here too.

VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1936 of file VBoxManageModifyVM.cpp

Possibly because it is no longer compatible with VirtualBox 6.1?

😄 minikube v1.26.1 on Ubuntu 20.04
VBoxManage: 6.1.34_Ubuntur150636


EDIT: It did boot though, and it did mount.

$ sudo mkdir /hosthome
$ sudo mount -t vboxsf hosthome /hosthome

So the binary is probably not needed?

@eiffel-fl
Contributor

eiffel-fl commented Aug 12, 2022

I never used virtualbox, I only use qemu/kvm.

Then why modify virtualbox? It seems to be erroring here too.

VBoxManage: error: Code NS_ERROR_INVALID_ARG (0x80070057) - Invalid argument value (extended info not available)
VBoxManage: error: Context: "RemoveRedirect(Bstr(ValueUnion.psz).raw())" at line 1936 of file VBoxManageModifyVM.cpp

Possibly because it is no longer compatible with VirtualBox 6.1?

😄 minikube v1.26.1 on Ubuntu 20.04 VBoxManage: 6.1.34_Ubuntur150636

It did boot though, and it did mount.

$ sudo mkdir /hosthome
$ sudo mount -t vboxsf hosthome /hosthome

I did not understand your message.
At the beginning you said it did not boot, but then you were able to boot it?

So the binary is probably not needed?

If the binary is not needed (which will make the fix easier, as it is only a matter of a service), what is the path which should be mounted?

@vroetman

If the binary is not needed (which will make the fix easier, as it is only a matter of a service), what is the path which should be mounted?

I think the service is what is missing. The path is different depending on the host operating system (Linux, Mac, and Windows are all different). My little mount test is only showing that the vboxsf hosthome share is able to mount somewhere.

If you have a working branch, or make a PR, I can build the iso and test it.
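
For context, a sketch of where the VirtualBox automounter would put a share once VBoxService runs again: as far as I understand, it builds the mount point from the MountDir and MountPrefix guest properties plus the share name, and minikube already sets both properties to "/" (see the log at the top of this issue), so the hosthome share on a Linux host would land on /hosthome and the Users share on macOS on /Users. The properties can be read back from the host:

% VBoxManage guestproperty get minikube /VirtualBox/GuestAdd/SharedFolders/MountDir
Value: /
% VBoxManage guestproperty get minikube /VirtualBox/GuestAdd/SharedFolders/MountPrefix
Value: /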

@eiffel-fl
Contributor

If the binary is not needed (which will make the fix easier, as it is only a matter of a service), what is the path which should be mounted?

I think the service is what is missing. The path is different depending on the host operating system (Linux, Mac, and Windows are all different). My little mount test is only showing that the vboxsf hosthome share is able to mount somewhere.

If you have a working branch, or make a PR, I can build the iso and test it.

What do you mean by "service": the systemd service or VBoxService?
The recipe is not the same if I need to get VBoxService (though at first I will extract it from the .iso rather than build it from sources).

@afbjorklund
Collaborator

At the beginning you said it did not boot, but then you were able to boot it?

It gave an error so I thought it failed, but in the end the boot was successful after all.

Maybe it was there all along; I did not compare with how it worked with the old ISO...

Sorry for the mixed messages.

@eiffel-fl
Contributor

At the beginning you said it did not boot, but then you were able to boot it?

It gave an error so I thought it failed, but in the end the boot was successful after all.

Maybe it was there all along; I did not compare with how it worked with the old ISO...

Sorry for the mixed messages.

No problem, thank you for the clarification.

@vroetman

I built and tested the iso from the PR, and left some comments. It doesn't work yet, but I think we are moving in the right direction.

@vroetman

What do you mean by "service": the systemd service or VBoxService?
The recipe is not the same if I need to get VBoxService (though at first I will extract it from the .iso rather than build it from sources).

The vboxservice.service file calls the VBoxService program.
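
For reference, a minimal sketch of what such a unit could look like, reconstructed from the ExecStartPre/ExecStart lines of the removed vbox-guest.service quoted earlier in this issue; the exact paths, and whether to keep --disable-automount, are open questions for the PR:

[Unit]
Description=VirtualBox guest services (VBoxService)
# systemd reports VirtualBox guests as "oracle"
ConditionVirtualization=oracle

[Service]
# The vboxsf module is now built with the kernel; just make sure it is loaded.
ExecStartPre=-/usr/sbin/modprobe vboxsf
# Flags taken from the removed unit; drop --disable-automount if VBoxService
# itself should mount the shared folders.
ExecStart=/usr/sbin/VBoxService -f --disable-automount --timesync-set-start --timesync-set-threshold 5000

[Install]
WantedBy=multi-user.target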

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 15, 2022
@eiffel-fl
Contributor

/close by #14784.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 15, 2022
@mm0

mm0 commented Dec 20, 2022

Same problem on RHEL 8.6.

minikube ssh 'sudo mkdir -p /hosthome; sudo mount -t vboxsf hosthome /hosthome'

works, so the vboxsf drivers are working fine. It just doesn't mount automatically on start. The missing VBoxService may well be the problem.

This is now fixed as of 1.28, @vroetman.

Adding more shared directories with "automount" would also result in them being automatically mounted in the root directory of the guest VM.

@adam-olema I don't see how this would be possible considering that VBoxService was previously configured to use the --disable-automount flag

@ghost
Author

ghost commented Dec 21, 2022

@adam-olema I don't see how this would be possible considering that VBoxService was previously configured to use the --disable-automount flag

I don't know why, but it did work in much earlier versions. In any case, it works fine now :)

@vroetman

@adam-olema I don't see how this would be possible considering that VBoxService was previously configured to use the --disable-automount flag

I think minikube is explicitly doing the mounting in this particular case, not the VirtualBox automounter.

@vroetman

@adam-olema, this is working for me now on RHEL-8. If it's working for you, I think you can close the issue.

@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned Won't fix, can't repro, duplicate, stale Feb 16, 2023
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
