
Known QEMU2 Driver Issues #14146

Open
6 of 12 tasks
sharifelgamal opened this issue May 11, 2022 · 22 comments
Labels
co/qemu-driver QEMU related issues kind/improvement Categorizes issue or PR as related to improving upon a current feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments

@sharifelgamal
Collaborator

sharifelgamal commented May 11, 2022

Now that #13639 is (soon to be) merged, we need a place to aggregate all the known issues so we can eventually graduate it from experimental to recommended driver.

@sharifelgamal sharifelgamal added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. kind/improvement Categorizes issue or PR as related to improving upon a current feature. qemu-driver labels May 11, 2022
@afbjorklund
Collaborator

afbjorklund commented May 12, 2022

Networking is currently limited to "user" (slirp) mode. Proper networking on macOS is possible by using the VDE and socket drivers from Lima, but it is not implemented in the driver yet. It should be easy to add, though.

The default network should still be supported (through ssh tunneling, like the docker driver), but when using these other networks the machine gets its own IP, and direct access from the host is possible.

https://wiki.qemu.org/Documentation/Networking
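For reference, user-mode (slirp) networking only reaches the guest through explicit host port forwards; a minimal qemu invocation illustrating this (image path, memory size, and port choices are placeholders, not the driver's actual arguments):

```shell
# User-mode (slirp) networking: the guest gets no host-visible IP,
# so every service must be forwarded explicitly with hostfwd.
# Here host port 2222 maps to guest SSH (22) and 2376 to the docker daemon.
qemu-system-x86_64 \
  -accel kvm -m 2048 \
  -nic user,model=virtio-net-pci,hostfwd=tcp::2222-:22,hostfwd=tcp::2376-:2376 \
  -drive file=minikube.img,if=virtio
```

On macOS the accelerator would be hvf instead of kvm; everything not forwarded this way is unreachable from the host, which is exactly why the VDE/socket_vmnet networks matter.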

@sharifelgamal sharifelgamal self-assigned this May 12, 2022
@afbjorklund
Collaborator

afbjorklund commented May 12, 2022

The current tunneling of the docker port is kind of gross, but it was blocked on upstream not allowing any changes...

i.e. the docker port was hardcoded to 2376, so it currently does a kind of bait-and-switch on the URL instead.

Now that the driver is forked, we should remove that workaround and add tunneling of the minikube 8443 port.

#13934 (comment)
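The tunneling described above amounts to forwarding the API server port over the machine's ssh connection, roughly like this (the key path, ssh port, and user are illustrative assumptions, not what the driver actually does in code):

```shell
# Sketch: forward the minikube API server port (8443) over ssh,
# the same way the docker port (2376) is currently reached.
# Key path, user, and port 2222 are placeholders for illustration.
ssh -i ~/.minikube/machines/minikube/id_rsa \
    -p 2222 \
    -N -L 8443:localhost:8443 \
    docker@127.0.0.1
```

With that tunnel in place, the host-side kubeconfig can keep pointing at localhost:8443 even though the guest has no routable IP.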

@afbjorklund
Collaborator

afbjorklund commented May 14, 2022

Naming: once the dust settles, the "qemu2" driver should be aliased as the "qemu" driver, similar to how the "kvm2" driver eventually overtook the original "kvm" driver. The forks are more of a historical fact, and the machine upstream is now dead.

Note: the "qemu" and "kvm" drivers are for docker-machine, while the "qemu2" and "kvm2" drivers are for minikube.
The original drivers still work (with patches), but the docker-machine project has not been officially supported by Docker Inc. since 2019.

@afbjorklund
Collaborator

afbjorklund commented May 14, 2022

The QEMU2 driver is supposed to work on all three platforms, with hardware acceleration, but it needs more testing.
On Linux it uses KVM, on macOS it uses HVF and on Windows it uses WHPX. The qemu parameter is -accel.

anders@ubuntu:~$ uname
Linux
anders@ubuntu:~$ arch
x86_64
anders@ubuntu:~$ qemu-system-x86_64 -accel help
Accelerators supported in QEMU binary:
tcg
kvm
anders@ubuntu:~$ qemu-system-aarch64 -accel help
Accelerators supported in QEMU binary:
tcg

The above output means that amd64 is accelerated, but that arm64 is emulated (through "TCG", similar to Rosetta).
It is possible to run the virtual machine under emulation, but it is not supported: it is much slower (roughly 10x) and runs into timeouts.

https://wiki.qemu.org/Documentation/TCG
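The per-platform accelerator choice described above can be sketched like this (a plain shell illustration of how the -accel value would be picked, not the driver's actual Go code):

```shell
# Pick the hardware accelerator per host OS, falling back to TCG
# (pure emulation, much slower) when no hypervisor is available.
case "$(uname -s)" in
  Linux)        accel=kvm ;;   # requires /dev/kvm
  Darwin)       accel=hvf ;;   # Hypervisor.framework
  MINGW*|MSYS*) accel=whpx ;;  # Windows Hypervisor Platform
  *)            accel=tcg ;;   # emulation fallback
esac
echo "using -accel $accel"
```

The chosen value is then passed to the matching qemu-system binary, e.g. `qemu-system-aarch64 -accel hvf` on an Apple Silicon host.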

@afbjorklund
Collaborator

@sharifelgamal : issue with workaround for lots of memory: #14273

Should be possible to remove the workaround, on recent macOS

@mprimeaux

mprimeaux commented Jun 18, 2022

Where can I find the kube-registry-proxy image that backs the registry add-on? This one?

UPDATE: I think I found the answer.

@mprimeaux

mprimeaux commented Jun 18, 2022

I'm just familiarizing myself with the Minikube code base and would like to focus on one of the items in this issue (ARM supported registry).

Apologies up front if this has been answered but are there any reasons why we can't use this registry image given its support for various CPU architectures?

@sharifelgamal
Collaborator Author

I'm just familiarizing myself with the Minikube code base and would like to focus on one of the items in this issue (ARM supported registry).

Apologies up front if this has been answered but are there any reasons why we can't use this registry image given its support for various CPU architectures?

So we do use the multiarch registry dockerhub image for the registry addon. This issue is about the OTHER image we use for the addon, namely gcr.io/google_containers/kube-registry-proxy, seen here. I'm not totally sure what it does; what I do know is that it hasn't been updated or maintained in quite some time. I think our best bet is figuring out why we use it and what we can replace it with.

@mprimeaux

mprimeaux commented Jun 21, 2022

Thanks, @sharifelgamal. Let me dig into it and report back with a way forward. On the surface, it appears to be a registry proxy. Think of it as a reactive cache load pattern for container images. Anyway, I'll come back with my findings.

UPDATED: Yeah, the kube-registry-proxy hasn't been updated since March 2020.

@afbjorklund
Collaborator

afbjorklund commented Jun 22, 2022

Actually it hasn't been updated in 5 years (it is abandoned):

└─<missing> Virtual Size: 181.6 MB Tags: nginx:1.11.8
  └─60dc18151daf Virtual Size: 188.3 MB Tags: gcr.io/google_containers/kube-registry-proxy:0.4

https://hub.docker.com/layers/nginx/library/nginx/1.11.8/images/sha256-a39777a1a4a6ec8a91c978ded905cca10e6b105ba650040e16c50b3e157272c3?context=explore

See #10780 (comment) and cluster/addons/registry/README.md

@mprimeaux

mprimeaux commented Jul 12, 2022

Thanks @afbjorklund. Just getting back from a "deep rabbit hole" of Kubernetes "fun". I will review the above links.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 29, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 29, 2023
@mprimeaux

Ping

@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels May 24, 2023
@mprimeaux

mprimeaux commented May 24, 2023

Actually it hasn't been updated in 5 years. (it is abandoned)

└─<missing> Virtual Size: 181.6 MB Tags: nginx:1.11.8
  └─60dc18151daf Virtual Size: 188.3 MB Tags: gcr.io/google_containers/kube-registry-proxy:0.4

https://hub.docker.com/layers/nginx/library/nginx/1.11.8/images/sha256-a39777a1a4a6ec8a91c978ded905cca10e6b105ba650040e16c50b3e157272c3?context=explore

See #10780 (comment) and cluster/addons/registry/README.md

@afbjorklund I've had a chance to catch up on the links you referenced regarding the registry proxy. If I understand your preferred direction, it would be to have a registry add-on that supports TLS rather than resurrecting the previous localhost:5000 "hack"?

@mprimeaux

mprimeaux commented May 24, 2023

Well, I spent about 30 mins and was able to get the registry proxy working on linux/arm64 (Apple M1 and M2) with the "hacky way".

[screenshot: registry proxy running on arm64]

This is it running on an M1 Ultra with the QEMU driver.

The registry proxy container image is temporarily hosted with the change being limited to two lines in /pkg/minikube/assets/addons.go.

Perhaps a way to go about this is in phases with this first phase being to simply ensure the registry proxy works "as is" with a manifest supporting multiple CPU architectures (amd64, arm64, etc). Please let me know your thoughts.

@mprimeaux

mprimeaux commented May 25, 2023

Rather than using ttl.sh, I went ahead and published the existing kube registry proxy to docker hub at mprimeaux/kube-registry-proxy. The container image supports the following CPU architectures:

  • linux/amd64
  • linux/arm64
  • linux/s390x
  • linux/386
  • linux/arm/v7
  • linux/arm/v6

I will work on a GitHub Actions pipeline to build the image and publish it to GHCR, and I am happy to commit it to the minikube repository as an interim step toward a more permanent solution that doesn't depend on the localhost:5000 "hack". But at least this first step would align with how minikube currently works with the other drivers.
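For context, a multi-arch manifest like the one listed above is typically built and pushed with docker buildx along these lines (the image name and platform list mirror this comment; the builder name and Dockerfile location are assumptions):

```shell
# Build and push a multi-arch manifest for the kube-registry-proxy image.
# Platforms match the list above; "multiarch" is an arbitrary builder name.
docker buildx create --use --name multiarch 2>/dev/null || true
docker buildx build \
  --platform linux/amd64,linux/arm64,linux/s390x,linux/386,linux/arm/v7,linux/arm/v6 \
  -t mprimeaux/kube-registry-proxy:latest \
  --push .
```

A GitHub Actions pipeline would run essentially the same buildx invocation, usually via the docker/build-push-action with QEMU binfmt emulation set up for the non-native platforms.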

I suppose, alternatively, the following command works with the current Minikube release:

minikube addons enable registry --images="KubeRegistryProxy=mprimeaux/kube-registry-proxy:latest"

@torenware

I suppose, alternatively, the following command works with the current Minikube release:

minikube addons enable registry --images="KubeRegistryProxy=mprimeaux/kube-registry-proxy:latest"

@mprimeaux what does enabling the registry with this option do? If I do this, what are the steps to use the socket_vmnet driver with either amd64 or arm64 macOS? I'll happily test this if I have a bit more info as to what to do.

@mprimeaux

I believe this one can be checked?

@SALAH30

SALAH30 commented Dec 8, 2023

When I run: minikube tunnel

I get:

❌ Exiting due to MK_UNIMPLEMENTED: minikube tunnel is not currently implemented with the builtin network on QEMU

The same happens with: minikube service --url

Does anyone have similar issues?

@caerulescens

caerulescens commented Jan 28, 2024

@SALAH30 minikube tunnel does not work with the qemu2 driver on the builtin network; see this
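The workaround usually suggested is to start the cluster on a dedicated network instead of the builtin one (this sketch assumes socket_vmnet is installed on the macOS host, and the profile and service names are placeholders):

```shell
# With the builtin (user-mode) network, `minikube tunnel` and
# `minikube service --url` are unimplemented on the qemu2 driver;
# socket_vmnet gives the guest a routable IP so both commands work.
minikube start -p qemu-test --driver=qemu2 --network=socket_vmnet
minikube service --url my-service -p qemu-test
```

The `--network=socket_vmnet` flag only applies on macOS; on Linux the qemu2 driver has an equivalent option via the `builtin` vs. dedicated-network choice.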

@caerulescens

caerulescens commented Jan 31, 2024

Here's another bug to add to the multi-node known issues; it works for a single node but breaks for multi-node.
