Make the available drivers section a bit clearer #9803
@afbjorklund I'd like to work on this.
@Aut0R3V: sure thing, please make a PR.
Hey @Aut0R3V, are you still working on this?
@afbjorklund is this still needed, or did we do that already?
It's still the same blanket statement: https://minikube.sigs.k8s.io/docs/start/
It doesn't say when you would use one over the other, and it doesn't mention bare metal.
Will try to do a better grouping as part of the work for the KubeCon presentation.
And hopefully it can get a better graphical presentation later on, similar to #10353
Running Docker and Podman in a VM on Linux is a bit of an experiment now, but it would be included the same as "Docker Desktop". That is, the overall setup is the same:

1) Start a VM
2) Run the engine on the VM
3) Create a node in the engine on the VM
4) Connect remotely

It's not clear when you would prefer this "double isolation" over the regular setup of just running kubeadm on the actual VM. One reason would be to have the VM running in the background, to get a faster turnaround time when handling privileged containers. But you could get a similar effect by having multiple VMs and doing suspend/resume on them, even if that is not handled by minikube yet. And we still don't recommend running un-isolated on the laptop; the bare metal solution is supposed to have a dedicated node...

Traditional use cases:
- Virtual machine: This is the "traditional" approach, using the same virtualization "API" on all platforms and with a controlled distribution. Cons: x86 only (need other hypervisor setup on ARM)
- Container: This is the "modern" approach, if you already have the Docker VM idling in the background but you don't want to manage it. Cons: bad networking and limited control, for non-Linux
- Bare metal: This is a "lightweight" approach, used for instance in a CI setup where you want to avoid nested virtualization. Cons: terminal only (without extra setup, like tunneling)

A rough sketch of the corresponding driver flags follows below.
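For illustration only, here is a minimal sketch of how the three approaches map onto `minikube start` driver selection. The `kvm2`, `docker`, and `none` drivers are real minikube drivers, but which ones are available depends on the platform, which is exactly what the docs section should spell out:

```shell
# "traditional": a dedicated VM through a hypervisor driver (Linux example)
minikube start --driver=kvm2

# "modern": reuse an existing container engine as the node's host
minikube start --driver=docker

# "lightweight": run the node directly on the host (bare metal), e.g. in CI
minikube start --driver=none
```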
I think it would also help if I finished the network diagrams: #4938
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-contributor-experience at kubernetes/community.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-contributor-experience at kubernetes/community.
This is still probably something we want to do. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle stale`
- Mark this issue or PR as rotten with `/lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:

- After 90d of inactivity, `lifecycle/stale` is applied
- After 30d of inactivity since `lifecycle/stale` was applied, `lifecycle/rotten` is applied
- After 30d of inactivity since `lifecycle/rotten` was applied, the issue is closed

You can:

- Mark this issue or PR as fresh with `/remove-lifecycle rotten`
- Close this issue or PR with `/close`
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
This is probably still something we want to do. Clearer documentation can only help. |
Currently it is a bit unclear what you actually need:

This mixes containers and virtual machines etc.

We need to make some kind of distinction between the local drivers (Docker and Podman) and the remote drivers (Docker and Podman). And try to explain how this relates to the hypervisors (Hyperkit, Hyper-V, KVM (Libvirt), Parallels, VMware) and the `--vm=true` flag, beyond separating Docker Engine and Docker Desktop.
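As a hedged sketch of that local vs. remote distinction (the `user@remote-host` value is a placeholder, and the remote case assumes the Docker CLI's `ssh://` transport is configured):

```shell
# local container driver: the node runs as a container in a local engine
minikube start --driver=podman

# remote container driver: point DOCKER_HOST at a remote engine first,
# then let the docker driver talk to it (user@remote-host is hypothetical)
export DOCKER_HOST=ssh://user@remote-host
minikube start --driver=docker

# VM-based setups: name a hypervisor driver explicitly, or use --vm=true
# to restrict the automatic driver selection to VM drivers only
minikube start --driver=hyperkit
minikube start --vm=true
```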