
Make the available drivers section a bit clearer #9803

Open
afbjorklund opened this issue Nov 29, 2020 · 14 comments
Labels
help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/documentation Categorizes issue or PR as related to documentation. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@afbjorklund
Collaborator

afbjorklund commented Nov 29, 2020

Currently it is a bit unclear what you actually need:

Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMWare

This mixes containers, virtual machines, etc.

We need to make some kind of distinction between the local drivers (Docker and Podman) and the remote drivers (Docker and Podman).

And try to explain how this relates to the hypervisors (Hyperkit, Hyper-V, KVM/libvirt, Parallels, VMware) and the --vm=true flag.

Beyond separating Docker Engine and Docker Desktop.
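One way to illustrate the local/remote distinction (a hedged sketch; the exact flags and the remote-engine behavior may vary by minikube version, and flag names here are from memory rather than from this issue):

```shell
# Local container engine on the same machine:
minikube start --driver=docker
minikube start --driver=podman

# Remote engine: the docker driver talks to whatever engine the
# docker client is configured for, so pointing DOCKER_HOST at a
# remote machine would use that engine instead (experimental):
export DOCKER_HOST=ssh://user@remote-host
minikube start --driver=docker

# Remote bare metal over SSH uses a separate driver entirely:
minikube start --driver=ssh --ssh-ip-address=remote-host
```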

@afbjorklund afbjorklund added kind/documentation Categorizes issue or PR as related to documentation. priority/backlog Higher priority than priority/awaiting-more-evidence. labels Nov 29, 2020
@Aut0R3V
Contributor

Aut0R3V commented Dec 12, 2020

@afbjorklund I'd like to work on this.
/assign

@afbjorklund afbjorklund added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Dec 12, 2020
@afbjorklund
Collaborator Author

@Aut0R3V: sure thing, please make a PR

@priyawadhwa

Hey @Aut0R3V are you still working on this?

@medyagh
Member

medyagh commented Feb 27, 2021

@afbjorklund is this still needed, or did we do that already?

@afbjorklund
Collaborator Author

It's still the same blanket statement: https://minikube.sigs.k8s.io/docs/start/

All you need is Docker (or similarly compatible) container or a Virtual Machine environment, and Kubernetes is a single command away: minikube start

Container or virtual machine manager, such as: Docker, Hyperkit, Hyper-V, KVM, Parallels, Podman, VirtualBox, or VMWare

It doesn't say when you would use one over the other, and doesn't mention bare metal.

@afbjorklund
Collaborator Author

afbjorklund commented Feb 27, 2021

Will try to do a better grouping as part of the work for the KubeCon presentation.

  • Bare Metal
    • Running locally in VM: none/native
    • Running remotely in VM: generic/ssh
  • Container
    • Running in container: docker/podman (linux)
    • Running in container in VM: docker desktop (mac, win), eventually also podman desktop
  • Hypervisor
    • Native virtualization: xhyve/hyper-v/libvirt (including the hyperkit and kvm2 drivers)
    • Add-on virtualization: virtualbox (sort of including the parallels and vmware drivers)

And hopefully it can get a better graphic presentation later on, similar to #10353
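The grouping above could map to driver selection roughly like this (a sketch only; which drivers are available depends on the OS and the minikube version):

```shell
# Bare Metal:
sudo minikube start --driver=none     # local, runs directly on the host
minikube start --driver=ssh           # remote machine over SSH (generic)

# Container:
minikube start --driver=docker        # Docker Engine (Linux) or Docker Desktop (mac, win)
minikube start --driver=podman        # Podman (Linux)

# Hypervisor:
minikube start --driver=hyperkit      # macOS
minikube start --driver=hyperv        # Windows
minikube start --driver=kvm2          # Linux (libvirt)
minikube start --driver=virtualbox    # cross-platform add-on virtualization
```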

@afbjorklund afbjorklund assigned afbjorklund and unassigned Aut0R3V Feb 27, 2021
@afbjorklund
Collaborator Author

afbjorklund commented Feb 27, 2021

Running Docker and Podman in a VM on Linux is a bit of an experiment now, but would be included same as "Docker Desktop".

That is, the overall setup is the same: 1) start a VM 2) Run engine on VM 3) Create node in engine on VM 4) Connect remote

It's not clear when you would prefer this "double isolation" over the regular setup of just running kubeadm on the actual VM.

One reason would be to have the VM running in the background, to get a faster turnaround time by handling privileged containers.

But you could get a similar effect by having multiple VMs and doing suspend/resume on them, even if not handled by minikube yet.

And we still don't recommend running un-isolated on the laptop, the bare metal solution is supposed to have a dedicated node...


Traditional use cases:

  1. Run the minikube OS in a VirtualBox VM (Virtualization)

This is the "traditional" approach, using the same virtualization "API" on all platforms and with a controlled distribution.
For instance the setup that is being used in the Kubernetes course, https://www.edx.org/course/introduction-to-kubernetes

Cons: x86 only (need other hypervisor setup on ARM)

  2. Run the Ubuntu OS in a container on Docker (Containerization)

This is the "modern" approach, if you already have the Docker VM idling in the background but you don't want to manage it.
Currently this driver comes as HighlyPreferred, and on Linux it would not even need a VM but runs everything in a privileged container.

Cons: bad networking and limited control, for non-Linux

  3. Run the none driver as root in a dedicated VM (Bare Metal)

This is a "lightweight" approach, used for instance in a CI setup where you want to avoid nested virtualization.
It is also being used for the VM in the interactive Katacoda environment, https://kubernetes.io/docs/tutorials/hello-minikube/

Cons: terminal only (without extra setup, like tunneling)
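The three traditional use cases above, written as minikube invocations (a sketch; driver names as used in the releases discussed in this thread):

```shell
# 1. Virtualization: boot the minikube ISO in a VirtualBox VM
minikube start --driver=virtualbox

# 2. Containerization: run the node as a (privileged) Docker container
minikube start --driver=docker

# 3. Bare metal: run the components directly on a dedicated host, as root
sudo minikube start --driver=none
```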

@afbjorklund
Collaborator Author

I think it would also help if I finished the network diagrams: #4938

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jun 27, 2021
@sharifelgamal sharifelgamal removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jun 30, 2021
@sharifelgamal
Collaborator

This is still probably something we want to do.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 28, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 28, 2021
@sharifelgamal sharifelgamal added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Nov 3, 2021
@sharifelgamal
Collaborator

This is probably still something we want to do. Clearer documentation can only help.
