
Allow minikube to run in a non-default docker host #9463

Closed
sharifelgamal opened this issue Oct 13, 2020 · 7 comments · Fixed by #9510
Assignees
Labels
co/docker-driver Issues related to kubernetes in container kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.
Milestone

Comments

@sharifelgamal
Collaborator

docker-machine create --driver=virtualbox default
eval $(docker-machine env default)

This will create a docker host inside a VirtualBox VM, which currently causes minikube to fail if there is no docker daemon running directly on the user's machine.

@sharifelgamal sharifelgamal added kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. co/docker-driver Issues related to kubernetes in container labels Oct 13, 2020
@medyagh
Member

medyagh commented Oct 13, 2020

The design should be mindful of minikube docker-env:

medya@~ $ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.64.4:2376"
export DOCKER_CERT_PATH="/Users/medya/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

# To point your shell to minikube's docker-daemon, run:
# eval $(minikube -p minikube docker-env)

  • if MINIKUBE_ACTIVE_DOCKERD is set, it indicates that the user is in a shell with an active docker-env
  • we should add new env vars that capture the user's existing DOCKER_HOST-related variables when they are not the defaults;
    they should be recorded and emitted by the docker-env command

something like this

medya@~ $ minikube docker-env
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.64.4:2376"
export DOCKER_CERT_PATH="/Users/medya/.minikube/certs"
export MINIKUBE_ACTIVE_DOCKERD="minikube"

export MINIKUBE_EXISTING_DOCKER_TLS_VERIFY="..."
export MINIKUBE_EXISTING_DOCKER_HOST="...."
export MINIKUBE_EXISTING_DOCKER_CERT_PATH="...."
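The capture step above could be sketched as a small shell fragment. This is only an illustration, assuming the (not yet final) MINIKUBE_EXISTING_* variable names; the sample DOCKER_HOST value stands in for whatever the user's shell already had:

```shell
# Pretend this is the user's pre-existing docker-machine setting.
DOCKER_HOST="tcp://192.168.99.100:2376"

# Before docker-env overwrites DOCKER_HOST, record the existing value so a
# later `docker-env --unset` could restore it (variable name is an assumption).
if [ -n "${DOCKER_HOST-}" ]; then
  export MINIKUBE_EXISTING_DOCKER_HOST="$DOCKER_HOST"
fi

echo "$MINIKUBE_EXISTING_DOCKER_HOST"
```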

@ilya-zuyev
Contributor

/assign

@afbjorklund
Collaborator

afbjorklund commented Oct 17, 2020

Why is this something that we want to support? Why not just use our minikube VM instead of the boot2docker VM?

minikube start --driver=virtualbox
eval $(minikube docker-env)

At the very least, one would have to use --virtualbox-cpu-count=2 --virtualbox-memory=2048 to double the defaults...

⛔ Exiting due to RSRC_INSUFFICIENT_CORES: has less than 2 CPUs available, but Kubernetes requires at least 2 to be available

⛔ Requested memory allocation (985MB) is less than the recommended minimum 1907MB. Deployments may fail.

Note that Docker Machine is unmaintained and unsupported, and we have already forked libmachine for minikube

@afbjorklund
Collaborator

This (using Docker Machine) has a lot of the same problems as using Podman Remote: #8003

For instance, the ports we publish are not on localhost but on the boot2docker VM

So the 127.0.0.1 will need to be replaced with $(docker-machine ip default)

The same problem applies to any bind-mounted volumes (-v): they will also come from the VM
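As a concrete illustration, a port published as 127.0.0.1:32768 inside the boot2docker VM would have to be rewritten with the machine's address before the host can use it (192.168.99.100 here is only a stand-in for the output of `docker-machine ip default`):

```shell
MACHINE_IP="192.168.99.100"   # stand-in for $(docker-machine ip default)
ADDR="127.0.0.1:32768"        # as published by the docker driver

# Replace the loopback address with the VM's address.
HOST_ADDR=$(echo "$ADDR" | sed "s/127\.0\.0\.1/$MACHINE_IP/")
echo "$HOST_ADDR"
```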

@afbjorklund
Collaborator

afbjorklund commented Oct 22, 2020

Seems to be doable (in theory), otherwise.

$ docker-machine create --driver=virtualbox --virtualbox-cpu-count=2  --virtualbox-memory=2048 default
Running pre-create checks...
Creating machine...
[...]
Docker is up and running!
To see how to connect your Docker Client to the Docker Engine running on this virtual machine, run: docker-machine env default

At least when running minikube locally on it:

$ docker-machine ssh
   ( '>')
  /) TC (\   Core is distributed with ABSOLUTELY NO WARRANTY.
 (/-_--_-\)           www.tinycorelinux.net
docker@default:~$ minikube start                                                                                                                                                                          
😄  minikube v1.14.0 on Boot2docker 19.03.12 (vbox/amd64)
✨  Automatically selected the docker driver

🧯  The requested memory allocation of 1993MiB does not leave room for system overhead (total system memory: 1993MiB). You may face stability issues.
💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1993mb'

👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=1993MB) ...
🐳  Preparing Kubernetes v1.19.2 on Docker 19.03.8 ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass
💡  kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "minikube" by default
docker@default:~$ 
docker@default:~$ df -h                                                                                                                                                                                   
Filesystem                Size      Used Available Use% Mounted on
tmpfs                     1.8G    328.4M      1.4G  18% /
/dev/sda1                17.8G      2.5G     14.4G  15% /mnt/sda1
docker@default:~$ free -m                                                                                                                                                                                 
              total        used        free      shared  buff/cache   available
Mem:           1993         649          30         366        1313        1011
Swap:          1417           6        1410

Only listens on 127.0.0.1 though:

$ eval $(docker-machine env default)
$ docker ps
CONTAINER ID        IMAGE                                 COMMAND                  CREATED             STATUS              PORTS                                                                                                      NAMES
9ecf1ba19ee4        gcr.io/k8s-minikube/kicbase:v0.0.13   "/usr/local/bin/entr…"   10 seconds ago       Up 10 seconds        127.0.0.1:32771->22/tcp, 127.0.0.1:32770->2376/tcp, 127.0.0.1:32769->5000/tcp, 127.0.0.1:32768->8443/tcp   minikube

So not accessible from the host?
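One conceivable workaround (an untested assumption, not something minikube does today) would be an SSH tunnel through the boot2docker VM, reusing the key that docker-machine generated for the machine; the command is only composed and printed here:

```shell
MACHINE_IP="192.168.99.100"   # stand-in for $(docker-machine ip default)
API_PORT="32768"              # the published 8443 port from `docker ps` above

# Compose an SSH port-forward that would expose the apiserver on the host.
TUNNEL_CMD="ssh -i ~/.docker/machine/machines/default/id_rsa -N -L 8443:127.0.0.1:$API_PORT docker@$MACHINE_IP"
echo "$TUNNEL_CMD"
```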

@afbjorklund
Collaborator

To document this for others with the same question:

Why is this something that we want to support ? Why not just use our minikube VM, instead of the boot2docker VM ?

The end goal here is to have something similar to Docker Desktop, with a long-running VM for running the containers on...

So the VM would already be running when minikube start is run, but with an empty docker daemon or podman service.

This is similar to the "generic" driver, but with the added isolation of creating a container vs. running directly (bare metal machine)

i.e. generic is similar to none (but running remote), while this docker host would be similar to docker driver (but running remote)

none: run kubeadm directly on localhost
generic: run kubeadm remotely over ssh

docker (unix/fd): run container (runc) directly on localhost
docker (tcp/ssh): run container (runc) remotely over ssh

Never mind the interim layers between docker and runc, such as dockerd and containerd, those are only messengers.

The "podman" driver is very similar, except it only runs the daemon for remote connections - just the "conmon" for local.

@afbjorklund
Collaborator

We could use the same Vagrant setup for testing this remote driver as for testing the "generic" one?

Just have to add the provisioner, which basically just calls curl -sSL https://get.docker.com | sh -

Nowadays it is in the package repos, so one can just do "apt install docker.io" or "yum install podman"
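The provisioner script could amount to something like this sketch, preferring distribution packages over the vendor convenience script (package names as in the comment above; only the chosen command is printed, nothing is installed):

```shell
# Pick an install command based on which package manager is present.
if command -v apt-get >/dev/null 2>&1; then
  INSTALL_CMD="apt-get install -y docker.io"
elif command -v yum >/dev/null 2>&1; then
  INSTALL_CMD="yum install -y podman"
else
  INSTALL_CMD="curl -sSL https://get.docker.com | sh -"
fi
echo "$INSTALL_CMD"
```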

Examples here: https://boot2podman.github.io/2020/07/22/machine-replacement.html

You can still get it from the vendor of course, if you want a newer version than what's in the distribution...

There's no script for podman, just a readme: https://podman.io/getting-started/installation#linux-distributions

@ilya-zuyev ilya-zuyev added this to the v1.16.0-candidate milestone Nov 10, 2020