More documentation around vm-driver=none for local use #2575

Closed
jakefeasel opened this Issue Feb 24, 2018 · 40 comments

@jakefeasel
Contributor

jakefeasel commented Feb 24, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one): FEATURE REQUEST

Please provide the following details:

Environment:

  • Minikube version (use minikube version): v0.25.0
  • OS (e.g. from /etc/os-release): "Ubuntu 16.04.3 LTS (Xenial Xerus)"
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): none
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): n/a
  • Install tools:
  • Others: Docker version 18.02.0-ce

What happened: I spent a long time struggling to get things working properly with vm-driver=none.

What you expected to happen: To find documentation covering this very valuable use case.

Anything else we need to know:

I have a linux laptop that I want to use k8s on. I do not want to run a VM, since that is an inefficient use of my laptop's memory (always either too much or too little memory allocated for the given docker containers I want to run) and slower due to the extra virtualization. Also it shouldn't be necessary - I have a Linux environment, why should I have to run another VM just to run Linux? That's why I bought this laptop.

The use of vm-driver=none is hardly documented at all. What little there is does not seem to consider the value for developer machines.

Yesterday I filed #2571, but that was not the full story. I was misled by a red herring involving IP addresses, which led me to believe that the docker0 interface was the root of the problem. It turns out the real problem was that whatever IP address happened to be bound to my ethernet interface was used in the construction of the cluster. If that IP address changed for any reason (as often happens on laptops), the whole environment became inaccessible.

The workaround was not to specify a bridge IP for docker, as I had thought. Instead you need to start minikube like so:

minikube start --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost

And then go and edit ~/.kube/config, replacing the server IP that was detected from the main network interface with "localhost". For example, mine now looks like this:

- cluster:
    certificate-authority: /home/jfeasel/.minikube/ca.crt
    server: https://localhost:8443
  name: minikube

With this configuration, I can access my local cluster all of the time, even if the main network interface is disabled.
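If you'd rather script the hand edit above, a sed one-liner works too (a sketch; it assumes the server line has exactly the form shown in the snippet, and `kubectl config set-cluster minikube --server=https://localhost:8443` is an alternative that avoids editing the file directly):

```shell
# Rewrite the auto-detected server IP to localhost in the kubeconfig.
# Keep a backup in case the pattern doesn't match your file.
cfg="${KUBECONFIG:-$HOME/.kube/config}"
cp "$cfg" "$cfg.bak"
sed -i 's|server: https://[0-9.]*:8443|server: https://localhost:8443|' "$cfg"
grep 'server:' "$cfg"
```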

Also, note that "socat" must be installed in the Linux environment. See this issue for details: kubernetes/kubernetes#19765. I hit this when I tried to use helm to connect to my local cluster and got port-forwarding errors. Since I'm using Ubuntu, all I had to do was sudo apt-get install socat and then everything worked as expected.
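A quick pre-flight check for the socat requirement (Debian/Ubuntu shown; adjust for your package manager):

```shell
# Port-forwarding (kubectl port-forward, helm) needs socat on the host when
# using vm-driver=none. Check for it before it bites you:
if command -v socat >/dev/null 2>&1; then
  echo "socat found: $(command -v socat)"
else
  echo "socat missing; on Debian/Ubuntu: sudo apt-get install -y socat"
fi
```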

@harpratap

harpratap commented Aug 3, 2018

@jakefeasel Kinda unrelated, but do you know how to let minikube pull images I created locally? All the docs mention using minikube docker-env, but it returns none when using the none driver.

@ncabatoff

ncabatoff commented Aug 5, 2018

@harpratap see #2443, your locally available images are also available to kubernetes with vm-driver=none.

@harpratap

harpratap commented Aug 6, 2018

@ncabatoff Do I need to specify some additional parameter when starting minikube to let it access those locally created images? I get this error:
Failed to pull image "mylocalnginx": rpc error: code = Unknown desc = Error response from daemon: pull access denied for mylocalnginx, repository does not exist or may require 'docker login'

@ncabatoff

ncabatoff commented Aug 6, 2018

Not that I'm aware of. Note that you don't even have to pull them: with vm-driver=none it's the same docker daemon as on your desktop, i.e. the one you've been populating when you build images. So just skip the pull you're doing now, reference the images normally in your container manifests, and it should just work. At least that's been my experience.

@harpratap

harpratap commented Aug 6, 2018

@ncabatoff Yes that is what I did and I got the image pull error, didn't do any manual pulling of images.

Edit: Nevermind, got it working by setting imagePullPolicy to Never. Thanks!

Edit Again: Don't change the imagePullPolicy, you just need to tag your local images with a version other than latest, so I just built my image mylocalnginx:v1 and it worked.
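For reference, the reason the re-tag works: Kubernetes defaults imagePullPolicy to Always for the :latest tag (or no tag) and to IfNotPresent for any other tag, so a versioned tag lets the shared Docker daemon's local image be used without a registry pull. A minimal pod sketch using the locally built image from the comment above:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: mylocalnginx
spec:
  containers:
    - name: web
      # built locally, e.g. `docker build -t mylocalnginx:v1 .`;
      # a non-:latest tag means imagePullPolicy defaults to IfNotPresent
      image: mylocalnginx:v1
```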

@bennettellis

Contributor

bennettellis commented Aug 20, 2018

Seems like the right place to add a couple of items that I've discovered along the way with vm-driver=none:

  1. minikube mount is not supported.
  2. minikube ssh is not supported, and there doesn't seem to be any way to ssh to the minikube "node"?? This is particularly a problem when using something like the Terraform file provisioner to move content onto the node to share with pods. However, see item 3 below.
  3. Mounting a host volume into a pod mounts it from the underlying host, not from a separate minikube "node". Put another way: with vm-driver=none, the host behind minikube is, in effect, also the node.
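A sketch of point 3: with vm-driver=none, a hostPath volume exposes a directory of the underlying machine directly, so no minikube mount is needed (the paths below are illustrative, not from the thread):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: shared
          mountPath: /data          # path inside the container
  volumes:
    - name: shared
      hostPath:
        path: /home/me/shared       # a directory on the host (illustrative)
```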
@bw2

bw2 commented Sep 9, 2018

Also having issues similar to https://brokenco.de/2018/09/04/minikube-vmdriver-none.html

@tstromberg

Collaborator

tstromberg commented Oct 9, 2018

Hey y'all. I'd appreciate your feedback on the pull request for an initial set of documentation for --vm-driver=none: #3240

Preview link: https://github.com/kubernetes/minikube/blob/22afb79a37436b3d98171dd09212f193fb6f45ca/docs/vmdriver-none.md

Thanks!

@akaihola

akaihola commented Oct 11, 2018

@jakefeasel, in your issue description could you fix the typo in

minikube start--vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost

by adding a space between start and --vm-driver=none? Currently copy-pasting the command line will output unknown command "start--vm-driver=none".

Also, I'm getting an error creating file at /usr/bin/kubeadm. Should I run the command as root?

@akaihola

akaihola commented Oct 11, 2018

See also:

@cdancy

cdancy commented Oct 11, 2018

@tstromberg can we standardize on either None or none?

@tstromberg

Collaborator

tstromberg commented Oct 11, 2018

@cdancy - done. Now with lowercase.

@jakefeasel

Contributor

jakefeasel commented Oct 11, 2018

It appears my whole premise was flawed, based on the content of this PR. Still, it strikes me that a lot of people are similarly misguided, given the interest in this issue and the various blog posts and comments from people trying something similar. Is there any interest within the minikube team in intentionally supporting this use case?

@edthorne

edthorne commented Oct 12, 2018

I'm also curious. Operating from a Linux VDI, I'm unable to run the extra layer of virtualization needed for a nested virtual machine. The CI/CD instructions got me to the point where the cluster is running in Docker within the VDI. Some pods are running on the default bridge network and others on the default host network; I'm sure this split is the root of my dashboard and DNS issues.

Now that I've discovered @jakefeasel was the author of the note to specify a bridge network, I'd like to ask for some clarification on its meaning. Do I need to create a new bridge network for minikube? Do your start options above accomplish what the note intended?

@jakefeasel

Contributor

jakefeasel commented Oct 12, 2018

@edthorne the note I added back then was probably wrong. I wouldn't put much stock into it. Instead, I would refer to @tstromberg 's PR.

@sbernheim

sbernheim commented Oct 15, 2018

I wrote a quickstart guide for running Minikube on Linux with --vm-driver=none in MD format. I'd be happy to share it back to the Minikube project and/or reformat it according to your project needs.

The current version is here but obviously I'd remove the section on installing Gestalt before contributing it.

@akaihola

akaihola commented Oct 16, 2018

@sbernheim, I tried your quickstart guide and made a few observations which could be addressed in the guide:

Existing /etc/kubernetes/

You should make sure you don't have an existing /etc/kubernetes/ directory. I did, and minikube start would fail complaining about existing .conf files with wrong CA certs.

User owned .kube/

The location and permissions of the .kube/ directory become friendlier if I run:

CHANGE_MINIKUBE_NONE_USER=true sudo -E minikube start --vm-driver=none
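To check whether the env var took effect, you can verify that the config directories are owned by your user rather than root (a quick sanity check, not from the guide):

```shell
# With CHANGE_MINIKUBE_NONE_USER=true, minikube chowns its config dirs to the
# invoking user ($SUDO_USER) instead of leaving them owned by root. Verify:
for d in "$HOME/.kube" "$HOME/.minikube"; do
  [ -d "$d" ] && printf '%s owned by %s\n' "$d" "$(stat -c '%U' "$d")" || true
done
```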

Minikube status output

For newbies, it would be helpful to show the expected output for sudo minikube status.

Host IP changes

Whenever my laptop IP changes, minikube stops working, and to get it up again, I need to

  • stop minikube
  • delete the minikube cluster
  • rm -rf /etc/kubernetes
  • start minikube
  • recreate all my namespaces and re-apply all my configurations

Is there a more light-weight method for adapting minikube to the IP change?
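For what it's worth, the heavy-handed reset above can at least be wrapped up as one command (a sketch of the steps listed, in a function so nothing runs until you invoke it; the kubectl apply path is a placeholder for your own manifests):

```shell
# Heavy-handed recovery after the host IP changes. Nothing runs until you
# call reset_minikube_none yourself.
reset_minikube_none() {
  sudo minikube stop
  sudo minikube delete
  sudo rm -rf /etc/kubernetes
  sudo minikube start --vm-driver=none
  # then re-create namespaces and re-apply configs, e.g.:
  # kubectl apply -f ./manifests/   # placeholder path
}
```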

@slayerjain

slayerjain commented Oct 16, 2018

If I run minikube with --vm-driver=none, external DNS resolution doesn't work inside any containers in any of my pods. Is that normal?

@bennettellis

Contributor

bennettellis commented Oct 16, 2018

External DNS resolution works fine for me with --vm-driver=none. On numerous occasions I've had to download a package or pull some utility to do diagnosis within a pod, and I had no problem reaching the external world.

@slayerjain

slayerjain commented Oct 16, 2018

@bennettellis Did it work out of the box, or did you have to fiddle with the networking configuration? I used this to bootstrap:

sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost
@bennettellis

Contributor

bennettellis commented Oct 16, 2018

For me, out of the box. I've only done this on AWS Ubuntu 18 LTS EC2 instances, as well as in a VirtualBox instance running that same Ubuntu 18 LTS OS. So "out of the box" means those specific boxes, with pretty much the default networking (other than restricting ingress).

@slayerjain

slayerjain commented Oct 16, 2018

I tried this locally on a laptop. Anyone else facing the issue?

@bennettellis

Contributor

bennettellis commented Oct 16, 2018

@slayerjain the specific commands I used to install k8s and minikube:

apt-get -y update
apt-get -y upgrade
apt-get install -y docker.io unzip
service docker start
systemctl enable docker
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/v1.11.0/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 && chmod +x minikube && sudo mv minikube /usr/local/bin/

once that was done, started up minikube with:

sudo minikube start --vm-driver=none --memory=8192
@bhack

bhack commented Oct 16, 2018

I need to pass this flag to kubelet (see kubernetes/kubeadm#845), related to systemd's resolvconf handling; if not, the coredns pod crashes.

@slayerjain

slayerjain commented Oct 16, 2018

@bhack Thanks! How do you pass the flag through minikube?

@bennettellis

Contributor

bennettellis commented Oct 16, 2018

@slayerjain OS details for the host in question would help here.

Also, this should probably be a new issue, since this one is about documentation.

@bhack

bhack commented Oct 16, 2018

@slayerjain Yes, the upstream flag can be passed through minikube. Currently it's needed on Debian and Ubuntu with systemd resolvconf, but probably also on other Linux distros using systemd resolvconf.

@slayerjain

slayerjain commented Oct 16, 2018

I'm on Pop OS (Ubuntu 18.04 based). @bhack Do you mean something like this?

sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --kubernetes-version=v1.11.3
@bhack

bhack commented Oct 16, 2018

Yes
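Some background on why the flag helps (my understanding, not stated in the thread): on hosts using systemd-resolved, /etc/resolv.conf usually points at the local stub resolver on 127.0.0.53, which is unreachable from inside containers, while /run/systemd/resolve/resolv.conf holds the real upstream nameservers. You can check which setup your host has:

```shell
# On systemd-resolved hosts /etc/resolv.conf is typically a symlink to the
# stub resolver config (nameserver 127.0.0.53); the upstream servers live in
# /run/systemd/resolve/resolv.conf, which is what the kubelet flag points at.
ls -l /etc/resolv.conf
grep nameserver /run/systemd/resolve/resolv.conf 2>/dev/null \
  || echo "no systemd-resolved resolv.conf on this host"
```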

@slayerjain

slayerjain commented Oct 16, 2018

Using this, minikube doesn't actually start on my system. I get this:

sudo minikube start --extra-config=apiserver.service-node-port-range=80-32767 --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost --kubernetes-version=v1.11.3

[sudo] password for shubhamjain: 
Starting local Kubernetes v1.11.3 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
E1017 00:22:24.988553   12748 start.go:297] Error starting cluster:  kubeadm init error 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

 output: [init] using Kubernetes version: v1.11.3
[preflight] running pre-flight checks
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-ebtables]: ebtables not found in system path
	[WARNING FileExisting-ethtool]: ethtool not found in system path
I1017 00:22:24.799394   12879 kernel_validator.go:81] Validating kernel version
I1017 00:22:24.799495   12879 kernel_validator.go:96] Validating kernel config
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.1-ce. Max validated version: 17.03
	[WARNING Hostname]: hostname "minikube" could not be reached
	[WARNING Hostname]: hostname "minikube" lookup minikube on 127.0.0.53:53: server misbehaving
	[WARNING Port-10250]: Port 10250 is in use
[preflight] Some fatal errors occurred:
	[ERROR FileExisting-crictl]: crictl not found in system path
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
: running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: exit status 2
@bhack

bhack commented Oct 16, 2018

Do you have /run/systemd/resolve/resolv.conf on the host?

@slayerjain

slayerjain commented Oct 16, 2018

on my laptop (host) - yes.

@bhack

bhack commented Oct 16, 2018

I think your problem is unrelated to that flag. Check #3150

@Yensan

Yensan commented Oct 19, 2018

@jakefeasel couldn't agree more! It is painful to look for vm-driver=none in the docs.
I am comparing minikube start --vm-driver=none with microk8s. Any suggestions for me?

@bennettellis

Contributor

bennettellis commented Oct 19, 2018

@slayerjain Also check #2707

@tstromberg

Collaborator

tstromberg commented Oct 19, 2018

Initial doc has been submitted. Please send further PRs to improve it with any tips or tricks you might know of:

https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

Thanks!

@slayerjain

slayerjain commented Oct 21, 2018

Interesting. Btw, I just noticed that Minikube starts by default on boot. Is there any way to disable that?

@sbernheim

sbernheim commented Oct 22, 2018

@akaihola - Thanks for the feedback! As you suggested, I've added to the Installing Minikube on Linux guide: a message at the top referencing @tstromberg's cautionary MD file, indicating that developers should not use the --vm-driver=none option on a laptop or PC running Linux; a section on checking for existing minikube and kubectl configuration files and binaries; and the expected output of the minikube status command.

I'll need to test the CHANGE_MINIKUBE_NONE_USER option to understand what it does, but if it makes sense to incorporate it into the guide I'll probably add it later on. It doesn't seem strictly necessary, but it could make installation and management a bit easier for the user in the long run.

I'm not sure how you might adapt to an IP address change while running Minikube, but maybe try the minikube update-context command and see if that helps?

In my case, I'm running Linux within a VM rather than directly on my laptop, so the IP address doesn't tend to change while the VM is running. But it does change whenever I stop/start/restart the VM, and that command will reconnect local kubectl with the minikube cluster.

@RajashekarRajagopalan

RajashekarRajagopalan commented Oct 26, 2018

@harpratap can you tell me where you set the imagePullPolicy to Never?
