
minikube startup hangs on "Starting VM..." #199

Closed
cyrille-leclerc opened this issue Feb 5, 2018 · 3 comments

Comments

cyrille-leclerc commented Feb 5, 2018

Problem Description

minikube start hangs on "Starting VM...".

With verbose logging enabled, minikube start -v=10 hangs after Error dialing TCP: dial tcp 192.168.64.7:22: getsockopt: operation timed out.

See detailed logs below.

Workaround

  • Kill the minikube start command
  • Verify that there are multiple name=minikube entries in /private/var/db/dhcpd_leases
  • Backup dhcpd_leases: sudo cp /private/var/db/dhcpd_leases /private/var/db/dhcpd_leases.save
  • Edit dhcpd_leases and delete all the name=minikube entries except the last entry (maybe all the entries can be deleted)
  • Start minikube: minikube start -v=10
  • If startup fails with Temporary Error: Could not find an IP address for *:*:*:*:*:*, then follow the workaround at https://github.com/jenkins-x/jx#minkube-and-hyperkit-could-not-find-an-ip-address
  • Verify that Kubernetes started successfully with minikube status (the steps above are collected as a shell sketch below)
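
Collected together, the workaround above looks roughly like the shell session below. This is a sketch, not an exact transcript: pkill is just one way to stop the hung command (Ctrl-C in the original terminal also works), and the stale lease blocks still have to be removed by hand in the editor step.

$ pkill -f "minikube start"                                               # stop the hung start command
$ grep -c 'name=minikube' /private/var/db/dhcpd_leases                    # count the minikube lease entries
$ sudo cp /private/var/db/dhcpd_leases /private/var/db/dhcpd_leases.save  # back up the lease file
$ sudo vi /private/var/db/dhcpd_leases                                    # delete the stale name=minikube blocks, keeping the most recent one
$ minikube start -v=10                                                    # retry with verbose logging
$ minikube status                                                         # confirm Kubernetes came up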

Detailed logs of the minikube start hang

$ minikube start -v=10

Aliases:
map[string]string{}
Override:
map[string]interface {}{"v":"10"}
PFlags:
map[string]viper.FlagValue{"bootstrapper":viper.pflagValue{flag:(*pflag.Flag)(0xc420164780)}, "feature-gates":viper.pflagValue{flag:(*pflag.Flag)(0xc420211040)}, "memory":viper.pflagValue{flag:(*pflag.Flag)(0xc4202103c0)}, "cache-images":viper.pflagValue{flag:(*pflag.Flag)(0xc4202110e0)}, "disable-driver-mounts":viper.pflagValue{flag:(*pflag.Flag)(0xc420210140)}, "extra-config":viper.pflagValue{flag:(*pflag.Flag)(0xc420211180)}, "xhyve-disk-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc420210780)}, "mount-string":viper.pflagValue{flag:(*pflag.Flag)(0xc4202100a0)}, "profile":viper.pflagValue{flag:(*pflag.Flag)(0xc420164320)}, "cpus":viper.pflagValue{flag:(*pflag.Flag)(0xc420210460)}, "disk-size":viper.pflagValue{flag:(*pflag.Flag)(0xc420210500)}, "iso-url":viper.pflagValue{flag:(*pflag.Flag)(0xc4202101e0)}, "kubernetes-version":viper.pflagValue{flag:(*pflag.Flag)(0xc420210e60)}, "mount":viper.pflagValue{flag:(*pflag.Flag)(0xc4200b3b80)}, "network-plugin":viper.pflagValue{flag:(*pflag.Flag)(0xc420210fa0)}, "nfs-share":viper.pflagValue{flag:(*pflag.Flag)(0xc420210820)}, "docker-env":viper.pflagValue{flag:(*pflag.Flag)(0xc420210960)}, "docker-opt":viper.pflagValue{flag:(*pflag.Flag)(0xc420210a00)}, "host-only-cidr":viper.pflagValue{flag:(*pflag.Flag)(0xc4202105a0)}, "insecure-registry":viper.pflagValue{flag:(*pflag.Flag)(0xc420210d20)}, "kvm-network":viper.pflagValue{flag:(*pflag.Flag)(0xc4202106e0)}, "nfs-shares-root":viper.pflagValue{flag:(*pflag.Flag)(0xc4202108c0)}, "registry-mirror":viper.pflagValue{flag:(*pflag.Flag)(0xc420210dc0)}, "apiserver-names":viper.pflagValue{flag:(*pflag.Flag)(0xc420210b40)}, "dns-domain":viper.pflagValue{flag:(*pflag.Flag)(0xc420210c80)}, "hyperv-virtual-switch":viper.pflagValue{flag:(*pflag.Flag)(0xc420210640)}, "keep-context":viper.pflagValue{flag:(*pflag.Flag)(0xc4200b2aa0)}, "vm-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc420210320)}, "apiserver-ips":viper.pflagValue{flag:(*pflag.Flag)(0xc420210be0)}, "apiserver-name":viper.pflagValue{flag:(*pflag.Flag)(0xc420210aa0)}, "container-runtime":viper.pflagValue{flag:(*pflag.Flag)(0xc420210f00)}}
Env:
map[string]string{}
Key/Value Store:
map[string]interface {}{}
Config:
map[string]interface {}{"ingress":true}
Defaults:
map[string]interface {}{"log_dir":"", "wantupdatenotification":true, "reminderwaitperiodinhours":24, "wantreporterror":false, "v":"0", "alsologtostderr":"false", "wantreporterrorprompt":true, "wantkubectldownloadmsg":true, "wantnonedriverwarning":true, "showdriverdeprecationnotification":true}
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:60298
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetState
(minikube) Calling .Start
(minikube) Using UUID 029e61c2-072c-11e8-9058-784f4386325d
(minikube) Generated MAC 3e:31:91:e5:fa:f9
(minikube) Starting with cmdline: loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube
(minikube) Calling .GetConfigRaw
(minikube) Calling .DriverName
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x1437850] 0x1437800  [] 0s} 192.168.64.7 22 <nil> <nil>}
About to run SSH command:
exit 0
Error dialing TCP: dial tcp 192.168.64.7:22: getsockopt: operation timed out

Sample /private/var/db/dhcpd_leases with duplicate name=minikube entries

{
	name=minikube
	ip_address=192.168.64.7
	hw_address=1,3e:31:91:e5:fa:f9
	identifier=1,3e:31:91:e5:fa:f9
	lease=0x5a759bb8
}
{
	name=minikube
	ip_address=192.168.64.6
	hw_address=1,a:ed:cc:a3:ec:23
	identifier=1,a:ed:cc:a3:ec:23
	lease=0x5a73595b
}
{
	name=minikube
	ip_address=192.168.64.5
	hw_address=1,b2:28:de:4d:c4:81
	identifier=1,b2:28:de:4d:c4:81
	lease=0x5a73483d
}
{
	name=minikube
	ip_address=192.168.64.4
	hw_address=1,1a:14:23:fa:30:4f
	identifier=1,1a:14:23:fa:30:4f
	lease=0x5a72e150
}
{
	name=minikube
	ip_address=192.168.64.3
	hw_address=1,f6:77:5:d1:af:31
	identifier=1,f6:77:5:d1:af:31
	lease=0x5a7247f9
}
{
	name=minikube
	ip_address=192.168.64.2
	hw_address=1,1e:18:2b:6:2e:e7
	identifier=1,1e:18:2b:6:2e:e7
	lease=0x5a722f5e
}
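
If you would rather not delete the blocks by hand, a small awk sketch along these lines strips every name=minikube block (the workaround above notes that removing all of the entries is probably fine too). It assumes the backup created earlier exists at /private/var/db/dhcpd_leases.save and rewrites the live lease file from it:

$ sudo awk '
    /^{/            { buf = $0 "\n"; drop = 0; next }   # start buffering a new lease block
    /name=minikube/ { drop = 1 }                         # flag blocks that belong to minikube
    /^}/            { buf = buf $0 "\n"; if (!drop) printf "%s", buf; next }   # emit non-minikube blocks
                    { buf = buf $0 "\n" }                 # accumulate the remaining lines of the block
  ' /private/var/db/dhcpd_leases.save | sudo tee /private/var/db/dhcpd_leases > /dev/null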
@cyrille-leclerc (Author) commented

Closing as we have a workaround. The cause seems to be a fragility in minikube.

@M-usamashahid commented

Any solution? I am also stuck on the same issue.

@rawlingsj (Member) commented

@M-usamashahid how are you installing it? We have some docs that might help: https://jenkins-x.io/getting-started/create-cluster/. FWIW, we tend to recommend that people not use minikube, as there tend to be lots of issues with networking and installation. If possible, GKE gives the best experience, and there is a free tier you can use to kick the tyres. There is also a tutorial: https://jenkins-x.io/about/tutorials/
