
tarball.Write panic: runtime error: invalid memory address or nil pointer dereference #6147

Closed
Sora233 opened this issue Dec 21, 2019 · 21 comments · Fixed by #6236
Labels
ev/panic: issues which contain a panic in minikube
kind/bug: Categorizes issue or PR as related to a bug.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
Milestone: v1.7.0

@Sora233

Sora233 commented Dec 21, 2019

The exact command to reproduce the issue:
minikube start --vm-driver=virtualbox (run as Administrator)

The full output of the command that failed:

  • minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
  • Selecting 'virtualbox' driver from user configuration (alternates: [hyperv])
  • Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB)...
    panic: runtime error: invalid memory address or nil pointer dereference
    [signal 0xc0000005 code=0x0 addr=0x28 pc=0x12ee976]

goroutine 103 [running]:
github.com/google/go-containerregistry/pkg/v1/tarball.Write(0x0, 0xc0003b4030, 0xa, 0xc0003b403b, 0xe, 0xc0003b404a, 0x7, 0x0, 0x0, 0xc00062bc58, ...)
/go/pkg/mod/github.com/google/go-containerregistry@v0.0.0-20180731221751-697ee0b3d46e/pkg/v1/tarball/write.go:57 +0x136
k8s.io/minikube/pkg/minikube/machine.CacheImage(0xc0003b4030, 0x21, 0xc0004b41e0, 0x46, 0x0, 0x0)
/app/pkg/minikube/machine/cache_images.go:395 +0x615
k8s.io/minikube/pkg/minikube/machine.CacheImages.func1(0xc0004aff68, 0x0)
/app/pkg/minikube/machine/cache_images.go:85 +0xed
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc00033db90, 0xc00033dbf0)
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57 +0x6b
created by golang.org/x/sync/errgroup.(*Group).Go
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:54 +0x6d
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x28 pc=0x12ee976]

goroutine 106 [running]:
github.com/google/go-containerregistry/pkg/v1/tarball.Write(0x0, 0xc00055e300, 0xa, 0xc00055e30b, 0x7, 0xc00055e313, 0x5, 0x0, 0x0, 0xc000699c58, ...)
/go/pkg/mod/github.com/google/go-containerregistry@v0.0.0-20180731221751-697ee0b3d46e/pkg/v1/tarball/write.go:57 +0x136
k8s.io/minikube/pkg/minikube/machine.CacheImage(0xc00055e300, 0x18, 0xc000598180, 0x3d, 0x0, 0x0)
/app/pkg/minikube/machine/cache_images.go:395 +0x615
k8s.io/minikube/pkg/minikube/machine.CacheImages.func1(0xc0004adf68, 0x0)
/app/pkg/minikube/machine/cache_images.go:85 +0xed
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc00033db90, 0xc00033dc80)
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57 +0x6b
created by golang.org/x/sync/errgroup.(*Group).Go
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:54 +0x6d

The output of the minikube logs command:


X command runner: getting ssh client for bootstrapper: Error dialing tcp via ssh client: dial tcp 127.0.0.1:22: connectex: No connection could be made because the target machine actively refused it.

The operating system version:
Windows 10 Pro 1903
VirtualBox 6.1.0 platform packages with VirtualBox 6.1.0 Oracle VM VirtualBox Extension Pack
Hyper-V Requirements: VM Monitor Mode Extensions: Yes
Virtualization Enabled In Firmware: Yes
Second Level Address Translation: Yes
Data Execution Prevention Available: Yes

@guhuajun

guhuajun commented Dec 24, 2019

I can confirm this issue on my test box with minikube 1.6.2, Windows 10 1903 and VirtualBox 6.0.14 installed.

$ minikube_start_vbox.bat

minikube start -v 3 --vm-driver virtualbox --cpus 4 --memory 4096 --docker-env HTTP_PROXY=http://192.168.0.12:1080 --docker-env HTTPS_PROXY=http://192.168.0.12:1080 --docker-env NO_PROXY=192.168.99.100
* minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18362 Build 18362
* Selecting 'virtualbox' driver from user configuration (alternates: [hyperv])
* Creating virtualbox VM (CPUs=4, Memory=4096MB, Disk=20000MB)...
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x28 pc=0x12ee976]
                                                                                                                                                                                 
goroutine 30 [running]:                                                                                                                                                          
github.com/google/go-containerregistry/pkg/v1/tarball.Write(0x0, 0xc000201ae0, 0xa, 0xc000201aeb, 0x4, 0xc000201af0, 0x7, 0x0, 0x0, 0xc0002afc58, ...)                           
        /go/pkg/mod/github.com/google/go-containerregistry@v0.0.0-20180731221751-697ee0b3d46e/pkg/v1/tarball/write.go:57 +0x136                                                  
k8s.io/minikube/pkg/minikube/machine.CacheImage(0xc000201ae0, 0x17, 0xc0006ae300, 0x3e, 0x0, 0x0)                                                                                
        /app/pkg/minikube/machine/cache_images.go:395 +0x615                                                                                                                     
k8s.io/minikube/pkg/minikube/machine.CacheImages.func1(0xc0006a3f68, 0x0)                                                                                                        
        /app/pkg/minikube/machine/cache_images.go:85 +0xed                                                                                                                       
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc000452a20, 0xc000452b40)                                                                                                         
        /go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57 +0x6b                                                                           
created by golang.org/x/sync/errgroup.(*Group).Go                                                                                                                                
        /go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:54 +0x6d                                                                           

@ericsyh

ericsyh commented Jan 2, 2020

I got the same issue; detailed info below:

The exact command to reproduce the issue:

minikube start

The full output of the command that failed:

* Selecting 'hyperv' driver from user configuration (alternates: [])
* Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB)...
panic: runtime error: invalid memory address or nil pointer dereference
[signal 0xc0000005 code=0x0 addr=0x28 pc=0x12ee976]

The output of the minikube logs command:

E0102 17:47:48.589060    7472 logs.go:175] Failed to list containers for "kube-apiserver": docker ListContainers. : docker ps -a --filter=name=k8s_kube-apiserver --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.613377    7472 logs.go:175] Failed to list containers for "coredns": docker ListContainers. : docker ps -a --filter=name=k8s_coredns --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.636388    7472 logs.go:175] Failed to list containers for "kube-scheduler": docker ListContainers. : docker ps -a --filter=name=k8s_kube-scheduler --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.660377    7472 logs.go:175] Failed to list containers for "kube-proxy": docker ListContainers. : docker ps -a --filter=name=k8s_kube-proxy --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.681376    7472 logs.go:175] Failed to list containers for "kube-addon-manager": docker ListContainers. : docker ps -a --filter=name=k8s_kube-addon-manager --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.703396    7472 logs.go:175] Failed to list containers for "kubernetes-dashboard": docker ListContainers. : docker ps -a --filter=name=k8s_kubernetes-dashboard --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.725378    7472 logs.go:175] Failed to list containers for "storage-provisioner": docker ListContainers. : docker ps -a --filter=name=k8s_storage-provisioner --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
E0102 17:47:48.747378    7472 logs.go:175] Failed to list containers for "kube-controller-manager": docker ListContainers. : docker ps -a --filter=name=k8s_kube-controller-manager --format="{{.ID}}": Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
* ==> Docker <==
* -- Logs begin at Thu 2020-01-02 09:36:38 UTC, end at Thu 2020-01-02 17:36:36 UTC. --
* -- No entries --
*
* ==> container status <==
E0102 17:47:50.785127    7472 logs.go:153] command /bin/bash -c "sudo crictl ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo crictl ps -a || sudo docker ps -a": Process exited with status 1
stdout:

stderr:
time="2020-01-02T09:47:50Z" level=fatal msg="failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded"
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
 output: "\n** stderr ** \ntime=\"2020-01-02T09:47:50Z\" level=fatal msg=\"failed to connect: failed to connect, make sure you are running as root and the runtime has been started: context deadline exceeded\"\nCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?\n\n** /stderr **"
*
* ==> dmesg <==
* [Jan 2 09:36] You have booted with nomodeset. This means your GPU drivers are DISABLED
* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
* [  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
* [  +0.027870] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
* [  +0.027812] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
* [  +0.008983] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
*               * this clock source is slow. Consider trying other clock sources
* [ +22.133526] Unstable clock detected, switching default tracing clock to "global"
*               If you want to keep using the local clock, then add:
*                 "trace_clock=local"
*               on the kernel command line
* [  +0.000019] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
* [  +0.504337] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
* [  +0.694093] systemd[1]: Failed to bump fs.file-max, ignoring: Invalid argument
* [  +0.008300] systemd-fstab-generator[1222]: Ignoring "noauto" for root device
* [  +0.003293] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:12 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
* [  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
* [  +3.774974] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
* [  +0.295750] vboxguest: loading out-of-tree module taints kernel.
* [  +0.004303] vboxguest: PCI device not found, probably running on physical hardware.
* [Jan 2 09:38] NFSD: Unable to end grace period: -110
*
* ==> kernel <==
*  09:47:50 up 11 min,  0 users,  load average: 0.00, 0.02, 0.01
* Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
* PRETTY_NAME="Buildroot 2019.02.7"
*
* ==> kubelet <==
* -- Logs begin at Thu 2020-01-02 09:36:38 UTC, end at Thu 2020-01-02 17:36:36 UTC. --
* -- No entries --
*
X Error getting machine logs: unable to fetch logs for: container status

The operating system version:

  • Windows 10 Pro 1909

@DeepLJH0001

the same problem

@NicoKam

NicoKam commented Jan 7, 2020

Same problem on my Mac.

@tstromberg tstromberg changed the title panic: runtime error when run "minikube start" on Windows10 tarball.Write panic: runtime error: invalid memory address or nil pointer dereference Jan 8, 2020
@tstromberg tstromberg added this to the v1.7.0 milestone Jan 8, 2020
@tstromberg tstromberg added ev/panic issues which contain a panic in minikube kind/bug Categorizes issue or PR as related to a bug. labels Jan 8, 2020
@tstromberg
Contributor

The line in our code is:

err = tarball.Write(tag, img, &tarball.WriteOptions{}, f)

The line in tarball.Write is:

cfgName, err := img.ConfigName()

This says to me that img is probably nil in this case. I'll add a nil check to avoid the panic. In the meantime, could someone provide the output of:

minikube start -v=1 --alsologtostderr

The additional logging would help figure out which image is failing to be looked up.
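
For illustration, here is a minimal sketch of the kind of nil guard described above. It is not the exact patch from #6236; cacheImage is an assumed name and the "nil image for ..." error text is modeled on the logs later in this thread, but the tarball.Write call matches the line quoted above:

package machine

import (
	"fmt"
	"os"

	"github.com/google/go-containerregistry/pkg/name"
	v1 "github.com/google/go-containerregistry/pkg/v1"
	"github.com/google/go-containerregistry/pkg/v1/tarball"
)

// cacheImage writes img to dst as a tarball. The nil check up front is
// the point: without it, tarball.Write calls img.ConfigName() on a nil
// image and panics exactly as in the traces above.
func cacheImage(tag name.Tag, img v1.Image, dst string) error {
	if img == nil {
		return fmt.Errorf("nil image for %s", tag)
	}
	f, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer f.Close()
	return tarball.Write(tag, img, &tarball.WriteOptions{}, f)
}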

@tstromberg tstromberg added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Jan 8, 2020
@tstromberg
Contributor

If anyone runs into this, could you confirm if this binary fixes it:

https://storage.googleapis.com/minikube-builds//6236/minikube-darwin-amd64

It would be nice to see the output of minikube start with it regardless.

@ericsyh

ericsyh commented Jan 9, 2020

@tstromberg The binary link should be https://storage.googleapis.com/minikube-builds/6236/minikube-darwin-amd64 (the posted URL has a doubled slash). Please update it :)
BTW, could you also build a binary for Windows?

@ericsyh

ericsyh commented Jan 10, 2020

@tstromberg I tested your binary, but it did not fix the problem.
PTAL at the output of the test below.

The current binary

Minikube version:

minikube version: v1.6.2
commit: 54f28ac5d3a815d1196cd5d57d707439ee4bb392

The full output of the command that failed:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x4ecd6af]

goroutine 13 [running]:
github.com/google/go-containerregistry/pkg/v1/tarball.Write(0x0, 0xc000772080, 0xa, 0xc00077208b, 0xa, 0xc000772096, 0x7, 0x0, 0x0, 0xc0007a9c68, ...)
	/private/tmp/minikube-20191220-84512-5zg1fl/.brew_home/go/pkg/mod/github.com/google/go-containerregistry@v0.0.0-20180731221751-697ee0b3d46e/pkg/v1/tarball/write.go:57 +0x12f
k8s.io/minikube/pkg/minikube/machine.CacheImage(0xc000772080, 0x1d, 0xc0005571d0, 0x43, 0x0, 0x0)
	/private/tmp/minikube-20191220-84512-5zg1fl/pkg/minikube/machine/cache_images.go:395 +0x5df
k8s.io/minikube/pkg/minikube/machine.CacheImages.func1(0xc0002e3768, 0x0)
	/private/tmp/minikube-20191220-84512-5zg1fl/pkg/minikube/machine/cache_images.go:85 +0x124
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc000776120, 0xc000776150)
	/private/tmp/minikube-20191220-84512-5zg1fl/.brew_home/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
	/pr

The output of the minikube logs command:

command runner: getting ssh client for bootstrapper: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running

The #6236 build binary

Minikube version:

minikube version: v1.6.2
commit: 6f2eb27c77016122b74589ab5a02d77771123d61

The full output of the command that failed:

😄  minikube v1.6.2 on Darwin 10.13.6
✨  Selecting 'vmware' driver from user configuration (alternates: [hyperkit vmwarefusion docker])
🔥  Creating vmware VM (CPUs=2, Memory=4096MB, Disk=20000MB)...
E0110 14:01:23.619903   39290 cache.go:60] save image to file "k8s.gcr.io/coredns:1.6.5" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5" failed: nil image for k8s.gcr.io/coredns:1.6.5: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.623207   39290 cache.go:60] save image to file "k8s.gcr.io/kube-apiserver:v1.17.0" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0" failed: nil image for k8s.gcr.io/kube-apiserver:v1.17.0: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.619902   39290 cache.go:60] save image to file "k8s.gcr.io/kube-proxy:v1.17.0" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0" failed: nil image for k8s.gcr.io/kube-proxy:v1.17.0: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.624650   39290 cache.go:60] save image to file "k8s.gcr.io/kube-controller-manager:v1.17.0" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0" failed: nil image for k8s.gcr.io/kube-controller-manager:v1.17.0: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.626047   39290 cache.go:60] save image to file "k8s.gcr.io/pause:3.1" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/pause_3.1" failed: nil image for k8s.gcr.io/pause:3.1: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.626997   39290 cache.go:60] save image to file "k8s.gcr.io/kube-scheduler:v1.17.0" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0" failed: nil image for k8s.gcr.io/kube-scheduler:v1.17.0: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.627775   39290 cache.go:60] save image to file "k8s.gcr.io/etcd:3.4.3-0" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0" failed: nil image for k8s.gcr.io/etcd:3.4.3-0: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
E0110 14:01:23.628323   39290 cache.go:60] save image to file "gcr.io/k8s-minikube/storage-provisioner:v1.8.1" -> "/Users/ericsyh/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1" failed: nil image for gcr.io/k8s-minikube/storage-provisioner:v1.8.1: Get https://gcr.io/v2/: dial tcp 108.177.97.82:443: i/o timeout
E0110 14:01:23.629745   39290 cache.go:60] save image to file "k8s.gcr.io/kube-addon-manager:v9.0.2" -> "/Users/ericsyh/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2" failed: nil image for k8s.gcr.io/kube-addon-manager:v9.0.2: Get https://k8s.gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout

💣  Unable to start VM. Please investigate and run 'minikube delete' if possible: create: Error creating machine: Error in driver during machine creation: Machine didn't return an IP after 120 seconds, aborting

The output of the minikube logs command:

command runner: getting ssh client for bootstrapper: Error creating new ssh host from driver: Error getting ssh host name for driver: Host is not running

@saberblue

@ericsyh check your HTTP_PROXY and HTTPS_PROXY environment variables.
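
A hedged reading of this suggestion: minikube's own image pulls (the CacheImage calls in the traces above) follow the standard Go proxy environment variables, while --docker-env passes the proxy to the Docker daemon inside the VM. The proxy address below reuses guhuajun's value from earlier in the thread and is only a placeholder:

export HTTP_PROXY=http://192.168.0.12:1080
export HTTPS_PROXY=http://192.168.0.12:1080
minikube start --docker-env HTTP_PROXY=$HTTP_PROXY --docker-env HTTPS_PROXY=$HTTPS_PROXY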

@NoSugarCoffee

NoSugarCoffee commented Jan 17, 2020

> minikube start -v=1 --alsologtostderr

➜ ~ minikube start -v=1 --alsologtostderr
I0117 10:41:36.910403 16536 notify.go:125] Checking for updates...
I0117 10:41:37.384089 16536 start.go:255] hostinfo: {"hostname":"dlldeMacBook-Pro.local","uptime":607356,"bootTime":1578621541,"procs":447,"os":"darwin","platform":"darwin","platformFamily":"","platformVersion":"10.14.6","kernelVersion":"18.7.0","virtualizationSystem":"","virtualizationRole":"","hostid":"982f17b3-0252-37fb-9869-88b3b1c77335"}
W0117 10:41:37.384231 16536 start.go:263] gopshost.Virtualization returned error: not implemented yet
😄 minikube v1.6.2 on Darwin 10.14.6
I0117 10:41:37.385712 16536 start.go:555] selectDriver: flag="", old=&{minikube false false https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso 2000 2 20000 virtualbox docker [] [] [] [] 192.168.99.1/24 default qemu:///system false false [] false [] /nfsshares false false true {v1.17.0 8443 minikube minikubeCA [] [] cluster.local docker 10.96.0.0/12 [] true false} virtio virtio}
I0117 10:41:37.386305 16536 global.go:60] Querying for installed drivers using PATH=/Users/dll/.minikube/bin:/Users/dll/prj/goPrj/bin:/usr/local/protobuf/bin:/Users/dll/opt/phabricator/arcanist/bin/:/Applications/IntelliJ IDEA.app/Contents/plugins/maven/lib/maven3/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/go/bin
I0117 10:41:37.412192 16536 global.go:68] hyperkit priority: 6, state: {Installed:true Healthy:true Error:the installed hyperkit version (0.20180403) is older than the minimum recommended version (0.20190201) Fix:Run 'brew upgrade hyperkit' Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/hyperkit/}
I0117 10:41:37.412350 16536 global.go:68] parallels priority: 5, state: {Installed:false Healthy:false Error:exec: "docker-machine-driver-parallels": executable file not found in $PATH Fix:Install docker-machine-driver-parallels Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/parallels/}
I0117 10:41:37.472455 16536 global.go:68] virtualbox priority: 4, state: {Installed:true Healthy:true Error: Fix: Doc:}
I0117 10:41:37.472622 16536 global.go:68] vmware priority: 5, state: {Installed:false Healthy:false Error:exec: "docker-machine-driver-vmware": executable file not found in $PATH Fix:Install docker-machine-driver-vmware Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmware/}
I0117 10:41:37.473128 16536 global.go:68] vmwarefusion priority: 3, state: {Installed:false Healthy:false Error:exec: "vmrun": executable file not found in $PATH
vmrun path check
k8s.io/minikube/pkg/minikube/registry/drvs/vmwarefusion.status
/app/pkg/minikube/registry/drvs/vmwarefusion/vmwarefusion.go:63
k8s.io/minikube/pkg/minikube/registry.Available
/app/pkg/minikube/registry/global.go:67
k8s.io/minikube/pkg/minikube/driver.Choices
/app/pkg/minikube/driver/driver.go:98
k8s.io/minikube/cmd/minikube/cmd.selectDriver
/app/cmd/minikube/cmd/start.go:556
k8s.io/minikube/cmd/minikube/cmd.runStart
/app/cmd/minikube/cmd/start.go:296
github.com/spf13/cobra.(*Command).execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:830
github.com/spf13/cobra.(*Command).ExecuteC
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:914
github.com/spf13/cobra.(*Command).Execute
/go/pkg/mod/github.com/spf13/cobra@v0.0.5/command.go:864
k8s.io/minikube/cmd/minikube/cmd.Execute
/app/cmd/minikube/cmd/root.go:106
main.main
/app/cmd/minikube/main.go:66
runtime.main
/usr/local/go/src/runtime/proc.go:203
runtime.goexit
/usr/local/go/src/runtime/asm_amd64.s:1357 Fix:Install VMWare Fusion Doc:https://minikube.sigs.k8s.io/docs/reference/drivers/vmwarefusion/}
I0117 10:41:37.473475 16536 driver.go:109] requested: ""
I0117 10:41:37.473489 16536 driver.go:132] "hyperkit" has a higher priority (6) than "" (0)
I0117 10:41:37.473501 16536 driver.go:146] Picked: hyperkit
I0117 10:41:37.473512 16536 driver.go:147] Alternatives: [virtualbox]
I0117 10:41:37.473741 16536 driver.go:109] requested: "virtualbox"
I0117 10:41:37.473757 16536 driver.go:132] "hyperkit" has a higher priority (6) than "" (0)
I0117 10:41:37.473767 16536 driver.go:113] choosing "virtualbox" because it was requested
I0117 10:41:37.473774 16536 driver.go:146] Picked: virtualbox
I0117 10:41:37.473781 16536 driver.go:147] Alternatives: [hyperkit]
✨ Selecting 'virtualbox' driver from existing profile (alternates: [hyperkit])
I0117 10:41:37.473888 16536 start.go:297] selected driver: virtualbox
I0117 10:41:37.473909 16536 start.go:585] validating driver "virtualbox" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso Memory:2000 CPUs:2 DiskSize:20000 VMDriver:virtualbox ContainerRuntime:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false Downloader: DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true KubernetesConfig:{KubernetesVersion:v1.17.0 NodeIP: NodePort:8443 NodeName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false} HostOnlyNicType:virtio NatNicType:virtio}
I0117 10:41:37.512982 16536 start.go:591] status for virtualbox: {Installed:true Healthy:true Error: Fix: Doc:}
I0117 10:41:37.514287 16536 downloader.go:60] Not caching ISO, using https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso
I0117 10:41:37.514749 16536 profile.go:89] Saving config to /Users/dll/.minikube/profiles/minikube/config.json ...
I0117 10:41:37.515223 16536 cluster.go:102] Skipping create...Using existing machine configuration
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
I0117 10:41:37.515548 16536 cache_images.go:347] CacheImage: kubernetesui/metrics-scraper:v1.0.2 -> /Users/dll/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2
I0117 10:41:37.515605 16536 main.go:110] libmachine: COMMAND: /usr/local/bin/VBoxManage showvminfo minikube --machinereadable
I0117 10:41:37.515635 16536 cache_images.go:347] CacheImage: k8s.gcr.io/kube-proxy:v1.17.0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.17.0
I0117 10:41:37.515665 16536 cache_images.go:347] CacheImage: k8s.gcr.io/etcd:3.4.3-0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/etcd_3.4.3-0
I0117 10:41:37.516322 16536 cache_images.go:412] retrieving image: k8s.gcr.io/kube-proxy:v1.17.0
I0117 10:41:37.516332 16536 cache_images.go:412] retrieving image: index.docker.io/kubernetesui/metrics-scraper:v1.0.2
I0117 10:41:37.516556 16536 cache_images.go:412] retrieving image: k8s.gcr.io/etcd:3.4.3-0
I0117 10:41:37.526610 16536 cache_images.go:347] CacheImage: k8s.gcr.io/pause:3.1 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/pause_3.1
I0117 10:41:37.526712 16536 cache_images.go:412] retrieving image: k8s.gcr.io/pause:3.1
I0117 10:41:37.526991 16536 cache_images.go:347] CacheImage: k8s.gcr.io/kube-addon-manager:v9.0.2 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-addon-manager_v9.0.2
I0117 10:41:37.527022 16536 cache_images.go:412] retrieving image: k8s.gcr.io/kube-addon-manager:v9.0.2
I0117 10:41:37.527133 16536 cache_images.go:347] CacheImage: gcr.io/k8s-minikube/storage-provisioner:v1.8.1 -> /Users/dll/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v1.8.1
I0117 10:41:37.527165 16536 cache_images.go:412] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v1.8.1
I0117 10:41:37.527264 16536 cache_images.go:347] CacheImage: kubernetesui/dashboard:v2.0.0-beta8 -> /Users/dll/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta8
I0117 10:41:37.527298 16536 cache_images.go:412] retrieving image: index.docker.io/kubernetesui/dashboard:v2.0.0-beta8
I0117 10:41:37.529013 16536 cache_images.go:347] CacheImage: k8s.gcr.io/kube-scheduler:v1.17.0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.17.0
I0117 10:41:37.530317 16536 cache_images.go:412] retrieving image: k8s.gcr.io/kube-scheduler:v1.17.0
I0117 10:41:37.529028 16536 cache_images.go:347] CacheImage: k8s.gcr.io/kube-controller-manager:v1.17.0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.17.0
I0117 10:41:37.530500 16536 cache_images.go:412] retrieving image: k8s.gcr.io/kube-controller-manager:v1.17.0
I0117 10:41:37.529041 16536 cache_images.go:347] CacheImage: k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0
I0117 10:41:37.530650 16536 cache_images.go:412] retrieving image: k8s.gcr.io/kube-apiserver:v1.17.0
I0117 10:41:37.529050 16536 cache_images.go:347] CacheImage: k8s.gcr.io/coredns:1.6.5 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/coredns_1.6.5
I0117 10:41:37.531956 16536 cache_images.go:412] retrieving image: k8s.gcr.io/coredns:1.6.5
I0117 10:41:37.547969 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/kube-apiserver:v1.17.0: Error response from daemon: reference does not exist
I0117 10:41:37.555615 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/pause:3.1: Error response from daemon: reference does not exist
I0117 10:41:37.555810 16536 cache_images.go:420] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v1.8.1: Error response from daemon: reference does not exist
I0117 10:41:37.555885 16536 cache_images.go:420] daemon lookup for index.docker.io/kubernetesui/dashboard:v2.0.0-beta8: Error response from daemon: reference does not exist
I0117 10:41:37.556316 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/kube-addon-manager:v9.0.2: Error response from daemon: reference does not exist
I0117 10:41:37.556500 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/kube-proxy:v1.17.0: Error response from daemon: reference does not exist
I0117 10:41:37.558886 16536 cache_images.go:420] daemon lookup for index.docker.io/kubernetesui/metrics-scraper:v1.0.2: Error response from daemon: reference does not exist
I0117 10:41:37.559289 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/kube-controller-manager:v1.17.0: Error response from daemon: reference does not exist
I0117 10:41:37.559488 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/kube-scheduler:v1.17.0: Error response from daemon: reference does not exist
I0117 10:41:37.559634 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/etcd:3.4.3-0: Error response from daemon: reference does not exist
I0117 10:41:37.559747 16536 cache_images.go:420] daemon lookup for k8s.gcr.io/coredns:1.6.5: Error response from daemon: reference does not exist
I0117 10:41:37.652381 16536 main.go:110] libmachine: STDOUT:
{
name="minikube"
groups="/"
ostype="Linux 2.6 / 3.x / 4.x (64-bit)"
UUID="fff83964-7488-4395-8382-7129224aa82b"
CfgFile="/Users/dll/.minikube/machines/minikube/minikube/minikube.vbox"
SnapFldr="/Users/dll/.minikube/machines/minikube/minikube/Snapshots"
LogFldr="/Users/dll/.minikube/machines/minikube/minikube/Logs"
hardwareuuid="fff83964-7488-4395-8382-7129224aa82b"
memory=2000
pagefusion="off"
vram=8
cpuexecutioncap=100
hpet="on"
chipset="piix3"
firmware="BIOS"
cpus=2
pae="on"
longmode="on"
triplefaultreset="off"
apic="on"
x2apic="off"
cpuid-portability-level=0
bootmenu="disabled"
boot1="dvd"
boot2="dvd"
boot3="disk"
boot4="none"
acpi="on"
ioapic="on"
biosapic="apic"
biossystemtimeoffset=0
rtcuseutc="on"
hwvirtex="on"
nestedpaging="on"
largepages="on"
vtxvpid="on"
vtxux="on"
paravirtprovider="default"
effparavirtprovider="kvm"
VMState="running"
VMStateChangeTime="2020-01-17T02:38:59.507000000"
monitorcount=1
accelerate3d="off"
accelerate2dvideo="off"
teleporterenabled="off"
teleporterport=0
teleporteraddress=""
teleporterpassword=""
tracing-enabled="off"
tracing-allow-vm-access="off"
tracing-config=""
autostart-enabled="off"
autostart-delay=0
defaultfrontend=""
storagecontrollername0="SATA"
storagecontrollertype0="IntelAhci"
storagecontrollerinstance0="0"
storagecontrollermaxportcount0="30"
storagecontrollerportcount0="30"
storagecontrollerbootable0="on"
"SATA-0-0"="/Users/dll/.minikube/machines/minikube/boot2docker.iso"
"SATA-ImageUUID-0-0"="8078a676-0d0e-45f8-9cb4-e8eda42dbb02"
"SATA-tempeject"="off"
"SATA-IsEjected"="off"
"SATA-1-0"="/Users/dll/.minikube/machines/minikube/disk.vmdk"
"SATA-ImageUUID-1-0"="2e0c8a0e-26ce-4365-92d6-ea775112e1d7"
"SATA-2-0"="none"
"SATA-3-0"="none"
"SATA-4-0"="none"
"SATA-5-0"="none"
"SATA-6-0"="none"
"SATA-7-0"="none"
"SATA-8-0"="none"
"SATA-9-0"="none"
"SATA-10-0"="none"
"SATA-11-0"="none"
"SATA-12-0"="none"
"SATA-13-0"="none"
"SATA-14-0"="none"
"SATA-15-0"="none"
"SATA-16-0"="none"
"SATA-17-0"="none"
"SATA-18-0"="none"
"SATA-19-0"="none"
"SATA-20-0"="none"
"SATA-21-0"="none"
"SATA-22-0"="none"
"SATA-23-0"="none"
"SATA-24-0"="none"
"SATA-25-0"="none"
"SATA-26-0"="none"
"SATA-27-0"="none"
"SATA-28-0"="none"
"SATA-29-0"="none"
natnet1="nat"
macaddress1="080027CC10D3"
cableconnected1="on"
nic1="nat"
nictype1="virtio"
nicspeed1="0"
mtu="0"
sockSnd="64"
sockRcv="64"
tcpWndSnd="64"
tcpWndRcv="64"
Forwarding(0)="ssh,tcp,127.0.0.1,50835,,22"
hostonlyadapter2="vboxnet2"
macaddress2="080027D63890"
cableconnected2="on"
nic2="hostonly"
nictype2="virtio"
nicspeed2="0"
nic3="none"
nic4="none"
nic5="none"
nic6="none"
nic7="none"
nic8="none"
hidpointing="ps2mouse"
hidkeyboard="ps2kbd"
uart1="off"
uart2="off"
uart3="off"
uart4="off"
lpt1="off"
lpt2="off"
audio="coreaudio"
audio_in="false"
audio_out="false"
clipboard="disabled"
draganddrop="disabled"
SessionName="headless"
VideoMode="720,400,0"@0,0 1
vrde="off"
usb="off"
ehci="off"
xhci="off"
SharedFolderNameMachineMapping1="Users"
SharedFolderPathMachineMapping1="/Users"
VRDEActiveConnection="off"
VRDEClients=0
videocap="off"
videocap_audio="off"
videocapscreens=0
videocapfile="/Users/dll/.minikube/machines/minikube/minikube/minikube.webm"
videocapres=1024x768
videocaprate=512
videocapfps=25
videocapopts=
GuestMemoryBalloon=0
GuestOSType="Linux26_64"
GuestAdditionsRunLevel=2
GuestAdditionsVersion="5.2.32 r132056"
GuestAdditionsFacility_VirtualBox Base Driver=50,1579228775899
GuestAdditionsFacility_VirtualBox System Service=50,1579228776161
GuestAdditionsFacility_Seamless Mode=0,1579228775898
GuestAdditionsFacility_Graphics Mode=0,1579228775897
}
I0117 10:41:37.652508 16536 main.go:110] libmachine: STDERR:
{
}
I0117 10:41:37.652582 16536 cluster.go:114] Machine state: Running
🏃 Using the running virtualbox "minikube" VM ...
I0117 10:41:37.652660 16536 cluster.go:132] engine options: &{ArbitraryFlags:[] DNS:[] GraphDir: Env:[] Ipv6:false InsecureRegistry:[10.96.0.0/12] Labels:[] LogLevel: StorageDriver: SelinuxEnabled:false TLSVerify:false RegistryMirror:[] InstallURL:https://get.docker.com}
⌛ Waiting for the host to be provisioned ...
I0117 10:41:37.652690 16536 cluster.go:145] configureHost: &{BaseDriver:0xc000358680 VBoxManager:0xc00000e6d0 HostInterfaces:0x30f6a58 b2dUpdater:0x30f6a58 sshKeyGenerator:0x30f6a58 diskCreator:0x30f6a58 logsReader:0x30f6a58 ipWaiter:0x30f6a58 randomInter:0xc00000e6d8 sleeper:0x30f6a58 CPU:2 Memory:2000 DiskSize:20000 NatNicType:virtio Boot2DockerURL:file:///Users/dll/.minikube/cache/iso/minikube-v1.6.0.iso Boot2DockerImportVM: HostDNSResolver:true HostOnlyCIDR:192.168.99.1/24 HostOnlyNicType:virtio HostOnlyPromiscMode:deny UIType:headless HostOnlyNoDHCP:false NoShare:false DNSProxy:false NoVTXCheck:false ShareFolder:}
I0117 10:41:37.652908 16536 cluster.go:167] Configuring auth for driver virtualbox ...
I0117 10:41:37.652922 16536 main.go:110] libmachine: Waiting for SSH to be available...
I0117 10:41:37.652931 16536 main.go:110] libmachine: Getting to WaitForSSH function...
I0117 10:41:37.653471 16536 main.go:110] libmachine: Using SSH client type: native
I0117 10:41:37.653808 16536 main.go:110] libmachine: &{{{ 0 [] [] []} docker [0x13b6220] 0x13b61f0 [] 0s} 127.0.0.1 22 }
I0117 10:41:37.653822 16536 main.go:110] libmachine: About to run SSH command:
exit 0
I0117 10:41:37.653979 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
W0117 10:41:38.631699 16536 cache_images.go:428] authn lookup for index.docker.io/kubernetesui/dashboard:v2.0.0-beta8 (trying anon): Get https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fdashboard%3Apull&service=registry.docker.io: exit status 1
W0117 10:41:38.641785 16536 cache_images.go:428] authn lookup for index.docker.io/kubernetesui/metrics-scraper:v1.0.2 (trying anon): Get https://auth.docker.io/token?scope=repository%3Akubernetesui%2Fmetrics-scraper%3Apull&service=registry.docker.io: exit status 1
I0117 10:41:40.655675 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:41.150232 16536 cache_images.go:376] OPENING: /Users/dll/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-beta8
I0117 10:41:41.150243 16536 cache_images.go:376] OPENING: /Users/dll/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2
I0117 10:41:43.657763 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:46.661252 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:49.665564 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:52.670567 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:55.675777 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:41:58.678789 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:01.679205 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:04.683159 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
W0117 10:42:07.552704 16536 cache_images.go:428] authn lookup for k8s.gcr.io/kube-apiserver:v1.17.0 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.555365 16536 cache_images.go:428] authn lookup for k8s.gcr.io/pause:3.1 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.556080 16536 cache_images.go:428] authn lookup for k8s.gcr.io/kube-proxy:v1.17.0 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.556115 16536 cache_images.go:428] authn lookup for k8s.gcr.io/kube-addon-manager:v9.0.2 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.559742 16536 cache_images.go:428] authn lookup for k8s.gcr.io/coredns:1.6.5 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.559808 16536 cache_images.go:428] authn lookup for k8s.gcr.io/kube-controller-manager:v1.17.0 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.559844 16536 cache_images.go:428] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v1.8.1 (trying anon): Get https://gcr.io/v2/: dial tcp 108.177.125.82:443: i/o timeout
W0117 10:42:07.559743 16536 cache_images.go:428] authn lookup for k8s.gcr.io/kube-scheduler:v1.17.0 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
W0117 10:42:07.559753 16536 cache_images.go:428] authn lookup for k8s.gcr.io/etcd:3.4.3-0 (trying anon): Get https://k8s.gcr.io/v2/: dial tcp 64.233.189.82:443: i/o timeout
I0117 10:42:07.686983 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:10.691638 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:13.694625 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:16.696224 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:19.700664 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:22.701156 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:25.701555 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:28.704489 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:31.709852 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
I0117 10:42:34.710256 16536 main.go:110] libmachine: Error dialing TCP: dial tcp 127.0.0.1:22: connect: connection refused
W0117 10:42:37.555297 16536 cache_images.go:373] unable to retrieve image: Get https://k8s.gcr.io/v2/: dial tcp 108.177.97.82:443: i/o timeout
I0117 10:42:37.555326 16536 cache_images.go:376] OPENING: /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0
W0117 10:42:37.556153 16536 cache_images.go:373] unable to retrieve image: Get https://k8s.gcr.io/v2/: dial tcp 108.177.97.82:443: i/o timeout
I0117 10:42:37.556174 16536 cache_images.go:376] OPENING: /Users/dll/.minikube/cache/images/k8s.gcr.io/pause_3.1
I0117 10:42:37.556548 16536 cache_images.go:349] CacheImage: k8s.gcr.io/pause:3.1 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/pause_3.1 completed in 1m0.030416099s
I0117 10:42:37.556947 16536 cache_images.go:349] CacheImage: k8s.gcr.io/kube-apiserver:v1.17.0 -> /Users/dll/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.17.0 completed in 1m0.028452029s
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1ec130f]

goroutine 97 [running]:
github.com/google/go-containerregistry/pkg/v1/tarball.Write(0x0, 0xc0005bc090, 0xa, 0xc0005bc09b, 0xe, 0xc0005bc0aa, 0x7, 0x0, 0x0, 0xc000789c68, ...)
/go/pkg/mod/github.com/google/go-containerregistry@v0.0.0-20180731221751-697ee0b3d46e/pkg/v1/tarball/write.go:57 +0x12f
k8s.io/minikube/pkg/minikube/machine.CacheImage(0xc0005bc090, 0x21, 0xc000588320, 0x43, 0x0, 0x0)
/app/pkg/minikube/machine/cache_images.go:395 +0x5df
k8s.io/minikube/pkg/minikube/machine.CacheImages.func1(0xc0006cdf68, 0x0)
/app/pkg/minikube/machine/cache_images.go:85 +0x124
golang.org/x/sync/errgroup.(*Group).Go.func1(0xc00076c210, 0xc00076c2d0)
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:57 +0x64
created by golang.org/x/sync/errgroup.(*Group).Go
/go/pkg/mod/golang.org/x/sync@v0.0.0-20190423024810-112230192c58/errgroup/errgroup.go:54 +0x66


@Veitor

Veitor commented Jan 20, 2020

OMG.. Are all the users above from China?
I found that using --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" works well. Thanks to Alibaba Cloud. hhhhhhh..

However, the img variable in the Go source code should be checked for nil.

(This comment is mainly useful for users in China.)

@rajask132

Did anyone solve this issue?

@zhumeme

zhumeme commented Jan 21, 2020

> I found that using --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" works well.

Thanks, this works for my Ubuntu 18.04. :)
--image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers"

@migto

migto commented Jan 22, 2020

> I found that using --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" works well.

Works for my OSX 10.15.2. If you are running into this error, do the following:

minikube delete
minikube start --vm-driver=vmwarefusion --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers"

You can replace vmwarefusion with your own vm-driver.

@tstromberg
Contributor

Can anyone confirm whether minikube v1.7.0-beta.0 fixes the issue for them? I believe it should.

Thank you.

@jewes

jewes commented Jan 28, 2020

> I found that using --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" works well.

Works like a charm.

@baurine

baurine commented Feb 2, 2020

> I found that using --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers" works well.

Thank you @Veitor , this works for me as well on macOS 10.15.1 with minikube 1.6.2

> minikube delete
🔥  Deleting "minikube" in virtualbox ...
💔  The "minikube" cluster has been deleted.
🔥  Successfully deleted profile "minikube"
> minikube start --vm-driver=virtualbox --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers"
😄  minikube v1.6.2 on Darwin 10.15.1
✨  Selecting 'virtualbox' driver from user configuration (alternates: [hyperkit])
✅  Using image repository registry.cn-hangzhou.aliyuncs.com/google_containers
🔥  Creating virtualbox VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
💾  Downloading kubeadm v1.17.0
💾  Downloading kubelet v1.17.0
🚜  Pulling images ...
🚀  Launching Kubernetes ...
⌛  Waiting for cluster to come online ...
🏄  Done! kubectl is now configured to use "minikube"

@tstromberg
Contributor

Note: this is fixed in minikube 1.7.0!

@PuzoLiang

> minikube delete
> minikube start --vm-driver=virtualbox --image-repository="registry.cn-hangzhou.aliyuncs.com/google_containers"

This works for me.

@loszer

loszer commented Apr 23, 2020

I used a shell command to install kubectl and minikube, and it works.
