
Dashboard does not work #1151

Closed
thedrow opened this issue Feb 19, 2017 · 21 comments
Labels
kind/support Categorizes issue or PR as a support question.

Comments

@thedrow

thedrow commented Feb 19, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Minikube version (use minikube version): 0.16.0

Environment:

  • OS (e.g. from /etc/os-release): Ubuntu 16.10
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): VirtualBox
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep ISO): 1.0.6
  • Install tools: Debian package
  • Others:

What happened:
I typed minikube dashboard.

What you expected to happen:
I expected the dashboard to open in my browser.

How to reproduce it (as minimally and precisely as possible):
Install minikube from the Debian package and run minikube dashboard.

Anything else we need to know:
These are the logs I got:

minikube dashboard --logtostderr --v=2
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: endpoints "kubernetes-dashboard" not found
Temporary Error: endpoints "kubernetes-dashboard" not found
(the same message repeats many more times)
@LordRoad

same error

@aaron-prindle
Contributor

aaron-prindle commented Feb 20, 2017

Thanks for your detailed report. It seems the kubernetes-dashboard pod is not initializing properly.
Can you post the output of:
kubectl get po --all-namespaces

@samuelchen

samuelchen commented Feb 20, 2017

Same error here. (win7 64bit enterprise, minikube 0.16)

>kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          1h

@LordRoad

return@return:~$ kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
default       test-node-3446652760-68z5f    0/1       ContainerCreating   0          17h
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          20h

env: ubuntu 14.04

return@return:~$ minikube dashboard
E0220 14:07:09.249443 98414 notify.go:54] Error getting json from minikube version url: Error getting minikube version url via http: Get https://storage.googleapis.com/minikube/releases.json: dial tcp 74.125.204.128:443: i/o timeout
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Temporary Error: endpoints "kubernetes-dashboard" not found
Temporary Error: endpoints "kubernetes-dashboard" not found
(the same message repeats many more times)

@thedrow
Author

thedrow commented Feb 21, 2017

The dashboard is just not installed when you initialize minikube for the first time.
After you follow the instructions in the dashboard repo it works just fine.

@r2d4
Contributor

r2d4 commented Feb 21, 2017

The dashboard is just not installed when you initialize minikube for the first time.
After you follow the instructions in the dashboard repo it works just fine.

This isn't true. The dashboard can sometimes take a while to start up, which is what I think we're seeing here.

@samuelchen That status indicates that the addon-manager is starting up. The addon-manager is responsible for deploying the dashboard, and sometimes this can take a little time to pull and start in addition to the dashboard starting up.
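
If you want to confirm that the addon-manager has actually deployed the dashboard, a minimal check (assuming kubectl is already pointed at the minikube context) looks like this:

$ kubectl get pods --namespace kube-system --watch
$ kubectl get endpoints kubernetes-dashboard --namespace kube-system

minikube dashboard keeps retrying until that endpoints object exists, which appears to be why the "Temporary Error" message repeats while the pod is still being created.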

r2d4 added the kind/support label Feb 26, 2017
@samuelchen

samuelchen commented Mar 1, 2017

@r2d4 thanks for the message.

It looks like none of the pods in my minikube are ready. As you mentioned, it was still pulling images; I suspect the image registry is blocked by a firewall.

>kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS              RESTARTS   AGE
default       hello-minikube-3015430129-76ztl   0/1       ContainerCreating   0         17h
kube-system   kube-addon-manager-minikube       0/1       ContainerCreating   0         9d

Now the problem becomes how to set a proxy in the cluster VM:

minikube start --docker-env HTTP_PROXY=http://your-http-proxy-host:your-http-proxy-port --docker-env HTTPS_PROXY=http(s)://your-https-proxy-host:your-https-proxy-port

@r2d4
Contributor

r2d4 commented Mar 2, 2017

@samuelchen You need to make sure you run minikube delete and then minikube start for your docker-env settings to take effect

ref #1147
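
For reference, a minimal sequence that recreates the cluster with the proxy flags from the earlier comment might look like this (the proxy host and port are placeholders, not real values):

$ minikube delete
$ minikube start \
    --docker-env HTTP_PROXY=http://your-proxy-host:3128 \
    --docker-env HTTPS_PROXY=http://your-proxy-host:3128

Deleting first matters because, as noted above, the docker-env settings only take effect when the VM is recreated.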

@samuelchen

@r2d4 Sorry, I did not explain clearly. I had already solved it with the --docker-env argument, the same as you said.

@r2d4 r2d4 closed this as completed Mar 6, 2017
@hayesgm

hayesgm commented Apr 21, 2017

I had a DNS issue in xhyve. To fix the issue, I ran in minikube ssh:

$ sudo su
# echo nameserver 8.8.8.8 > /etc/resolv.conf
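
If you want to verify the fix from inside the VM, something like this should do it (nslookup is just one example of a resolver check; any lookup tool on the ISO works):

$ minikube ssh
$ cat /etc/resolv.conf
$ nslookup gcr.io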

@zedalaye

@hayesgm same problem here. I had to change /etc/systemd/resolved.conf (change the #DNS line to DNS=8.8.8.8) and restart the systemd-resolved service (systemctl restart systemd-resolved)
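
For anyone scripting that change, a rough sketch along the same lines (verify the paths against your ISO before running; the sed expression just rewrites the commented-out DNS line):

$ minikube ssh
$ sudo sed -i 's/^#\{0,1\}DNS=.*/DNS=8.8.8.8/' /etc/systemd/resolved.conf
$ sudo systemctl restart systemd-resolved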

@dkoloditch

@zedalaye's solution worked for me coupled with minikube stop and minikube start.

@roachmd

roachmd commented Aug 13, 2017

Stop and Start also worked for me with
minikube start --kubernetes-version v1.7.3

@cre8

cre8 commented Aug 14, 2017

In my case, grabbing a coffee worked. Kubernetes needs some time to get the pods ready (you can check with kubectl get pods --all-namespaces). Depending on your internet connection it can take a while (I needed about 7 minutes).

at beginning:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          3m

after 3 min:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running             0          4m
kube-system   kube-dns-910330662-xbt1b      0/3       ContainerCreating   0          17s
kube-system   kubernetes-dashboard-c5x10    0/1       ContainerCreating   0          17s

after 7 min:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running             0          6m
kube-system   kube-dns-910330662-xbt1b      0/3       ContainerCreating   0          2m
kube-system   kubernetes-dashboard-c5x10    1/1       Running             0          2m
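
If you would rather not watch it by hand, a small polling loop waits for the dashboard pod before opening it; a minimal sketch, assuming kubectl is pointed at the minikube context:

until kubectl get pods --namespace kube-system | grep kubernetes-dashboard | grep -q Running; do
  echo "waiting for kubernetes-dashboard..."
  sleep 10
done
minikube dashboard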

@beast

beast commented Sep 7, 2017

When I start with the 1.7.4 version flag, all pods are empty:
yangyaos-iMac:deployment yangyao$ k8allpods
No resources found.

When I start with the default 1.7.0, everything works. Is anyone else encountering this behaviour?

Starting local Kubernetes v1.7.0 cluster...
Starting VM...
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
yangyaos-iMac:deployment yangyao$ k8allpods
NAMESPACE     NAME                           READY     STATUS              RESTARTS   AGE
default       greeter-srv-2395100891-pkch2   0/1       ErrImageNeverPull   0          48m
kube-system   kube-addon-manager-minikube    1/1       Running             3          23h
kube-system   kube-dns-910330662-zp5ks       3/3       Running             9          23h
kube-system   kubernetes-dashboard-sh55m     1/1       Running             4          23h

@galvesribeiro


Same problem here... 8 hours and nothing happens... Is there a way to see logs to try to understand what is happening?

Thanks! Appreciate any help.

@mingoal

mingoal commented Feb 13, 2018

@galvesribeiro you can use minikube logs to check the logs. I hit the same issue because Docker needs proxy configuration. See
https://stackoverflow.com/questions/23111631/cannot-download-docker-images-behind-a-proxy
After that, I could start the addon manager successfully.

@galvesribeiro

Thanks @mingoal

I'm dropping Minikube in favor of Docker for Windows, which now natively ships a local Kubernetes cluster.

Thanks!

@DennisMao

One of the causes is the GFW. Check with these steps:
1. Run minikube logs and check the output.
2. Look for: Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
3. Solution: download the image from a reachable mirror, then retag it:
docker pull k8s-docker.mydomain.com/google_containers/pause-amd64:3.0
docker tag k8s-docker.mydomain.com/google_containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
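
If more than one image is blocked, the same pull-and-retag trick can be looped. A rough sketch, run inside the VM via minikube ssh (the mirror hostname and image tags are examples only; the exact tags depend on your minikube/Kubernetes version):

# pull each image from a reachable mirror, then retag it under the gcr.io name Kubernetes expects
for img in pause-amd64:3.0 kubernetes-dashboard-amd64:v1.5.1; do
  docker pull "k8s-docker.mydomain.com/google_containers/$img"
  docker tag  "k8s-docker.mydomain.com/google_containers/$img" "gcr.io/google_containers/$img"
done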

@ningg

ningg commented Aug 15, 2018

I'm hitting the same problem.

$ kubectl get po --all-namespaces
NAMESPACE     NAME                                    READY     STATUS             RESTARTS   AGE
kube-system   etcd-minikube                           1/1       Running            0          2m
kube-system   kube-addon-manager-minikube             1/1       Running            4          2h
kube-system   kube-apiserver-minikube                 1/1       Running            0          2m
kube-system   kube-controller-manager-minikube        1/1       Running            0          2m
kube-system   kube-dns-86f4d74b45-x2gn8               3/3       Running            8          2h
kube-system   kube-proxy-62mwx                        1/1       Running            0          1m
kube-system   kube-scheduler-minikube                 1/1       Running            0          2m
kube-system   kubernetes-dashboard-5498ccf677-fh2rj   0/1       CrashLoopBackOff   11         2h
kube-system   storage-provisioner                     1/1       Running            4          2h

The solution is:

$ minikube stop
$ minikube delete
$ minikube start

@chrisxaustin

chrisxaustin commented Sep 12, 2018

Having a similar problem.
Minikube 0.28.2
Docker 18.06.1-ce, build e68fc7a
Windows 10 Pro (10.0.17134 Build 17134) w/ Hyper-V

Minikube didn't work initially, but this fixed it:

minikube delete
rm -r -fo C:\Users\chris\.minikube

Then in Hyper-V manager:

  • Virtual Switches
  • Create internal, named minikube

Open system network settings:

  • right-click my eth interface, sharing
  • Allow other network users to connect through this computer's internet connection
  • select vEthernet (minikube)

minikube start --vm-driver hyperv --hyperv-virtual-switch "minikube" -v10

minikube dashboard worked after this and I was able to build docker images (with minikube docker-env | Invoke-Expression in PowerShell), then apply my YAML files successfully.
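
The bash equivalent of that docker-env step, for anyone following along on macOS or Linux, would be something like (the image tag is just a placeholder):

$ eval $(minikube docker-env)
$ docker build -t my-image:dev .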

I was away for a few hours and came back to find that minikube dashboard stopped working, and minikube stop seems to hang at 0%. I can still run minikube ip and minikube ssh. I suspect that it failed because the system's sleep mode was triggered by inactivity.

Running docker ps within the minikube vm:

CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e388aa26c69c 6f7f2dc7fab5 "/sidecar --v=2 --lo…" About a minute ago Up About a minute k8s_sidecar_kube-dns-86f4d74b45-4vkxk_kube-system_59c486fa-b530-11e8-9f22-00155d013821_71
55dfef604345 c2ce1ffb51ed "/dnsmasq-nanny -v=2…" 2 minutes ago Up 2 minutes k8s_dnsmasq_kube-dns-86f4d74b45-4vkxk_kube-system_59c486fa-b530-11e8-9f22-00155d013821_76
bbcea0441e79 8678c9e3bade "/bin/sh -c /entrypo…" 2 minutes ago Up 2 minutes k8s_portal_portal-0_default_0356ea5c-b532-11e8-9f22-00155d013821_43
fdf2f803162a 704ba848e69a "kube-scheduler --ad…" 4 minutes ago Up 4 minutes k8s_kube-scheduler_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_72
ec72de70489a ad86dbed1555 "kube-controller-man…" 4 minutes ago Up 4 minutes k8s_kube-controller-manager_kube-controller-manager-minikube_kube-system_e6e3dc8d66c53f1851ade03f937a1029_73
7c02f71da03f 8678c9e3bade "/bin/sh -c /entrypo…" 7 minutes ago Up 7 minutes k8s_portal_portal-1_default_0491a0d4-b532-11e8-9f22-00155d013821_18
7a001930d5ef e94d2f21bc0c "/dashboard --insecu…" 9 minutes ago Up 8 minutes k8s_kubernetes-dashboard_kubernetes-dashboard-5498ccf677-w6vn5_kube-system_5a819bae-b530-11e8-9f22-00155d013821_115
5aff3886e4d6 52920ad46f5b "etcd --listen-clien…" 5 hours ago Up 5 hours k8s_etcd_etcd-minikube_kube-system_c853174ae00ff4258aa2bbd139415e30_16
84472baca111 k8s.gcr.io/pause-amd64:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_pv-samples-58d957bc8-9b2lm_default_ba46583f-b536-11e8-9f22-00155d013821_0
52cc5901272b 89546c82c920 "docker-entrypoint.s…" 28 hours ago Up 28 hours k8s_postgres-pv_postgres-pv-0_default_eb30899b-b533-11e8-9f22-00155d013821_0
c3149929ceac k8s.gcr.io/pause-amd64:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_postgres-pv-0_default_eb30899b-b533-11e8-9f22-00155d013821_0
705c35dcd520 k8s.gcr.io/pause-amd64:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_portal-1_default_0491a0d4-b532-11e8-9f22-00155d013821_0
e04740f83377 k8s.gcr.io/pause-amd64:3.1 "/pause" 28 hours ago Up 28 hours k8s_POD_portal-0_default_0356ea5c-b532-11e8-9f22-00155d013821_0
e5374f05f61c gcr.io/k8s-minikube/storage-provisioner "/storage-provisioner" 29 hours ago Up 29 hours k8s_storage-provisioner_storage-provisioner_kube-system_5b22f855-b530-11e8-9f22-00155d013821_0
22394d83448e k8s.gcr.io/kube-proxy-amd64 "/usr/local/bin/kube…" 29 hours ago Up 29 hours k8s_kube-proxy_kube-proxy-lllgx_kube-system_59e0a791-b530-11e8-9f22-00155d013821_0
1d06739bf7c5 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_storage-provisioner_kube-system_5b22f855-b530-11e8-9f22-00155d013821_0
81b8d03dc57c k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kubernetes-dashboard-5498ccf677-w6vn5_kube-system_5a819bae-b530-11e8-9f22-00155d013821_0
478e10ec0667 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-proxy-lllgx_kube-system_59e0a791-b530-11e8-9f22-00155d013821_0
6852ad7755aa k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-dns-86f4d74b45-4vkxk_kube-system_59c486fa-b530-11e8-9f22-00155d013821_0
0bf2cb3fa24e k8s.gcr.io/kube-addon-manager "/opt/kube-addons.sh" 29 hours ago Up 29 hours k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0
07e25cea8db5 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-scheduler-minikube_kube-system_31cf0ccbee286239d451edb6fb511513_0
fa121fcc1c0b k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-apiserver-minikube_kube-system_48beea5682ed20a56002cabd8ad7c00c_0
b0e86f7454a4 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-controller-manager-minikube_kube-system_e6e3dc8d66c53f1851ade03f937a1029_0
a4dce4431339 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_etcd-minikube_kube-system_c853174ae00ff4258aa2bbd139415e30_0
271a064bbdc5 k8s.gcr.io/pause-amd64:3.1 "/pause" 29 hours ago Up 29 hours k8s_POD_kube-addon-manager-minikube_kube-system_3afaf06535cc3b85be93c31632b765da_0

So the dashboard is running.

minikube logs has a few hints:

Sep 12 00:08:21 minikube kubelet[3001]: E0912 00:08:21.583681 3001 event.go:209] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/default/events/pv-samples-58d957bc8-9b2lm.15532351be57352d: dial tcp 127.0.0.1:8443: getsockopt: connection refused' (may retry after sleeping)
Sep 12 00:08:22 minikube kubelet[3001]: E0912 00:08:22.184237 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:22 minikube kubelet[3001]: E0912 00:08:22.184247 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:22 minikube kubelet[3001]: E0912 00:08:22.184822 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:23 minikube kubelet[3001]: E0912 00:08:23.185438 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:23 minikube kubelet[3001]: E0912 00:08:23.189499 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:23 minikube kubelet[3001]: E0912 00:08:23.190137 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:24 minikube kubelet[3001]: E0912 00:08:24.186705 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:24 minikube kubelet[3001]: E0912 00:08:24.190541 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused
Sep 12 00:08:24 minikube kubelet[3001]: E0912 00:08:24.193420 3001 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: getsockopt: connection refused

Checked /var/log/pods/5a819bae-b530-11e8-9f22-00155d013821/kubernetes-dashboard/120.log

{"log":"2018/09/12 00:16:06 Error while initializing connection to Kubernetes apiserver. This most likely means that the cluster is misconfigured (e.g., it has invalid apiserver certificates or service accounts configuration) or the --apiserver-host param points to a server that does not exist. Reason: Get https://10.96.0.1:443/version: dial tcp 10.96.0.1:443: getsockopt: connection refused\n","stream":"stdout","time":"2018-09-12T00:16:06.190186566Z"}

Confirmed that the apiserver is running within the minikube vm using ps auxw | grep apiserver:

root 39732 27.3 10.0 500516 204488 ? Ssl 00:18 0:34 kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --requestheader-allowed-names=front-proxy-client --client-ca-file=/var/lib/localkube/certs/ca.crt --tls-private-key-file=/var/lib/localkube/certs/apiserver.key --requestheader-client-ca-file=/var/lib/localkube/certs/front-proxy-ca.crt --tls-cert-file=/var/lib/localkube/certs/apiserver.crt --proxy-client-key-file=/var/lib/localkube/certs/front-proxy-client.key --insecure-port=0 --enable-bootstrap-token-auth=true --requestheader-username-headers=X-Remote-User --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/var/lib/localkube/certs/sa.pub --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-group-headers=X-Remote-Group --advertise-address=192.168.137.34 --proxy-client-cert-file=/var/lib/localkube/certs/front-proxy-client.crt --allow-privileged=true --requestheader-extra-headers-prefix=X-Remote-Extra- --kubelet-client-certificate=/var/lib/localkube/certs/apiserver-kubelet-client.crt --kubelet-client-key=/var/lib/localkube/certs/apiserver-kubelet-client.key --secure-port=8443 --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/var/lib/localkube/certs/etcd/ca.crt --etcd-certfile=/var/lib/localkube/certs/apiserver-etcd-client.crt --etcd-keyfile=/var/lib/localkube/certs/apiserver-etcd-client.key

Checked if anything was listening on 8443 from within the minikube vm using netstat -an | grep 8443

tcp 8 0 :::8443 :::* LISTEN

But I can't telnet to 8443 from that vm:

$ telnet 127.0.0.1 8443
telnet: can't connect to remote host (127.0.0.1): Connection refused

I then went back to check the steps above, and now it's not listening on 8443; kube-apiserver isn't running.
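
To narrow down whether the apiserver container itself is crash-looping, the usual checks from inside the VM are roughly (the grep pattern and <container-id> are placeholders for whatever docker ps shows):

$ minikube ssh
$ docker ps -a | grep kube-apiserver
$ docker logs <container-id>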

Checked the apiserver logs.

{"log":"E0912 00:26:07.633952 1 controller_utils.go:1022] Unable to sync caches for crd-autoregister controller\n","stream":"stderr","time":"2018-09-12T00:26:07.634842899Z"}
{"log":"I0912 00:26:07.636141 1 crdregistration_controller.go:115] Shutting down crd-autoregister controller\n","stream":"stderr","time":"2018-09-12T00:26:07.636182092Z"}
{"log":"E0912 00:26:07.636307 1 customresource_discovery_controller.go:177] timed out waiting for caches to sync\n","stream":"stderr","time":"2018-09-12T00:26:07.636342391Z"}
{"log":"I0912 00:26:07.636370 1 customresource_discovery_controller.go:178] Shutting down DiscoveryController\n","stream":"stderr","time":"2018-09-12T00:26:07.636786889Z"}
{"log":"E0912 00:26:07.636872 1 cache.go:35] Unable to sync caches for AvailableConditionController controller\n","stream":"stderr","time":"2018-09-12T00:26:07.639446976Z"}
{"log":"I0912 00:26:07.636891 1 crd_finalizer.go:246] Shutting down CRDFinalizer\n","stream":"stderr","time":"2018-09-12T00:26:07.639457476Z"}
{"log":"E0912 00:26:07.636949 1 cache.go:35] Unable to sync caches for APIServiceRegistrationController controller\n","stream":"stderr","time":"2018-09-12T00:26:07.639461876Z"}
{"log":"I0912 00:26:07.639598 1 controller.go:90] Shutting down OpenAPI AggregationController\n","stream":"stderr","time":"2018-09-12T00:26:07.650152822Z"}
{"log":"I0912 00:26:07.643236 1 serve.go:136] Stopped listening on [::]:8443\n","stream":"stderr","time":"2018-09-12T00:26:07.650170322Z"}
{"log":"I0912 00:26:07.799359 1 available_controller.go:266] Shutting down AvailableConditionController\n","stream":"stderr","time":"2018-09-12T00:26:07.799502972Z"}
{"log":"E0912 00:26:07.799583 1 status.go:64] apiserver received an error that is not an metav1.Status: http: Handler timeout\n","stream":"stderr","time":"2018-09-12T00:26:07.799662471Z"}
{"log":"I0912 00:26:07.994116 1 naming_controller.go:280] Shutting down NamingConditionController\n","stream":"stderr","time":"2018-09-12T00:26:07.994243293Z"}
{"log":"I0912 00:26:08.216497 1 apiservice_controller.go:94] Shutting down APIServiceRegistrationController\n","stream":"stderr","time":"2018-09-12T00:26:08.216574878Z"}
...
{"log":"E0912 00:26:09.095040 1 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, Token has been invalidated]]\n","stream":"stderr","time":"2018-09-12T00:26:09.095142572Z"}
{"log":"E0912 00:26:09.095396 1 authentication.go:63] Unable to authenticate the request due to an error: [invalid bearer token, [invalid bearer token, Token has been invalidated]]\n","stream":"stderr","time":"2018-09-12T00:26:09.09545807Z"}
{"log":"E0912 00:26:09.095642 1 reflector.go:205] k8s.io/kubernetes/vendor/k8s.io/apiextensions-apiserver/pkg/client/informers/internalversion/factory.go:74: Failed to list *apiextensions.CustomResourceDefinition: an error on the server ("apiserver is shutting down.") has prevented the request from succeeding (get customresourcedefinitions.apiextensions.k8s.io)\n","stream":"stderr","time":"2018-09-12T00:26:09.095700069Z"}
...
{"log":"F0912 00:26:10.358672 1 controller.go:135] Unable to perform initial IP allocation check: unable to refresh the service IP block: Get https://127.0.0.1:8443/api/v1/services: http2: no cached connection was available\n","stream":"stderr","time":"2018-09-12T00:26:10.414240766Z"}

I'm at a bit of a loss. Minikube worked fine on my Mac, but has been nothing but pain on Windows with Hyper-V.
