'Could not find finalized endpoint', and 'connection was refused' after running 'minikube dashboard' #1634

Closed
Anton-Latukha opened this Issue Jun 22, 2017 · 11 comments


Anton-Latukha commented Jun 22, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

BUG REPORT

Minikube version (use minikube version):

minikube version: v0.19.1

Also tried updating to:

minikube version: v0.20.0

Then tried downgrading to:

minikube version: v0.18.0

Environment:

  • OS (e.g. from /etc/os-release):
Arch Linux
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName):
"DriverName": "virtualbox",
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION):
"Boot2DockerURL": "file:///home/pyro/.minikube/cache/iso/minikube-v0.18.0.iso",
  • Install tools:

Packages:
AUR: minikube-bin

and downgraded to 0.18 with:
AUR: minikube

  • Others:

What happened:

I was learning Kubernetes when, starting yesterday and continuing today, kubectl could no longer communicate with the server.

At first:

$ minikube dashboard
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: dial tcp 192.168.99.100:8443: getsockopt: connection refused

Then, after some time, only this:

kubectl get services
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
kubectl get pods -o wide
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

What you expected to happen:

Everything should work normally, as before.

How to reproduce it (as minimally and precisely as possible):

Linux.
POSIX-compliant shell as echo $SHELL.
Fish as the interactive shell (with minimally proper scripting, the interactive shell shouldn't matter).

QEMU/KVM set up on the host.

The user is in the proper kvm group for your distribution (the user is fully able to operate KVM).

As that user, run minikube start --vm-driver kvm.

But ending up with: "DriverName": "virtualbox",

There are several pods in Kubernetes, and they work normally at first.

Pods and scaling work,
services are available.

minikube dashboard was run and left open for a long time in one of the browser tabs.

Then, after a while, coming back to run commands, I get:

$ minikube dashboard
Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get https://192.168.99.100:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: dial tcp 192.168.99.100:8443: getsockopt: connection refused

And today I only get:

$ kubectl get services
The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?
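(Condensed, the reproduction above is roughly the following sequence; the kvm/libvirt group check is an illustration only, since the exact group name differs between distributions:)

$ id -nG | grep -E 'kvm|libvirt'    # confirm the user can actually use KVM
$ minikube start --vm-driver kvm    # note: the machine still ends up with "DriverName": "virtualbox"
$ minikube dashboard                # works at first, left open in a browser tab
$ kubectl get services              # some time later: connection to 192.168.99.100:8443 refused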

Anything else do we need to know:

Member

r2d4 commented Jun 22, 2017

Have you run minikube delete after changing versions? Unfortunately we don't support in-place upgrades yet; you'll have to recreate your cluster. If you've tried that, supplying the output of minikube logs would be helpful.
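(For reference, a minimal recreate-and-capture sequence for that, assuming the kvm driver is the one intended, would be roughly:)

$ minikube delete                   # drop the old cluster; in-place upgrades are not supported
$ minikube start --vm-driver kvm    # recreate it with the intended driver
$ minikube logs                     # capture this output and attach it to the issue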

Anton-Latukha commented Jun 22, 2017

I did what you said; the issue remains.

I am on 0.20 now:

~> minikube start --vm-driver kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

~> minikube stop
Stopping local Kubernetes cluster...
Machine stopped.

~> minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

~> minikube start --vm-driver kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Downloading Minikube ISO
 90.95 MB / 90.95 MB [==============================================] 100.00% 0s
E0622 18:55:23.176602   27962 start.go:127] Error starting host: Error creating host: Error creating machine: Error in driver during machine creation: [Code-9] [Domain-20] operation failed: domain 'minikube' already exists with uuid 839ac557-6f87-49e5-9c82-13391c529867.

 Retrying.
E0622 18:55:23.176956   27962 start.go:133] Error starting host:  Error creating host: Error creating machine: Error in driver during machine creation: [Code-9] [Domain-20] operation failed: domain 'minikube' already exists with uuid 839ac557-6f87-49e5-9c82-13391c529867
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
	minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]: 

pyro@Archer ~> 

Anton-Latukha commented Jun 22, 2017

OK. It came online.

~> minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

~> minikube start
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
Moving files into cluster...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.

~> kubectl get services
NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   10.0.0.1     <none>        443/TCP   22s

It seems that at least this problem is with --vm-driver kvm.

Maybe it is the main problem too.

I stubbornly keep using minikube start --vm-driver kvm,
and minikube keeps using VBoxHeadless.

Member

r2d4 commented Jun 22, 2017

minikube start without any flags defaults to virtualbox. You'll need to run minikube start for the first time with --vm-driver=kvm for it to take effect. You can set kvm as the default with minikube config set vm-driver kvm.

Anton-Latukha commented Jun 22, 2017

~> minikube delete
Deleting local Kubernetes cluster...
Machine deleted.

~> minikube config set vm-driver kvm
These changes will take effect upon a minikube delete and then a minikube start

~> minikube delete
Deleting local Kubernetes cluster...
Errors occurred deleting machine:  Error deleting host: minikube: Error loading host from store: Host does not exist: "minikube"

~> grep DriverName ~/.minikube/machines/minikube/config.json
    "DriverName": "kvm",

My bad. My KVM 'default' network was not online.

I use a manually created br0 for bridging all KVM machines. Being able to use an already-created bridge interface could be a feature, but that is a separate matter.

'docker-machines' must be an isolated virtual network. Minikube does not respond if DHCP changes the virtual machine's IP on eth1 at boot.

On the virtual machine, the login is root 8).

It works after I sorted those out.

I enabled 'virtio' for the HDD and for both network interfaces. Virtio support could be a feature.
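(A side note for anyone landing here with the same symptom: the libvirt networks can be inspected and brought back up with virsh, roughly as below. The network names 'default' and 'docker-machines' are the usual libvirt/docker-machine ones; adjust if your setup differs.)

$ virsh net-list --all              # shows which libvirt networks exist and whether they are active
$ virsh net-start default           # bring the offline network up
$ virsh net-autostart default       # start it automatically with libvirtd from now on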

dysinger commented Jul 20, 2017

I had the same experience with upgrade, downgrade, downgrade, downgrade... I couldn't get any version of minikube with KVM to work.

Symptoms:

user@oryx ~/s/s/devops> minikube start --vm-driver=kvm
Starting local Kubernetes v1.6.4 cluster...
Starting VM...
E0719 21:51:48.201927   17536 start.go:127] Error starting host: Error creating new host: Error attempting to get plugin server address for RPC: Failed to dial the plugin server in 10s.

 Retrying.
E0719 21:51:48.202346   17536 start.go:133] Error starting host:  Error creating new host: Error attempting to get plugin server address for RPC: Failed to dial the plugin server in 10s
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:
Y
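(A hedged guess, not something confirmed in this thread: minikube's kvm driver runs through the external docker-machine-driver-kvm plugin binary, and "Failed to dial the plugin server" generally means that plugin process could not be started or reached, so checking that the binary is present and executable on PATH is a reasonable first step:)

$ which docker-machine-driver-kvm   # the kvm driver needs this plugin binary on PATH
$ minikube version                  # and confirm which minikube build is actually being run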

dysinger commented Jul 20, 2017

This has worked for the last 9 months without hassle, until today.

fejta-bot commented Jan 1, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

fejta-bot commented Jan 31, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

shermaneric commented Feb 22, 2018

I didn't open this issue, but I ran into this problem earlier.
minikube version: v0.25.0

I was using --vm-driver=xhyve.
After switching over to the recommended --vm-driver=hyperkit, this problem went away.
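(As noted earlier in the thread, a driver change only takes effect on a freshly created cluster, so the switch amounts to roughly:)

$ minikube delete
$ minikube start --vm-driver=hyperkit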

fejta-bot commented Mar 25, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
