
Minikube ipv4/ipv6 problem #2150

Closed
rdcm opened this Issue Nov 3, 2017 · 8 comments

rdcm commented Nov 3, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
Bug report

Please provide the following details:

Case 1: With IPv6 disabled in the connection settings, I try this:

minikube start --hyperv-virtual-switch "Primary Virtual Switch" --vm-driver=hyperv
This command never finishes, but the minikube VM is created and running.

minikube status:

E1103 19:41:56.626171    4200 status.go:69] Error getting cluster bootstrapper: getting localkube bootstrapper: getting ssh client: Error dialing tcp via ssh client: dial tcp [fe80::215:5dff:fe5a:6636]:22: connectex: A socket operation was attempted to an unreachable network.
================================================================================
An error has occurred. Would you like to opt in to sending anonymized crash
information to minikube to help prevent future errors?
To opt out of these messages, run the command:
        minikube config set WantReportErrorPrompt false
================================================================================
Please enter your response [Y/n]:

Case 2: With IPv6 enabled in the connection settings, I try this:

minikube start --hyperv-virtual-switch "Primary Virtual Switch" --vm-driver=hyperv

Command output:

To disable this notification, run the following:
minikube config set WantUpdateNotification false
Starting local Kubernetes v1.7.5 cluster...
Starting VM...
Downloading Minikube ISO
 139.09 MB / 139.09 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.

Then:

minikube dashboard

Command output:

Could not find finalized endpoint being pointed to by kubernetes-dashboard: Error validating service: Error getting service kubernetes-dashboard: Get https://[fe80::215:5dff:fe5a:6638]:8443/api/v1/namespaces/kube-system/services/kubernetes-dashboard: dial tcp [fe80::215:5dff:fe5a:6638]:8443: connectex: No connection could be made because the target machine actively refused it.

minikube logs:

Nov 03 16:55:31 minikube systemd[1]: Starting Localkube...
Nov 03 16:55:31 minikube localkube[4132]: proto: duplicate proto type registered: google.protobuf.Any
Nov 03 16:55:31 minikube localkube[4132]: proto: duplicate proto type registered: google.protobuf.Duration
Nov 03 16:55:31 minikube localkube[4132]: proto: duplicate proto type registered: google.protobuf.Timestamp
Nov 03 16:55:31 minikube localkube[4132]: listening for peers on http://localhost:2380
Nov 03 16:55:31 minikube localkube[4132]: listening for client requests on localhost:2379
Nov 03 16:55:31 minikube localkube[4132]: name = default
Nov 03 16:55:31 minikube localkube[4132]: data dir = /var/lib/localkube/etcd
Nov 03 16:55:31 minikube localkube[4132]: member dir = /var/lib/localkube/etcd/member
Nov 03 16:55:31 minikube localkube[4132]: heartbeat = 100ms
Nov 03 16:55:31 minikube localkube[4132]: election = 1000ms
Nov 03 16:55:31 minikube localkube[4132]: snapshot count = 10000
Nov 03 16:55:31 minikube localkube[4132]: advertise client URLs = http://localhost:2379
Nov 03 16:55:31 minikube localkube[4132]: restarting member 8e9e05c52164694d in cluster cdf818194e3a8c32 at commit index 64
Nov 03 16:55:31 minikube localkube[4132]: 8e9e05c52164694d became follower at term 32
Nov 03 16:55:31 minikube localkube[4132]: newRaft 8e9e05c52164694d [peers: [], term: 32, commit: 64, applied: 0, lastindex: 64, lastterm: 32]
Nov 03 16:55:31 minikube localkube[4132]: starting server... [version: 3.1.5, cluster version: to_be_decided]
Nov 03 16:55:31 minikube localkube[4132]: added member 8e9e05c52164694d [http://localhost:2380] to cluster cdf818194e3a8c32
Nov 03 16:55:31 minikube localkube[4132]: set the initial cluster version to 3.1
Nov 03 16:55:31 minikube localkube[4132]: enabled capabilities for version 3.1
Nov 03 16:55:32 minikube localkube[4132]: 8e9e05c52164694d is starting a new election at term 32
Nov 03 16:55:32 minikube localkube[4132]: 8e9e05c52164694d became candidate at term 33
Nov 03 16:55:32 minikube localkube[4132]: 8e9e05c52164694d received MsgVoteResp from 8e9e05c52164694d at term 33
Nov 03 16:55:32 minikube localkube[4132]: 8e9e05c52164694d became leader at term 33
Nov 03 16:55:32 minikube localkube[4132]: I1103 16:55:32.231685    4132 etcd.go:58] Etcd server is ready
Nov 03 16:55:32 minikube localkube[4132]: localkube host ip address: <nil>
Nov 03 16:55:32 minikube localkube[4132]: Starting apiserver...
Nov 03 16:55:32 minikube localkube[4132]: Waiting for apiserver to be healthy...
Nov 03 16:55:32 minikube localkube[4132]: raft.node: 8e9e05c52164694d elected leader 8e9e05c52164694d at term 33
Nov 03 16:55:32 minikube localkube[4132]: I1103 16:55:32.233156    4132 server.go:112] Version: v1.7.5
Nov 03 16:55:32 minikube localkube[4132]: apiserver: Exit with error: Unable to find suitable network address.error='Unable to select an IP.'. Try to set the AdvertiseAddress directly or provide a valid BindAddress to fix this.
Nov 03 16:55:32 minikube localkube[4132]: published {Name:default ClientURLs:[http://localhost:2379]} to cluster cdf818194e3a8c32
Nov 03 16:55:32 minikube localkube[4132]: ready to serve client requests
Nov 03 16:55:32 minikube localkube[4132]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
Nov 03 16:55:32 minikube localkube[4132]: I1103 16:55:32.433721    4132 server.go:112] Version: v1.7.5
Nov 03 16:55:32 minikube localkube[4132]: F1103 16:55:32.434434    4132 plugins.go:72] Admission plugin "AlwaysAdmit" was registered twice
Nov 03 16:55:32 minikube systemd[1]: localkube.service: Main process exited, code=exited, status=255/n/a
Nov 03 16:55:32 minikube systemd[1]: Failed to start Localkube.
Nov 03 16:55:32 minikube systemd[1]: localkube.service: Unit entered failed state.
Nov 03 16:55:32 minikube systemd[1]: localkube.service: Failed with result 'exit-code'.
Nov 03 16:55:35 minikube systemd[1]: localkube.service: Service hold-off time over, scheduling restart.
Nov 03 16:55:35 minikube systemd[1]: Stopped Localkube.

.kube/config

apiVersion: v1
clusters:
- cluster:
    certificate-authority: C:\Users\user_name\.minikube\ca.crt
    server: https://[fe80::215:5dff:fe5a:6638]:8443
  name: minikube
contexts:
- context:
    cluster: minikube
    user: minikube
  name: minikube
current-context: minikube
kind: Config
preferences: {}
users:
- name: minikube
  user:
    client-certificate: C:\Users\user_name\.minikube\client.crt
    client-key: C:\Users\user_name\.minikube\client.key

Environment:
win10 1709,
hyper-v
minikube version: v0.22.3

I think the problem is the IPv6 address in .kube/config.
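A possible workaround sketch (unverified for this exact setup): rewrite the `server:` line in `.kube/config` so it points at the VM's IPv4 address instead of the link-local IPv6 one. The address `192.168.99.100` below is a placeholder, not the real VM address; substitute the output of `minikube ip`.

```shell
# Sketch: replace the bracketed link-local IPv6 API server address with the
# VM's IPv4 address. 192.168.99.100 is a placeholder; use `minikube ip`.
# Demonstrated on a sample line; to apply it for real, run the same sed
# expression over ~/.kube/config.
vm_ip='192.168.99.100'
echo 'server: https://[fe80::215:5dff:fe5a:6638]:8443' \
  | sed "s|https://\[[^]]*\]|https://${vm_ip}|"
# → server: https://192.168.99.100:8443
```

Alternatively, `kubectl config set-cluster minikube --server=https://<vm-ip>:8443` should update the same field without hand-editing the file.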

msnelling commented Nov 8, 2017

I am also getting something similar. I have tried disabling IPv6 on the Hyper-V switch's virtual adapter, but it doesn't help.

ponkumarpandian commented Jan 4, 2018

I am also facing the same problem, but with VirtualBox instead of Hyper-V. Is there any solution?

AWoelfel commented Jan 10, 2018

Same here, on Win10 1709 (with Hyper-V):
minikube v0.22.2
ISO v0.23.4

fejta-bot commented Apr 10, 2018

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

fejta-bot commented May 10, 2018

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

fejta-bot commented Jun 9, 2018

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

siddiq-rehman commented Nov 9, 2018

Any workaround?

rdcm commented Nov 9, 2018

@siddiq-rehman

Don't use minikube; Docker for Windows (version 18.06.1-ce-win73) can deploy a k8s cluster with a single click.
