
DNS failure trying to build docker image in hello-minikube tutorial #1442

Closed
jabley opened this issue May 3, 2017 · 24 comments
Labels
kind/support, lifecycle/rotten

Comments

@jabley commented May 3, 2017

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Moved from kubernetes/website#3596

Minikube version (use minikube version):

> minikube version
minikube version: v0.18.0

Environment:

  • OS (e.g. from /etc/os-release): MacOS 10.12.4
  • VM Driver (e.g. cat ~/.minikube/machines/minikube/config.json | grep DriverName): "DriverName": "xhyve",
  • ISO version (e.g. cat ~/.minikube/machines/minikube/config.json | grep -i ISO or minikube ssh cat /etc/VERSION): "Boot2DockerURL": "file:///Users/jabley/.minikube/cache/iso/minikube-v0.18.0.iso",
  • Install tools: Homebrew
  • Others:
> kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T22:51:55Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"dirty", BuildDate:"2017-04-07T20:46:46Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:

Trying to follow the hello-minikube tutorial:

> minikube start --vm-driver=xhyve
Starting local Kubernetes cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
> kubectl config use-context minikube
Switched to context "minikube".
> kubectl cluster-info
Kubernetes master is running at https://192.168.64.5:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
> eval $(minikube docker-env)
> docker build -t hello-node:v1 .
Sending build context to Docker daemon 3.072 kB
Step 1 : FROM node:6.9.2
Pulling repository docker.io/library/node
Error while pulling image: Get https://index.docker.io/v1/repositories/library/node/images: dial tcp: lookup index.docker.io on 192.168.64.1:53: read udp 192.168.64.5:48734->192.168.64.1:53: read: connection refused

What you expected to happen:

The docker build to succeed.

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

@aaron-prindle (Contributor)

In order to pull images, you have to make sure that the DNS pod is running in your cluster. To check, you can run kubectl get po --all-namespaces and verify that all of the pods are in the Running state.
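
For reference, a healthy minikube of this era looks something like the following (sample output adapted from later comments in this thread; pod name hashes will differ):

> kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          8m
kube-system   kube-dns-196007617-56lg0      3/3       Running   1          8m
kube-system   kubernetes-dashboard-jtw8t    1/1       Running   0          8m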

@jabley (Author) commented May 6, 2017

Thanks @aaron-prindle.

> kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          1m

> kubectl describe pod kube-addon-manager-minikube --namespace kube-system
Name:		kube-addon-manager-minikube
Namespace:	kube-system
Node:		minikube/192.168.64.7
Start Time:	Sat, 06 May 2017 17:46:07 +0100
Labels:		component=kube-addon-manager
		kubernetes.io/minikube-addons=addon-manager
		version=v6.4-alpha.1
Annotations:	kubernetes.io/config.hash=4fb35b6f38517771d5bfb1cffb784d97
		kubernetes.io/config.mirror=4fb35b6f38517771d5bfb1cffb784d97
		kubernetes.io/config.seen=2017-05-06T16:46:02.325082498Z
		kubernetes.io/config.source=file
Status:		Pending
IP:		192.168.64.7
Controllers:	<none>
Containers:
  kube-addon-manager:
    Container ID:	
    Image:		gcr.io/google-containers/kube-addon-manager:v6.4-alpha.1
    Image ID:		
    Port:		
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Requests:
      cpu:		5m
      memory:		50Mi
    Environment:	<none>
    Mounts:
      /etc/kubernetes/ from addons (ro)
Conditions:
  Type		Status
  Initialized 	True 
  Ready 	False 
  PodScheduled 	True 
Volumes:
  addons:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/kubernetes/
QoS Class:	Burstable
Node-Selectors:	<none>
Tolerations:	=:Exists:NoExecute
Events:		<none>

> kubectl logs kube-addon-manager-minikube --namespace kube-system
Error from server (BadRequest): container "kube-addon-manager" in pod "kube-addon-manager-minikube" is waiting to start: ContainerCreating

@jabley (Author) commented May 8, 2017

I've managed to complete the hello-minikube tutorial using VirtualBox, rather than xhyve. Not sure if that helps?

@antonyoneill commented May 19, 2017

I have the same issue as @jabley following the tutorial https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/

➜  uname -a
Darwin MacBook-Pro.local 15.5.0 Darwin Kernel Version 15.5.0: Tue Apr 19 18:36:36 PDT 2016; root:xnu-3248.50.21~8/RELEASE_X86_64 x86_64
➜  minikube version
minikube version: v0.19.0
➜  cat ~/.minikube/machines/minikube/config.json | grep DriverName
    "DriverName": "xhyve",
➜  cat ~/.minikube/machines/minikube/config.json | grep -i ISO
        "Boot2DockerURL": "file:///Users/antonyoneill/.minikube/cache/iso/minikube-v0.18.0.iso",
➜  kubectl version
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T23:28:44Z", GoVersion:"go1.8.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.0", GitCommit:"fff5156092b56e6bd60fff75aad4dc9de6b6ef37", GitTreeState:"clean", BuildDate:"2017-05-09T23:22:45Z", GoVersion:"go1.7.3", Compiler:"gc", Platform:"linux/amd64"}

Here's the output of the localkube status:

➜  minikube ssh systemctl status localkube
● localkube.service - Localkube
   Loaded: loaded (/lib/systemd/system/localkube.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-05-19 12:43:42 UTC; 8min ago
     Docs: https://github.com/kubernetes/minikube/tree/master/pkg/localkube
 Main PID: 5789 (localkube)
    Tasks: 15 (limit: 4915)
   Memory: 129.1M
      CPU: 29.700s
   CGroup: /system.slice/localkube.service
           ├─5789 /usr/local/bin/localkube --generate-certs=false --logtostderr=true --enable-dns=false --node-ip=192.168.64.2
           └─5869 journalctl -k -f

May 19 12:51:48 minikube localkube[5789]: I0519 12:51:48.082921    5789 kuberuntime_manager.go:458] Container {Name:kube-addon-manager Image:gcr.io/google-containers/kube-addon-manager:v6.4-beta.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[cpu:{i:{value:5 scale:-3} d:{Dec:<nil>} s:5m Format:DecimalSI} memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI}]} VolumeMounts:[{Name:addons ReadOnly:true MountPath:/etc/kubernetes/ SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 19 12:51:48 minikube localkube[5789]: E0519 12:51:48.091315    5789 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:52140->192.168.64.1:53: read: connection refused
May 19 12:51:48 minikube localkube[5789]: E0519 12:51:48.091370    5789 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:52140->192.168.64.1:53: read: connection refused
May 19 12:51:48 minikube localkube[5789]: E0519 12:51:48.091385    5789 kuberuntime_manager.go:619] createPodSandbox for pod "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:52140->192.168.64.1:53: read: connection refused
May 19 12:51:48 minikube localkube[5789]: E0519 12:51:48.091413    5789 pod_workers.go:182] Error syncing pod 8538d869917f857f9d157e66b059d05b ("kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)"), skipping: failed to "CreatePodSandbox" for "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:52140->192.168.64.1:53: read: connection refused"
May 19 12:52:03 minikube localkube[5789]: I0519 12:52:03.082706    5789 kuberuntime_manager.go:458] Container {Name:kube-addon-manager Image:gcr.io/google-containers/kube-addon-manager:v6.4-beta.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[memory:{i:{value:52428800 scale:0} d:{Dec:<nil>} s:50Mi Format:BinarySI} cpu:{i:{value:5 scale:-3} d:{Dec:<nil>} s:5m Format:DecimalSI}]} VolumeMounts:[{Name:addons ReadOnly:true MountPath:/etc/kubernetes/ SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
May 19 12:52:03 minikube localkube[5789]: E0519 12:52:03.092503    5789 remote_runtime.go:86] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:35306->192.168.64.1:53: read: connection refused
May 19 12:52:03 minikube localkube[5789]: E0519 12:52:03.092675    5789 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:35306->192.168.64.1:53: read: connection refused
May 19 12:52:03 minikube localkube[5789]: E0519 12:52:03.092801    5789 kuberuntime_manager.go:619] createPodSandbox for pod "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" failed: rpc error: code = 2 desc = unable to pull sandbox image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:35306->192.168.64.1:53: read: connection refused
May 19 12:52:03 minikube localkube[5789]: E0519 12:52:03.092928    5789 pod_workers.go:182] Error syncing pod 8538d869917f857f9d157e66b059d05b ("kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)"), skipping: failed to "CreatePodSandbox" for "kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-addon-manager-minikube_kube-system(8538d869917f857f9d157e66b059d05b)\" failed: rpc error: code = 2 desc = unable to pull sandbox image \"gcr.io/google_containers/pause-amd64:3.0\": Error response from daemon: Get https://gcr.io/v1/_ping: dial tcp: lookup gcr.io on 192.168.64.1:53: read udp 192.168.64.2:35306->192.168.64.1:53: read: connection refused"

Pod Status:

➜  kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          1h

Pod detail:

➜  kubectl describe pod kube-addon-manager-minikube --namespace kube-system
Name:		kube-addon-manager-minikube
Namespace:	kube-system
Node:		minikube/192.168.64.2
Start Time:	Fri, 19 May 2017 12:43:24 +0100
Labels:		component=kube-addon-manager
		kubernetes.io/minikube-addons=addon-manager
		version=v6.4
Annotations:	kubernetes.io/config.hash=8538d869917f857f9d157e66b059d05b
		kubernetes.io/config.mirror=8538d869917f857f9d157e66b059d05b
		kubernetes.io/config.seen=2017-05-19T11:43:19.173621978Z
		kubernetes.io/config.source=file
Status:		Pending
IP:		192.168.64.2
Controllers:	<none>
Containers:
  kube-addon-manager:
    Container ID:
    Image:		gcr.io/google-containers/kube-addon-manager:v6.4-beta.1
    Image ID:
    Port:
    State:		Waiting
      Reason:		ContainerCreating
    Ready:		False
    Restart Count:	0
    Requests:
      cpu:		5m
      memory:		50Mi
    Environment:	<none>
    Mounts:
      /etc/kubernetes/ from addons (ro)
Conditions:
  Type		Status
  Initialized 	True
  Ready 	False
  PodScheduled 	True
Volumes:
  addons:
    Type:	HostPath (bare host directory volume)
    Path:	/etc/kubernetes/
QoS Class:	Burstable
Node-Selectors:	<none>
Tolerations:	=:Exists:NoExecute
Events:		<none>

I don't have a firewall running on this machine, and I can access the service from my Mac:

➜  curl -v https://gcr.io/v1/_ping
*   Trying 64.233.167.82...
* Connected to gcr.io (64.233.167.82) port 443 (#0)
* TLS 1.2 connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate: *.googlecode.com
* Server certificate: Google Internet Authority G2
* Server certificate: GeoTrust Global CA
> GET /v1/_ping HTTP/1.1
> Host: gcr.io
> User-Agent: curl/7.43.0
> Accept: */*
>
< HTTP/1.1 200 OK
< X-Docker-Registry-Version: 0.0
< X-Docker-Registry-Standalone: false
< Content-Type: text/plain
< Date: Fri, 19 May 2017 12:56:12 GMT
< Server: Docker Registry
< Cache-Control: private
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: SAMEORIGIN
< Alt-Svc: quic=":443"; ma=2592000; v="37,36,35"
< Accept-Ranges: none
< Vary: Accept-Encoding
< Transfer-Encoding: chunked
<
* Connection #0 to host gcr.io left intact
true

@antonyoneill commented May 19, 2017

I'm not familiar with minikube (I'm just starting off with Kubernetes), so I'm not sure whether there's meant to be a DNS service at 192.168.64.1 or whether the configuration isn't automatically picking up my host DNS settings:

➜  minikube ssh
$ nslookup gcr.io
Server:    192.168.64.1
Address 1: 192.168.64.1

nslookup: can't resolve 'gcr.io'
$ nslookup gcr.io 8.8.8.8
Server:    8.8.8.8
Address 1: 8.8.8.8 google-public-dns-a.google.com

Name:      gcr.io
Address 1: 64.233.166.82 wm-in-f82.1e100.net
Address 2: 2a00:1450:400c:c09::52 wm-in-x52.1e100.net
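
A diagnostic sketch (not from the original thread): since 192.168.64.1 is the host side of the xhyve network, you can check on the Mac what, if anything, is listening for DNS there:

➜  sudo lsof -nP -iUDP:53
# a resolver bound only to 127.0.0.1 will refuse queries arriving from the VM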

I feel like editing the /etc/resolv.conf file is a bit of a cheat...

➜  minikube ssh
$ cat /etc/resolv.conf
# This file is managed by systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known DNS servers.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 8.8.8.8
nameserver 8.8.4.4
$ nslookup gcr.io
Server:    8.8.8.8
Address 1: 8.8.8.8 google-public-dns-a.google.com

Name:      gcr.io
Address 1: 64.233.166.82 wm-in-f82.1e100.net
Address 2: 2a00:1450:400c:c09::52 wm-in-x52.1e100.net

Edit: 😂 That change isn't persisted once I exit the ssh process.

@ceridwen commented Jun 1, 2017

I also encountered this DNS failure following the Hello Minikube tutorial with the xhyve driver. Like @jabley, once I switched to the virtualbox driver, I could build the Docker image properly. My Mac OS X version is 10.12.5, my minikube version is v0.19.1, and other versions are the same. The minikube DNS pod doesn't seem to be running with the xhyve driver.

kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS              RESTARTS   AGE
kube-system   kube-addon-manager-minikube   0/1       ContainerCreating   0          16h

With the VirtualBox driver:

kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          8m
kube-system   kube-dns-196007617-56lg0      3/3       Running   1          8m
kube-system   kubernetes-dashboard-jtw8t    1/1       Running   0          8m

@kamusis commented Aug 15, 2017

This appears to be an xhyve driver issue.

minikube delete, then minikube start (don't use --vm-driver=xhyve; by default minikube starts with the VirtualBox driver), and everything is fine.

$ cat ~/.minikube/machines/minikube/config.json | grep DriverName
    "DriverName": "virtualbox",

$ kubectl get po --all-namespaces
NAMESPACE     NAME                          READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube   1/1       Running   0          3m
kube-system   kube-dns-910330662-sp9k0      3/3       Running   0          3m
kube-system   kubernetes-dashboard-xhm5x    1/1       Running   0          3m

$ minikube ssh
$ nslookup gcr.io
Server:    10.0.2.3
Address 1: 10.0.2.3

Name:      gcr.io
Address 1: 74.125.23.82 tg-in-f82.1e100.net

@ttiurani commented Sep 1, 2017

I have kube-dns problems, but they are slightly different. I do get the pods to start with xhyve:

# kubectl -n kube-system get pods
NAME                          READY     STATUS    RESTARTS   AGE
kube-addon-manager-minikube   1/1       Running   0          9m
kube-dns-910330662-16bvw      3/3       Running   0          8m
kubernetes-dashboard-vbn9m    1/1       Running   0          8m

But cluster-info doesn't show kube-dns info:

# kubectl cluster-info
Kubernetes master is running at https://192.168.64.26:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

and I can't get either kubernetes or kubernetes.default.svc.cluster.local to resolve anywhere.

# kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.5", GitCommit:"17d7182a7ccbb167074be7a87f0a68bd00d58d97", GitTreeState:"clean", BuildDate:"2017-08-31T09:14:02Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.0", GitCommit:"d3ada0119e776222f11ec7945e6d860061339aad", GitTreeState:"clean", BuildDate:"2017-07-26T00:12:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
# minikube version
minikube version: v0.21.0

I also tried VirtualBox, but there as well kubectl cluster-info does not show info about kube-dns, same as above.

If I minikube ssh into the VM, the same happens in there as well:

nslookup kubernetes.default.svc.cluster.local 10.0.0.1
Server:    10.0.0.1
Address 1: 10.0.0.1

nslookup: can't resolve 'kubernetes.default.svc.cluster.local'

The same happens for just kubernetes. Shouldn't that work?
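
For what it's worth, service names such as kubernetes.default normally resolve only via the cluster DNS service, and only from inside a pod (whose /etc/resolv.conf points at the kube-dns service IP), not from the VM's own shell. A quick check from a throwaway pod, assuming kube-dns is healthy, might be:

# kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup kubernetes.default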

@kamusis commented Sep 1, 2017

I think this may be the expected result when we run "kubectl cluster-info" on minikube? Mine is the same.

$ kubectl cluster-info
Kubernetes master is running at https://192.168.99.100:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

If I run the command on Google Container Engine, which is a full Kubernetes cluster implementation, everything is displayed.

kamus@my-microservices-177316:~$ kubectl cluster-info
Kubernetes master is running at https://104.199.208.92
GLBCDefaultBackend is running at https://104.199.208.92/api/v1/namespaces/kube-system/services/default-http-backend/proxy
Heapster is running at https://104.199.208.92/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://104.199.208.92/api/v1/namespaces/kube-system/services/kube-dns/proxy
kubernetes-dashboard is running at https://104.199.208.92/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

@ttiurani commented Sep 1, 2017

Thanks @kamusis. When you minikube ssh in, can you resolve the kubernetes domain name, or the full kubernetes.default.svc.cluster.local?

It's also possible that I don't understand how, where, or whether this name resolution should work with minikube.

Edit: This doesn't work either:

# kubectl -n kube-system exec -it kube-dns-910330662-1n64f -c kubedns -- nslookup kubernetes.default.svc.cluster.local localhost
Server:    127.0.0.1
Address 1: 127.0.0.1 localhost

nslookup: can't resolve 'kubernetes.default.svc.cluster.local': Name does not resolve

which I got from kubernetes/kubernetes#16836 (comment), where it does seem to work.

@kamusis commented Sep 4, 2017

@ttiurani I can't either.

$ minikube ssh
$ nslookup kubernetes.default.svc.cluster.local localhost
Server:    127.0.0.1
Address 1: 127.0.0.1 localhost

nslookup: can't resolve 'kubernetes.default.svc.cluster.local'

$ nslookup kubernetes.default.svc.cluster.local
Server:    10.0.2.3
Address 1: 10.0.2.3

nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
$ nslookup kubernetes.default
Server:    10.0.2.3
Address 1: 10.0.2.3

nslookup: can't resolve 'kubernetes.default'

@nakamorichi

Having the same issue as @ttiurani. I can't get the local image registry working with xhyve (haven't tried with VirtualBox) due to the DNS issue...

Has anyone solved this?

@nakamorichi commented Nov 17, 2017

Just found out why pulling the image failed. The default imagePullPolicy "Defaults to Always if :latest tag is specified, or IfNotPresent otherwise" (API ref). Always does not work if the image exists only locally, so one has to set imagePullPolicy in the pod definition to either Never or IfNotPresent when using local images with the :latest tag.
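
A minimal sketch of such a pod definition (hypothetical pod and image names; the imagePullPolicy line is the point here):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello-node
spec:
  containers:
  - name: hello-node
    image: hello-node:latest   # built locally against minikube's Docker daemon
    imagePullPolicy: Never     # never pull; use the local image or fail
EOF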

@shashanktomar

If someone is still facing this issue on virtualbox, it's a DNS resolution issue. It has nothing to do with minikube and can be resolved by doing the following (a consolidated shell sketch follows below):

  • minikube stop
  • VBoxManage modifyvm "minikube" --natdnshostresolver1 on
  • minikube start

The full details are mentioned here
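
A consolidated sketch of those steps, with a verification command added as an assumption (the nslookup form follows minikube ssh usage earlier in this thread):

minikube stop
VBoxManage modifyvm "minikube" --natdnshostresolver1 on
minikube start
minikube ssh nslookup gcr.io   # should now resolve via the host's resolver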

@daxhuiberts

I had the exact same issue with the xhyve driver, where the docker build would fail and the nslookup within the minikube ssh session would fail as well. Starting minikube with the virtualbox driver works successfully.

I also had dnsmasq running locally to configure anything under .dev to resolve to localhost. After disabling and removing dnsmasq locally and starting minikube with xhyve again, everything works successfully.

Maybe this info helps?
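
A sketch of the removal steps, assuming dnsmasq was installed and started via Homebrew (an assumption; adjust to however dnsmasq was set up):

sudo brew services stop dnsmasq
brew uninstall dnsmasq
minikube delete && minikube start --vm-driver=xhyve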

@takashisite

Exact same issue as @daxhuiberts. In my case, DNS failed when the hyperkit driver and dnsmasq were both running. It works normally after disabling dnsmasq and restarting minikube.

@gautamkpai

Faced the same issue with both hyperkit and xhyve drivers.

Looks like dnsmasq was refusing connections on the host IP address (in this case 192.168.64.1) assigned by minikube.

➜  ~ minikube ssh
$ docker search alpine
Error response from daemon: Get https://index.docker.io/v1/search?q=alpine&n=25: dial tcp: lookup index.docker.io on 192.168.64.1:53: read udp 192.168.64.30:33718->192.168.64.1:53: read: connection refused

Configuring dnsmasq to listen specifically on the minikube host IP address did the trick for me. Add the following two lines to your dnsmasq.conf:

➜  ~ vim /usr/local/etc/dnsmasq.conf
# If you want dnsmasq to listen for DHCP and DNS requests on
#  by address (remember to include 127.0.0.1 if you use this.)
# Repeat the line for more than one interface.
#listen-address=
listen-address=192.168.64.1
listen-address=127.0.0.1

You will need to restart your MacBook after updating the dnsmasq config.
(Strangely enough, restarting only the minikube VM and dnsmasq did not work.)
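
To double-check from inside the VM, the search that failed above should now succeed (an assumed verification step, not part of the original comment):

➜  ~ minikube ssh
$ docker search alpine   # should list images instead of failing with 'connection refused'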

@vhosakot commented Jan 31, 2018

I saw the same error too when I ran docker build -t hello-node:v1 . in the tutorial https://kubernetes.io/docs/tutorials/stateless-application/hello-minikube/#create-a-docker-container-image on Mac OS X Sierra (10.12.6), Docker version 17.12.0-ce, minikube version v0.25.0, xhyve version 0.2.0 and kubernetes version v1.9.0:

$ docker build -t hello-node:v1 .
Sending build context to Docker daemon  3.072kB
Step 1/4 : FROM node:6.9.2
Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.3:37095->192.168.64.1:53: read: connection refused

The steps mentioned by @kamusis in #1442 (comment) worked for me: switching to virtualbox fixed this issue.

I stopped minikube, deleted it, and started it without --vm-driver=xhyve (minikube uses the virtualbox driver by default), and then docker build -t hello-node:v1 . worked fine without errors:

minikube stop
eval $(minikube docker-env -u)
minikube delete

minikube start (without --vm-driver=xhyve)

eval $(minikube docker-env)
docker build -t hello-node:v1 .

$ docker images | grep hello
hello-node                                    v1                  9cc51aa82a30        39 seconds ago      655MB

I do see the kube-dns pod running:

$ kubectl get pod --all-namespaces
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE
kube-system   kube-addon-manager-minikube             1/1       Running   0          1m
kube-system   kube-dns-54cccfbdf8-vmvjm               3/3       Running   0          1m
kube-system   kubernetes-dashboard-77d8b98585-5mbcf   1/1       Running   0          1m
kube-system   storage-provisioner                     1/1       Running   0          1m

This issue does look like an xhyve issue not seen with virtualbox. My virtualbox version is 5.2.6r120293:

$ VBoxManage --version
5.2.6r120293

@jabley (Author) commented Feb 19, 2018

Uninstalling dnsmasq let me complete the tutorial using xhyve.

jabley added a commit to jabley/our-boxen that referenced this issue Feb 28, 2018
The .dev TLD is owned by Google and has been added to the HTTP Strict
Transport Security (HSTS) preload list.

See https://ma.ttias.be/chrome-force-dev-domains-https-via-preloaded-hsts/

I don't use the .dev thing anyway, or any dnsmasq configuration. I
suspect dnsmasq is causing kubernetes/minikube#1442
so I'm removing it.
@yellowred

Uninstalling dnsmasq works for me.

@nbering commented Mar 10, 2018

I had the exact same issues with the hyperkit driver, under macOS 10.13. I already had dnsmasq installed via Homebrew from a project I did a couple of years ago. @gautamkpai's solution worked for me.

Between having dnsmasq installed and the Little Snitch firewall, I wonder whether anyone else encounters these issues, or if it just happens to be this convergence of existing software on the host system.

Edit: After considering whether or not I actually needed dnsmasq, I opted to remove it. That worked fine, so both solutions seem to work.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 8, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jul 8, 2018
@k8s-ci-robot added the lifecycle/rotten label Jul 8, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
