
[BUG] WSL2: host.k3d.internal warning & failed kubectl connection at 0.0.0.0 #858

Closed
keyiis opened this issue Nov 15, 2021 · 20 comments · Fixed by #872


keyiis commented Nov 15, 2021

What did you do

  • How was the cluster created?
root@a15b1a032249:/etc# k3d cluster create
INFO[0000] Prep: Network
INFO[0000] Created network 'k3d-k3s-default'
INFO[0000] Created volume 'k3d-k3s-default-images'
INFO[0000] Starting new tools node...
INFO[0001] Creating node 'k3d-k3s-default-server-0'
INFO[0001] Starting Node 'k3d-k3s-default-tools'
INFO[0001] Creating LoadBalancer 'k3d-k3s-default-serverlb'
INFO[0001] Using the k3d-tools node to gather environment information
WARN[0001] failed to resolve 'host.docker.internal' from inside the k3d-tools node: Failed to read address for 'host.docker.internal' from command output
INFO[0001] HostIP: using network gateway...
INFO[0001] Starting cluster 'k3s-default'
INFO[0001] Starting servers...
INFO[0001] Starting Node 'k3d-k3s-default-server-0'
INFO[0006] Starting agents...
INFO[0006] Starting helpers...
INFO[0006] Starting Node 'k3d-k3s-default-serverlb'
INFO[0012] Injecting '172.25.0.1 host.k3d.internal' into /etc/hosts of all nodes...
INFO[0012] Injecting records for host.k3d.internal and for 2 network members into CoreDNS configmap...
INFO[0012] Cluster 'k3s-default' created successfully!
INFO[0012] You can now use it like this:
kubectl cluster-info
  • What did you do afterwards?
root@a15b1a032249:/etc# kubectl cluster-info

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 0.0.0.0:35545 was refused - did you specify the right host or port?

root@a15b1a032249:/etc# k3d cluster list
NAME          SERVERS   AGENTS   LOADBALANCER
k3s-default   1/1       0/0      true

What did you expect to happen

kubectl cluster-info should work.

Screenshots or terminal output

Which OS & Architecture

My OS is Windows 10 and Docker runs in WSL2. I install k3d in DinD style: I run the k3d container with docker run ... -v /var/run/docker.sock:/var/run/docker.sock ..., so k3d and the cluster nodes run at the same level (as siblings on the host Docker daemon).
(screenshot of the setup)

Which version of k3d

k3d version v5.1.0
k3s version v1.21.5-k3s2 (default)

Which version of docker

# docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  app: Docker App (Docker Inc., v0.9.1-beta3)
  buildx: Build with BuildKit (Docker Inc., v0.6.3-docker)
  scan: Docker Scan (Docker Inc., v0.9.0)

Server:
 Containers: 5
  Running: 3
  Paused: 0
  Stopped: 2
 Images: 13
 Server Version: 20.10.8
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
  userxattr: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 1
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: e25210fe30a0a703442421b0f60afac609f950a3
 runc version: v1.0.1-0-g4144b63
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 5.4.72-microsoft-standard-WSL2
 Operating System: Docker Desktop
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 12.28GiB
 Name: docker-desktop
 ID: SGGV:JZ4Z:7COP:ZETH:HBQL:ULGE:AGQJ:C3DU:HXA7:7DO2:C337:MZHX
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No blkio throttle.read_bps_device support
WARNING: No blkio throttle.write_bps_device support
WARNING: No blkio throttle.read_iops_device support
WARNING: No blkio throttle.write_iops_device support
keyiis added the bug label Nov 15, 2021
@megan-carey

@jadler-goodrx looks like other people are seeing this as well.


kslacroix commented Nov 18, 2021

I'm also seeing this. @keyiis, does host.k3d.internal still resolve to your host? Mine now resolves to some random IP; I wonder if this is related.


keyiis commented Nov 23, 2021

@kslacroix After I changed the kubeconfig host to host.docker.internal, the problem was solved. It would be better if k3d cluster create could detect the host automatically.
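For anyone wanting to script that change: a minimal sketch, assuming the default ~/.kube/config path and a server line of the form server: https://0.0.0.0:&lt;port&gt;. It is demonstrated on a sample line here; the same sed with -i applies it to the real file.

```shell
# Hedged sketch: rewrite the API server address that k3d put into the
# kubeconfig so kubectl talks to host.docker.internal instead of 0.0.0.0.
# Demonstrated on a sample line; run the same sed with -i on
# ~/.kube/config (path assumed) to edit the file in place.
printf 'server: https://0.0.0.0:35545\n' \
  | sed 's|https://0\.0\.0\.0:|https://host.docker.internal:|'
# prints: server: https://host.docker.internal:35545
```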

@kslacroix

> After I changed the kubeconfig host to host.docker.internal, the problem was solved. It would be better if k3d cluster create could detect the host automatically.

What do you mean by kubeconfig host?


keyiis commented Nov 24, 2021

@kslacroix
(screenshot of the kubeconfig server field)

@kslacroix

@keyiis Ahh yeah this makes sense, thank you!

For reference, this is where it fails for me...

// GetHostIP returns the routable IP address to be able to access services running on the host system from inside the cluster.
// This depends on the Operating System and the chosen Runtime.
func GetHostIP(ctx context.Context, runtime runtimes.Runtime, cluster *k3d.Cluster) (net.IP, error) {

	rtimeInfo, err := runtime.Info()
	if err != nil {
		return nil, err
	}

	l.Log().Tracef("GOOS: %s / Runtime OS: %s (%s)", goruntime.GOOS, rtimeInfo.OSType, rtimeInfo.OS)

	isDockerDesktop := func(os string) bool {
		return strings.ToLower(os) == "docker desktop"
	}

	// Docker Runtime
	if runtime == runtimes.Docker {

		// Docker (for Desktop) on MacOS or Windows
		if isDockerDesktop(rtimeInfo.OS) {

			toolsNode, err := EnsureToolsNode(ctx, runtime, cluster)
			if err != nil {
				return nil, fmt.Errorf("failed to ensure that k3d-tools node is running to get host IP :%w", err)
			}

			ip, err := resolveHostnameFromInside(ctx, runtime, toolsNode, "host.docker.internal", ResolveHostCmdGetEnt)
			if err == nil {
				return ip, nil
			}

			l.Log().Warnf("failed to resolve 'host.docker.internal' from inside the k3d-tools node: %v", err)

		}

		l.Log().Infof("HostIP: using network gateway...")
		ip, err := runtime.GetHostIP(ctx, cluster.Network.Name)
		if err != nil {
			return nil, fmt.Errorf("runtime failed to get host IP: %w", err)
		}

		return ip, nil

	}

	// Catch all other runtime selections
	return nil, fmt.Errorf("GetHostIP only implemented for the docker runtime")

}

The line ip, err := resolveHostnameFromInside(ctx, runtime, toolsNode, "host.docker.internal", ResolveHostCmdGetEnt) returns an error, so it falls back to running ip, err := runtime.GetHostIP(ctx, cluster.Network.Name), which ends up setting host.k3d.internal to the IP address of the cluster itself rather than the host.
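That fallback can be mimicked in shell as a rough sketch (a hypothetical helper, not k3d's actual code): try a getent lookup, the way ResolveHostCmdGetEnt does inside the tools node, and fall back to a supplied gateway IP when the name does not resolve.

```shell
# Hypothetical shell analogue of GetHostIP's fallback logic: resolve a
# hostname with getent; if that fails, return the given network-gateway
# IP, just as k3d falls back to runtime.GetHostIP.
resolve_host_ip() {
  name=$1 gateway_ip=$2
  ip=$(getent ahostsv4 "$name" 2>/dev/null | awk 'NR==1 {print $1}')
  if [ -n "$ip" ]; then
    echo "$ip"          # lookup succeeded: use the resolved address
  else
    echo "$gateway_ip"  # lookup failed: fall back to the gateway
  fi
}

resolve_host_ip localhost 172.25.0.1             # prints 127.0.0.1
resolve_host_ip no-such-host.invalid 172.25.0.1  # prints 172.25.0.1
```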

@kslacroix

The way I fixed the issue on my end (without modifying kubeconfig) was to modify the coredns config-map.
I ran this command first

docker run -it --rm --privileged --pid=host justincormack/nsenter1 /bin/sh -c "ping -c 1 host.docker.internal | grep -m1 -o '[0-9]\+\.[0-9]\+\.[0-9]\+\.[0-9]\+'"

This returns the host IP as seen from the Docker VM; for me it was 192.168.65.2. I then set the host.k3d.internal entry of the coredns config-map to this value.

> kubectl describe cm coredns -n kube-system
[...]
NodeHosts:
----
192.168.65.2 host.k3d.internal         <----- This line used to be 172.21.0.1
172.21.0.2 k3d-registry.dev.lotusmedical.ca
172.21.0.5 k3d-local-serverlb
172.21.0.3 k3d-local-server-0
172.21.0.4 k3d-local-agent-0
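That edit can also be scripted. Here is a hedged sketch (untested against a live cluster) of the sed rewrite, demonstrated on a sample NodeHosts block; in practice you would pipe kubectl -n kube-system get cm coredns -o yaml through the same sed and back into kubectl apply -f -.

```shell
# Sketch: replace the IP in front of host.k3d.internal in a NodeHosts
# block. HOST_IP is the address found with the nsenter/ping command above.
HOST_IP=192.168.65.2
printf '172.21.0.1 host.k3d.internal\n172.21.0.3 k3d-local-server-0\n' \
  | sed "s|^[0-9.]* host\\.k3d\\.internal\$|${HOST_IP} host.k3d.internal|"
# prints:
# 192.168.65.2 host.k3d.internal
# 172.21.0.3 k3d-local-server-0
```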


heesuk-ahn commented Nov 26, 2021

Hmm, I'm having the same problem. It worked fine before. Once I edited the kubeconfig directly, I confirmed that it works, too.

k3d version
k3d version v5.1.0
k3s version v1.21.5-k3s2 (default)


swade1987 commented Nov 26, 2021

@heesuk-ahn what did you do to fix this?

When I try to use an image I have loaded into the cluster I get the following error


  Warning  Failed     7s (x2 over 8s)  kubelet            Error: failed to create containerd container: error unpacking image: failed to resolve rootfs: content digest sha256:d7d8becafbcc6f3a83f34deca59ad954f684ed257e043e49d7f29c0baa4e6a6c: not found


kulukyo commented Nov 27, 2021

same issue:

WARN[0001] failed to resolve 'host.docker.internal' from inside the k3d-tools node: Failed to read address for 'host.docker.internal' from command output

@heesuk-ahn

https://github.com/rancher/k3d/blob/d78ef48932c1df263deb3203a05230467717ec2e/pkg/client/host.go#L131-L140

Up to version 5.0.1, the code above normally received the value host.docker.internal 192.168.65.2 and worked.

However, strangely, in later versions the for scanner.Scan() loop does not seem to work properly.

@heesuk-ahn

@swade1987

In my case, since k3d is executed inside a container, I changed the server host in kubeconfig from 0.0.0.0:port to host.docker.internal:port, and then kubectl worked normally.

iwilltry42 self-assigned this Nov 29, 2021
@iwilltry42 (Member)

Hi @keyiis , thanks for opening this issue and thanks to the others for providing more input on this!
Some quick overview on the topic:
Before creating the cluster (on Docker for Desktop systems), k3d spins up the k3d-tools container to gather some information about the runtime environment.
One piece here is host.docker.internal, which will be used to inject host.k3d.internal into /etc/hosts of the containers k3d creates and into the CoreDNS configmap, so that you can easily access the host system from within the cluster.
The warning that you see is purely informative: if the lookup of host.docker.internal in the tools node fails, k3d will use the IP of the Docker network gateway instead (see the info logs a few lines later).
In fact, the warning has now been removed from the code, as it seems to be confusing.

However, all the fuss about host.k3d.internal should be unrelated to your issue of not being able to access the cluster, since host.k3d.internal was never used for this, if I'm not wrong (as it doesn't resolve on the host system).

I think this issue actually covers two different problems, which I will investigate today 👍

@iwilltry42 (Member)

@keyiis (and others!) are you running your commands in the WSL2 terminal or in PowerShell?
The output of docker version would also help here 👍
I gave it a try on Windows 10 with Docker for Desktop (v20.10.8) using the WSL2 backend (k3d v5.1.0).
It worked perfectly from within the WSL2 terminal as well as from PowerShell without having to change anything in the kubeconfig 🤔

(screenshots: k3d v5.1.0 on DfD 20.10.8 with WSL2 backend, from the WSL2 terminal and from PowerShell)


iwilltry42 commented Nov 29, 2021

Another few things to note here...

Docker for Desktop has at least three possible variations:

  1. WSL2 backend, k3d/docker/kubectl run from WSL2 terminal (Linux)
  2. WSL2 backend, k3d/docker/kubectl run from PowerShell (Win)
  3. Hyper-V backend, k3d/docker/kubectl run from PowerShell (Win)

Testing with WSL2 backend:

  • WSL2 Terminal (Linux): host.docker.internal resolves to 192.168.122.57 (that's the IP of the Windows machine), pinging that IP does not work
  • PowerShell (Win): host.docker.internal resolves to 192.168.122.57, pinging that IP does work
  • In container (alpine:latest, in default bridge network): host.docker.internal resolves to 192.168.65.2 (that's a docker reserved IP), pinging that IP works (also confirmed, that it's actually connecting to the host by running a webserver there and checking the connections -> connection works in both cases: webserver running on Windows or inside the WSL2 environment)
  • In container (same setup + --add-host=host.foo.internal=host-gateway): host.foo.internal resolves to the same as above (and connection works)

Other cases to consider:

  • Using remote docker daemons via ssh/tcp/etc. (host.docker.internal as looked up on the host system may not work here e.g. for the kubeconfig connection string).

Potential solution: making use of the host-gateway special keyword instead

iwilltry42 changed the title from "[BUG] failed to resolve 'host.docker.internal' from inside the k3d-tools node" to "[BUG] WSL2: host.k3d.internal warning & failed kubectl connection at 0.0.0.0" Nov 29, 2021

heesuk-ahn commented Nov 29, 2021

@iwilltry42 hi,

My development environment installs k3d inside an Alpine container and connects to the Docker daemon of the host (macOS).

In this environment, if I change 0.0.0.0:port -> host.docker.internal:port in the kubeconfig, I can query the kube apiserver with kubectl.

And my docker version is as below.

❯ docker version
Client:
 Version:           20.10.11
 API version:       1.41
 Go version:        go1.16.10
 Git commit:        dea9396e184290f638ea873c76db7c80efd5a1d2
 Built:             Fri Nov 19 03:42:54 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.7
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       b0f5bc3
  Built:            Wed Jun  2 11:54:58 2021
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.6
  GitCommit:        d71fcd7d8303cbf684402823e425e9dd2e99285d
 runc:
  Version:          1.0.0-rc95
  GitCommit:        b9ee9c6314599f1b4a7f497e1f1f856fe433d3b7
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

@kslacroix

@iwilltry42 You're right, I misunderstood the issue at first. My problem is different from @keyiis's, especially since I'm on macOS. I was also seeing these logs

WARN[0001] failed to resolve 'host.docker.internal' from inside the k3d-tools node: Failed to read address for 'host.docker.internal' from command output
INFO[0001] HostIP: using network gateway...

which made me think we had the same issue. Should I open a new issue or will your fix address my issue also?

@iwilltry42 (Member)

@heesuk-ahn

> installing k3d inside the alpine container and connecting to the docker daemon of the host

Your docker client (CLI) and kubectl are also running inside that alpine container?
In that case, obviously 0.0.0.0 won't work, as the port is mapped to the host.
But it's also not easy to detect whether we can use host.docker.internal, as both client and server report linux, and in that combination host.docker.internal is usually not present.
What you could do is simply k3d cluster create --api-port host.docker.internal:<someport> upon creation.

@kslacroix

> Should I open a new issue or will your fix address my issue also?

It will be addressed by the linked PR 👍

@heesuk-ahn

@iwilltry42

thanks for your advice :)

I tried k3d cluster create --api-port host.docker.internal:<someport> but I got an error message

ERRO[0017] Failed Cluster Start: Failed to add one or more helper nodes: runtime failed to start node 'k3d-k3s-default-serverlb': docker failed to start container for node 'k3d-k3s-default-serverlb': Error response from daemon: Ports are not available: listen tcp 192.168.65.2:8080: bind: can't assign requested address 

How can I possibly solve this error? 🤔


iwilltry42 commented Nov 30, 2021

@heesuk-ahn ah yes, didn't think of that... host.docker.internal resolves to different IPs inside and outside of containers.
As you're running k3d inside a container, it resolves to some docker reserved IP, where you cannot map ports to 🤔

Instead of relying on the kubeconfig output by k3d upon k3d cluster create, you can generate it manually via k3d kubeconfig get mycluster | sed 's/0\.0\.0\.0/host.k3d.internal/' > ~/.kube/config.
This will overwrite your default kubeconfig, so you may do something like

k3d kubeconfig get mycluster | sed 's/0\.0\.0\.0/host.k3d.internal/' > /tmp/k3d-mycluster.yaml && \
KUBECONFIG=$KUBECONFIG:/tmp/k3d-mycluster.yaml kubectl config view --merge --flatten > /tmp/merged-kubeconfig && \
mv /tmp/merged-kubeconfig $HOME/.kube/config

Note: I did not test this, so better backup your kubeconfig first!
This merges the new kubeconfig with your existing default kubeconfig, so k3d can still remove the respective context upon cluster deletion.
