
rke2 requires hostnames to be resolvable per dns #979

Closed
Sicaine opened this issue May 12, 2021 · 1 comment

Sicaine commented May 12, 2021

Describe the bug:
When installing rke2 in an environment where the hostname of a node (take a worker node called "work" as an example) is not resolvable by a controller node, kubectl logs and kubectl port-forward don't work.

Basically, the controller node tries to resolve "work" into an IP address, which fails if the environment (network) is not configured to resolve hostnames. This hit me when configuring rke2 on Hetzner Cloud.

This could be treated as a baseline requirement, but it is not an issue when installing Kubernetes with kubeadm.

There was a helm issue helm/helm#1455, which itself references a Kubernetes bug, kubernetes/kubernetes#22770 "Unable to resolve hostname using kubectl logs".
That issue was fixed in kubernetes/kubernetes#33718, which makes me believe that rke2 is doing something different/broken in this regard.
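
As a quick check of this failure mode (a minimal sketch; "work" stands in for your actual worker hostname), verify from the controller node whether the name resolves and which addresses the node object advertises:

$ getent hosts work        # no output and a non-zero exit code means the name is not resolvable
$ kubectl get node work -o jsonpath='{.status.addresses}'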

Potential solutions:

  • Configure a local dnsmasq and tell it the IP-to-hostname mappings of your nodes (needs manual setup/scripting); see the sketch after this list.
  • Configure/fix local DNS in your network, potentially by operating or fixing a DHCP server and a DNS server.
  • Change kubelet-preferred-address-types to "InternalIP,Hostname,ExternalIP" on the API server (this is available as an extra arg). In config.yaml:

    kube-apiserver-arg:
      - kubelet-preferred-address-types=InternalIP,Hostname,ExternalIP
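
For the first option, a minimal dnsmasq sketch (the node names and IPs here are hypothetical placeholders): pin each node's hostname with a host-record and point the host's resolver at dnsmasq.

# /etc/dnsmasq.d/rke2-nodes.conf
host-record=controller,10.0.0.1
host-record=work,10.0.0.2

# then resolve through dnsmasq, e.g. via "nameserver 127.0.0.1" in /etc/resolv.conf
$ nslookup work 127.0.0.1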

Original bug: #637. My original bug: #975.

@ShylajaDevadiga (Contributor) commented

Validated on rke2 versions v1.20.7-rc1+rke2r1, v1.19.11-rc1+rke2r1, and v1.18.19-rc1+rke2r1, on a two-node cluster.

$ kubectl get nodes
NAME               STATUS   ROLES                       AGE   VERSION
ip-172-31-14-102   Ready    <none>                      51m   v1.20.7-rc1+rke2r1
ip-172-31-4-235    Ready    control-plane,etcd,master   85m   v1.20.7-rc1+rke2r1

$ nslookup ip-172-31-14-102
Server:		127.0.0.53
Address:	127.0.0.53#53

Non-authoritative answer:
Name:	ip-172-31-14-102.us-east-2.compute.internal
Address: 172.31.14.102

The kube-apiserver args include the required flag:

$ ps aux |grep api
root       30663  6.5 13.7 1304360 552148 ?      Ssl  19:25   2:24 kube-apiserver --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
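
An equivalent check (assuming the default rke2 layout, where kube-apiserver runs as a static pod) is to grep the generated manifest:

$ grep kubelet-preferred-address-types /var/lib/rancher/rke2/agent/pod-manifests/kube-apiserver.yaml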

Logs are retrieved for pods running on the agent node:

ubuntu@ip-172-31-4-235:~$ kubectl get pods -A -o wide |grep mongo
default       mongo-75f59d57f4-5z82x                               1/1     Running     0          41m   10.42.1.2       ip-172-31-14-102   <none>           <none>
ubuntu@ip-172-31-4-235:~$ kubectl logs mongo-75f59d57f4-5z82x
2021-05-21T20:10:42.819+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
...

Port-forward works as expected:

kubectl port-forward mongo-75f59d57f4-5z82x 28015:27017
Forwarding from 127.0.0.1:28015 -> 27017
Forwarding from [::1]:28015 -> 27017
...
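
The ping below is issued from a mongo shell attached through the forwarded local port (a sketch, using the 28015 local port chosen above):

$ mongo --port 28015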

> db.runCommand( { ping: 1 } )
{ ok: 1 }
