Service DNS not available #12022

@Vishva066

Description

What happened?

I reset the cluster using the reset playbook and then tried to create it again. Cluster creation succeeded, but service DNS communication is not working.

When I create the cluster with plain kubeadm (without Kubespray), DNS works fine.

These are the errors I got from nodelocaldns

[ERROR] plugin/errors: 2 5740068231624001434.5564057510311608724.in-addr.arpa. HINFO: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 6533522784009999285.4997061034958439446.cluster.local. HINFO: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 8792848527012497519.4094129199671639269.ip6.arpa. HINFO: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 6533522784009999285.4997061034958439446.cluster.local. HINFO: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 5740068231624001434.5564057510311608724.in-addr.arpa. HINFO: dial tcp 10.233.0.3:53: i/o timeout

[ERROR] plugin/errors: 2 raw.githubusercontent.com.besu.svc.cluster.local. A: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 grafana.com.besu.svc.cluster.local. A: dial tcp 10.233.0.3:53: i/o timeout
[ERROR] plugin/errors: 2 grafana.com.besu.svc.cluster.local. AAAA: dial tcp 10.233.0.3:53: i/o timeout

Here nodelocaldns cannot reach CoreDNS (10.233.0.3).

I tried to connect to CoreDNS manually. Only the pod/node where CoreDNS is running can connect; pods on the other nodes cannot.
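One way to confirm this per node (a sketch, assuming Kubespray's default service CIDR where the CoreDNS service IP is 10.233.0.3; `NODE_NAME` is a placeholder) is to run a throwaway pod pinned to each node and query CoreDNS directly:

```shell
# Run a temporary pod on a specific node (replace NODE_NAME) and query
# the CoreDNS service IP directly.
kubectl run dns-test --rm -it --restart=Never \
  --image=busybox:1.36 \
  --overrides='{"spec":{"nodeName":"NODE_NAME"}}' \
  -- nslookup kubernetes.default.svc.cluster.local 10.233.0.3

# If this times out on every node except the one hosting the CoreDNS pod,
# the problem is cross-node pod networking, not DNS itself.
```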

What did you expect to happen?

I expect services to be reachable via DNS names such as kubernetes.default.svc.cluster.local. I prefer using these names, but they do not resolve.

How can we reproduce it (as minimally and precisely as possible)?

Create a cluster with the default configuration, reset it with the reset playbook, and create it again. After that, the issue appears.
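For reference, the reset-and-recreate sequence looks like this (paths assume a standard Kubespray checkout and the inventory shown below):

```shell
# Tear down the existing cluster (reset.yml asks for confirmation)
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root \
  --private-key=~/.ssh/id_rsa reset.yml

# Recreate the cluster with the same inventory
ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root \
  --private-key=~/.ssh/id_rsa cluster.yml
```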

OS

Ubuntu 22.04.
On-premise deployment.
Non-root user with sudo access.

Version of Ansible

ansible [core 2.16.14]

Version of Python

Python 3.10.12

Version of Kubespray (commit)

7c61189

Network plugin used

calico

Full inventory with variables

ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root -v --private-key=~/.ssh/id_rsa cluster.yml

Command used to invoke ansible

ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root -v --private-key=~/.ssh/id_rsa cluster.yml

Output of ansible run

The run completed successfully.

Anything else we need to know

I also tried disabling nodelocaldns, but faced the same problem.
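Since only pods co-located with CoreDNS can reach it, this smells like a Calico dataplane problem rather than a DNS one. A few checks that may help narrow it down (a sketch; assumes `calicoctl` is installed on the nodes, and that VXLAN encapsulation is in use, which uses UDP port 4789):

```shell
# Check Calico node-to-node status on each node (BGP peering may show
# as disabled if the cluster runs in pure VXLAN mode)
sudo calicoctl node status

# With VXLAN encapsulation, make sure UDP 4789 is not blocked between nodes
sudo iptables -L -n | grep 4789

# Inspect calico-node logs for dataplane errors
kubectl -n kube-system logs -l k8s-app=calico-node --tail=50
```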

Metadata

Assignees

No one assigned

Labels

kind/bug — Categorizes issue or PR as related to a bug.