tl;dr
I deployed pihole from the Helm chart. The service configuration looks like this:
# create a kubernetes service and expose
# port 53 outside of cluster on the local network
serviceDns:
  type: LoadBalancer
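The describe output below shows the external traffic policy ended up as Local even though I only set the type, so I assume that's the chart's default; if it ever needs setting explicitly, the block would presumably look like this (key name assumed from the chart's values.yaml, not verified):

serviceDns:
  type: LoadBalancer
  # assumed key -- check the chart's values.yaml before relying on it
  externalTrafficPolicy: Local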
Everything went fine. The service was also created correctly with type LoadBalancer and the external traffic policy set to Local.
I updated the DNS settings in my router's DHCP (UDM SE, but I don't think it matters?) to point at two nodes of my k3s cluster. Name resolution works correctly, but all I can see in the pihole Query Log is the internal k8s IP of the ServiceLB pod on the node I use for DNS.
Is this something I can fix while sticking to ServiceLB? It doesn't seem to have a ton of configuration options. Or should I just install MetalLB instead?
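For reference, my understanding is that switching would mean starting k3s with --disable servicelb and then, after installing MetalLB, giving it an address pool plus an L2 advertisement along these lines (pool names and the address range below are made up for illustration):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: pihole-pool              # hypothetical name
  namespace: metallb-system
spec:
  addresses:
    - 10.10.93.200-10.10.93.210  # hypothetical range on the local network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: pihole-l2                # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
    - pihole-pool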
commands
kubectl describe service pihole-dns-udp -n pihole
Name:                     pihole-dns-udp
Namespace:                pihole
Labels:                   app=pihole
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=pihole
                          chart=pihole-2.27.0
                          heritage=Helm
                          release=pihole
Annotations:              meta.helm.sh/release-name: pihole
                          meta.helm.sh/release-namespace: pihole
Selector:                 app=pihole,release=pihole
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.43.229.111
IPs:                      10.43.229.111
LoadBalancer Ingress:     10.10.93.195
Port:                     dns-udp  53/UDP
TargetPort:               dns-udp/UDP
NodePort:                 dns-udp  32230/UDP
Endpoints:                10.42.3.38:53
Session Affinity:         None
External Traffic Policy:  Local
HealthCheck NodePort:     30718
Events:                   <none>
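For context, the endpoint above (10.42.3.38) is the pihole pod, while the svclb pod described below runs on azathoth with IP 10.42.0.162, which is the kind of internal address showing up in the Query Log. To see which nodes the pihole pod and each svclb DaemonSet pod land on (the label is the one from the pod description below):

kubectl get pods -n pihole -o wide
kubectl get pods -n kube-system -l svccontroller.k3s.cattle.io/svcname=pihole-dns-udp -o wide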
kubectl describe pod svclb-pihole-dns-udp-69336a47-rpggz -n kube-system
Name:                 svclb-pihole-dns-udp-69336a47-rpggz
Namespace:            kube-system
Priority:             2000001000
Priority Class Name:  system-node-critical
Service Account:      svclb
Node:                 azathoth/10.10.72.1
Start Time:           Fri, 10 Jan 2025 13:44:32 -0800
Labels:               app=svclb-pihole-dns-udp-69336a47
                      controller-revision-hash=7d5757b9ff
                      pod-template-generation=1
                      svccontroller.k3s.cattle.io/svcname=pihole-dns-udp
                      svccontroller.k3s.cattle.io/svcnamespace=pihole
Annotations:          <none>
Status:               Running
IP:                   10.42.0.162
IPs:
  IP:  10.42.0.162
Controlled By:  DaemonSet/svclb-pihole-dns-udp-69336a47
Containers:
  lb-udp-53:
    Container ID:   containerd://d6efc71ba395221f48a9f8297fdff45c505aa10c6189f3c1ce837a455be76f22
    Image:          rancher/klipper-lb:v0.4.9
    Image ID:       docker.io/rancher/klipper-lb@sha256:dd380f5d89a52f2a07853ff17a6048f805c1f8113b50578f3efc3efb9bcf670a
    Port:           53/UDP
    Host Port:      53/UDP
    State:          Running
      Started:      Tue, 14 Jan 2025 18:39:38 -0800
    Last State:     Terminated
      Reason:       Unknown
      Exit Code:    255
      Started:      Fri, 10 Jan 2025 13:44:34 -0800
      Finished:     Tue, 14 Jan 2025 18:36:29 -0800
    Ready:          True
    Restart Count:  1
    Environment:
      SRC_PORT:    53
      SRC_RANGES:  0.0.0.0/0
      DEST_PROTO:  UDP
      DEST_PORT:   32230
      DEST_IPS:     (v1:status.hostIPs)
    Mounts:         <none>
Conditions:
  Type                       Status
  PodReadyToStartContainers  True
  Initialized                True
  Ready                      True
  ContainersReady            True
  PodScheduled               True
Volumes:          <none>
QoS Class:        BestEffort
Node-Selectors:   <none>
Tolerations:      CriticalAddonsOnly op=Exists
                  node-role.kubernetes.io/control-plane:NoSchedule op=Exists
                  node-role.kubernetes.io/master:NoSchedule op=Exists
                  node.kubernetes.io/disk-pressure:NoSchedule op=Exists
                  node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                  node.kubernetes.io/not-ready:NoExecute op=Exists
                  node.kubernetes.io/pid-pressure:NoSchedule op=Exists
                  node.kubernetes.io/unreachable:NoExecute op=Exists
                  node.kubernetes.io/unschedulable:NoSchedule op=Exists
Events:           <none>
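For what it's worth, my reading of the SRC_PORT / DEST_PORT / DEST_IPS environment above is that the klipper-lb container forwards host port 53 to the service's NodePort (32230) on the node IP and masquerades the traffic on the way, which would explain why pihole logs the svclb pod's address rather than the LAN client's. Roughly (an approximation, not the actual rules klipper-lb installs):

# approximate illustration only -- not copied from klipper-lb's entry script
# inside the svclb pod: forward host port 53 to the NodePort on the node's IP...
iptables -t nat -A PREROUTING  -p udp --dport 53 -j DNAT --to-destination 10.10.72.1:32230
# ...and masquerade, so pihole sees the svclb pod's address (10.42.0.162)
# instead of the LAN client's address
iptables -t nat -A POSTROUTING -d 10.10.72.1 -p udp --dport 32230 -j MASQUERADE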