Description
I cannot get a dual stack configuration to work on k3s with ServiceLB (Klipper). Here is the values file I used:
DNS1: 192.168.2.254
adminPassword: admin
dualStack:
  enabled: true
persistentVolumeClaim:
  enabled: true
  size: 100Mi
resources:
  limits:
    cpu: 200m
    memory: 256Mi
  requests:
    cpu: 100m
    memory: 128Mi
serviceDns:
  externalTrafficPolicy: Cluster
  type: LoadBalancer
serviceWeb:
  externalTrafficPolicy: Cluster
  http:
    port: "2080"
  https:
    port: "2443"
  type: LoadBalancer
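For anyone reproducing this: I apply the values with a plain Helm install, roughly as below. The repo and chart names are my assumption based on the values keys (they look like the mojo2600 pihole-kubernetes chart); adjust to your setup.
$ helm repo add mojo2600 https://mojo2600.github.io/pihole-kubernetes/
$ helm upgrade --install pihole mojo2600/pihole \
    --namespace pihole --create-namespace \
    --values values.yaml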
This apparently creates two services per LoadBalancer, one for IPv4 and one for IPv6.
$ sudo kubectl get svc -n pihole
NAME                  TYPE           CLUSTER-IP           EXTERNAL-IP                                                PORT(S)                         AGE
pihole-dhcp           NodePort       10.43.225.145        <none>                                                     67:31286/UDP                    14m
pihole-dns-tcp        LoadBalancer   10.43.125.101        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242    53:30891/TCP                    14m
pihole-dns-tcp-ipv6   LoadBalancer   2001:cafe:43::cde8   <pending>                                                  53:30509/TCP                    14m
pihole-dns-udp        LoadBalancer   10.43.227.133        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242    53:30797/UDP                    14m
pihole-dns-udp-ipv6   LoadBalancer   2001:cafe:43::603a   <pending>                                                  53:31366/UDP                    14m
pihole-web            LoadBalancer   10.43.138.225        192.168.2.235,192.168.2.240,192.168.2.241,192.168.2.242    2080:31393/TCP,2443:31467/TCP   14m
pihole-web-ipv6       LoadBalancer   2001:cafe:43::661c   <pending>                                                  2080:31095/TCP,2443:32206/TCP   14m
As you can see above, only one service of each pair gets an external IP address. In this case it is always the IPv4 one, though I think I have occasionally seen the IPv6 service get the external IP instead.
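To confirm that these really are two single-stack services rather than one dual-stack service, the standard ipFamilies and ipFamilyPolicy fields on the Service objects can be inspected (output omitted; I would expect SingleStack with one family on each):
$ sudo kubectl get svc -n pihole pihole-web pihole-web-ipv6 \
    -o custom-columns=NAME:.metadata.name,FAMILIES:.spec.ipFamilies,POLICY:.spec.ipFamilyPolicy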
Klipper creates corresponding pods for these services.
$ sudo kubectl get pod -n kube-system | grep pihole
svclb-pihole-dns-tcp-eb679da4-55l7g        1/1   Running   0     18m
svclb-pihole-dns-tcp-eb679da4-5phpx        1/1   Running   0     18m
svclb-pihole-dns-tcp-eb679da4-clhp5        1/1   Running   0     18m
svclb-pihole-dns-tcp-eb679da4-fn7p7        1/1   Running   0     18m
svclb-pihole-dns-tcp-ipv6-1930b28c-46tfl   0/1   Pending   0     18m
svclb-pihole-dns-tcp-ipv6-1930b28c-dx456   0/1   Pending   0     18m
svclb-pihole-dns-tcp-ipv6-1930b28c-gtjtf   0/1   Pending   0     18m
svclb-pihole-dns-tcp-ipv6-1930b28c-r644b   0/1   Pending   0     18m
svclb-pihole-dns-udp-def81466-cc7c4        1/1   Running   0     18m
svclb-pihole-dns-udp-def81466-hbktg        1/1   Running   0     18m
svclb-pihole-dns-udp-def81466-mx5dr        1/1   Running   0     18m
svclb-pihole-dns-udp-def81466-sff7z        1/1   Running   0     18m
svclb-pihole-dns-udp-ipv6-7586bc32-5gl2q   0/1   Pending   0     18m
svclb-pihole-dns-udp-ipv6-7586bc32-cb7wn   0/1   Pending   0     18m
svclb-pihole-dns-udp-ipv6-7586bc32-dqm9l   0/1   Pending   0     18m
svclb-pihole-dns-udp-ipv6-7586bc32-qdq4v   0/1   Pending   0     18m
svclb-pihole-web-38f1c6a9-bxzkg            2/2   Running   0     18m
svclb-pihole-web-38f1c6a9-hn9tt            2/2   Running   0     18m
svclb-pihole-web-38f1c6a9-q26hp            2/2   Running   0     18m
svclb-pihole-web-38f1c6a9-w4h9q            2/2   Running   0     18m
svclb-pihole-web-ipv6-9b288549-4dkgq       0/2   Pending   0     18m
svclb-pihole-web-ipv6-9b288549-cs7zz       0/2   Pending   0     18m
svclb-pihole-web-ipv6-9b288549-jktgp       0/2   Pending   0     18m
svclb-pihole-web-ipv6-9b288549-rj4d8       0/2   Pending   0     18m
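As far as I can tell, each svclb pod claims the service ports as hostPorts on its node, which would explain why the IPv6 set can never schedule once the IPv4 set holds those ports. The claimed ports can be checked with jsonpath (pod name taken from the listing above; I would expect both the IPv4 and IPv6 web pods to request hostPorts 2080 and 2443):
$ sudo kubectl get pod -n kube-system svclb-pihole-web-38f1c6a9-bxzkg \
    -o jsonpath='{range .spec.containers[*]}{.name}: {.ports[*].hostPort}{"\n"}{end}'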
Drilling further into one of these pending pods, I can see the problem.
$ sudo kubectl describe pod -n kube-system svclb-pihole-web-ipv6-9b288549-4dkgq
[snip]
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ---                  ----               -------
  Warning  FailedScheduling  20m                  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.
  Warning  FailedScheduling  4m21s (x3 over 14m)  default-scheduler  0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports. preemption: 0/4 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 3 Preemption is not helpful for scheduling.
I think if we could have a single service configured for dual stack, instead of one service per IP family, the pods would not compete for the same host ports and we would not have this problem.
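For example, the standard Kubernetes dual-stack Service fields would let one service request both families. A rough sketch of what such a rendered service could look like (the selector and targetPorts below are placeholders, not what the chart actually renders):
apiVersion: v1
kind: Service
metadata:
  name: pihole-web
  namespace: pihole
spec:
  type: LoadBalancer
  externalTrafficPolicy: Cluster
  ipFamilyPolicy: PreferDualStack  # or RequireDualStack to fail loudly if one family is unavailable
  ipFamilies:
    - IPv4
    - IPv6
  selector:
    app: pihole                    # placeholder; the chart's real selector applies
  ports:
    - name: http
      port: 2080
      targetPort: 80               # placeholder target ports
      protocol: TCP
    - name: https
      port: 2443
      targetPort: 443
      protocol: TCP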