DualStack: Fail to deploy dualstack cluster, kube-proxy panics #86773

Closed
LubinLew opened this issue Jan 2, 2020 · 46 comments · Fixed by #91357
Labels
kind/bug priority/important-longterm sig/network

Comments

@LubinLew commented Jan 2, 2020

What happened:
I deployed a dual-stack cluster with a config file.
First, kube-controller-manager went into CrashLoopBackOff because a default option --node-cidr-mask-size=24 was added. I deleted it from /etc/kubernetes/manifests/kube-controller-manager.yaml; I think in dual-stack mode kube-controller-manager should ignore --node-cidr-mask-size.
Then kube-proxy went into CrashLoopBackOff:
[root@master ~]# kubectl logs -f kube-proxy-jpnl6 -n kube-system
I0102 09:57:44.553192 1 node.go:135] Successfully retrieved node IP: 172.18.130.251
I0102 09:57:44.553270 1 server_others.go:172] Using ipvs Proxier.
I0102 09:57:44.553287 1 server_others.go:174] creating dualStackProxier for ipvs.
W0102 09:57:44.555671 1 proxier.go:420] IPVS scheduler not specified, use rr by default
W0102 09:57:44.556213 1 proxier.go:420] IPVS scheduler not specified, use rr by default
W0102 09:57:44.556278 1 ipset.go:107] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W0102 09:57:44.556303 1 ipset.go:107] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I0102 09:57:44.556606 1 server.go:571] Version: v1.17.0
I0102 09:57:44.557622 1 config.go:313] Starting service config controller
I0102 09:57:44.557654 1 shared_informer.go:197] Waiting for caches to sync for service config
I0102 09:57:44.557717 1 config.go:131] Starting endpoints config controller
I0102 09:57:44.557753 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
W0102 09:57:44.560310 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
W0102 09:57:44.560337 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-dns with error failed to identify ipfamily for endpoints (no subsets)
W0102 09:57:44.560428 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
E0102 09:57:44.560646 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 29 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x1682120, 0x27f9a40)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x1682120, 0x27f9a40)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/proxy/ipvs.(*metaProxier).OnServiceAdd(0xc0003ba330, 0xc0001c3200)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/ipvs/meta_proxier.go:65 +0x2b
k8s.io/kubernetes/pkg/proxy/config.(*ServiceConfig).handleAddService(0xc0003352c0, 0x1869ac0, 0xc0001c3200)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/config/config.go:333 +0x82
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:198
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xf, 0xc00031a1c0, 0x0)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:658 +0x218
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc000594dd8, 0xc000557610, 0xf)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:292 +0x51
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:652 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00046b740)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000594f40, 0xdf8475800, 0x0, 0xc000686601, 0xc00009a240)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000478100)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:650 +0x9b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003be840, 0xc000428580)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x59
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x14be59b]

goroutine 29 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:55 +0x105
panic(0x1682120, 0x27f9a40)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/proxy/ipvs.(*metaProxier).OnServiceAdd(0xc0003ba330, 0xc0001c3200)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/ipvs/meta_proxier.go:65 +0x2b
k8s.io/kubernetes/pkg/proxy/config.(*ServiceConfig).handleAddService(0xc0003352c0, 0x1869ac0, 0xc0001c3200)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/config/config.go:333 +0x82
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:198
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1.1(0xf, 0xc00031a1c0, 0x0)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:658 +0x218
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.ExponentialBackoff(0x989680, 0x3ff0000000000000, 0x3fb999999999999a, 0x5, 0x0, 0xc000594dd8, 0xc000557610, 0xf)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:292 +0x51
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:652 +0x79
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil.func1(0xc00046b740)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:152 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000594f40, 0xdf8475800, 0x0, 0xc000686601, 0xc00009a240)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:153 +0xf8
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc000478100)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:650 +0x9b
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc0003be840, 0xc000428580)
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x59
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:69 +0x62

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
  • OS (e.g: cat /etc/os-release):
    CentOS Linux release 7.7.1908 (Core)
  • Kernel (e.g. uname -a):
    Linux master 3.10.0-1062.9.1.el7.x86_64 #1 SMP Fri Dec 6 15:49:49 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Network plugin and version (if this is a network-related bug):
  • kubeadm init config file:
    kubeadm-conf.txt
@LubinLew LubinLew added the kind/bug label Jan 2, 2020
@k8s-ci-robot k8s-ci-robot added the needs-sig label Jan 2, 2020
@neolit123 (Member) commented Jan 2, 2020

/workspace/anago-v1.17.0-rc.2.10+70132b0f130acc/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/ipvs/meta_proxier.go:65

you seem to be using a pre-release version of kube-proxy (v1.17-rc.2.10+70132b0f130acc). try v1.17.0.

if v1.17.0 also does not work, try using mode: iptables instead of IPVS.
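e.g. a minimal KubeProxyConfiguration sketch for that (same kubeproxy.config.k8s.io/v1alpha1 API used elsewhere in this thread; just an illustration):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# iptables mode as a fallback while debugging the IPVS dual-stack path
mode: iptables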

@kubernetes/sig-network-bugs

@k8s-ci-robot k8s-ci-robot added sig/network and removed needs-sig labels Jan 2, 2020
@athenabot commented Jan 2, 2020

/triage unresolved

Comment /remove-triage unresolved when the issue is assessed and confirmed.

🤖 I am a bot run by vllry. 👩‍🔬

@k8s-ci-robot k8s-ci-robot added the triage/unresolved label Jan 2, 2020
@aojea (Member) commented Jan 2, 2020

kube-proxy dual-stack support with iptables is still pending #82462

The --node-cidr-mask-size was fixed in #85609

It seems that the panic happens because it is trying to add an endpoint without an ipFamily:

W0102 09:57:44.560310 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
W0102 09:57:44.560337 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-dns with error failed to identify ipfamily for endpoints (no subsets)
W0102 09:57:44.560428 1 meta_proxier.go:106] failed to add endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
E0102 09:57:44.560646 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 29 [running]:
....
/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/proxy/ipvs.(*metaProxier).OnServiceAdd(0xc0003ba330, 0xc0001c3200)
/workspace/anago-v1.17.0-

@aramase @khenidak @uablrek does this ring a bell?
Why does the metaProxier panic if there is no IP family?

@uablrek (Contributor) commented Jan 3, 2020

The reason for the panic is not hard to see:

func (proxier *metaProxier) OnServiceAdd(service *v1.Service) {
	if *(service.Spec.IPFamily) == v1.IPv4Protocol {
		proxier.ipv4Proxier.OnServiceAdd(service)

There is no check for nil.

The reason why IPFamily is nil is less clear. I tried setting IPv6DualStack:false for the "master" K8s processes but keeping IPv6DualStack:true on kube-proxy, and I got exactly the panic described in this issue.

So I think the problem is cluster misconfiguration.

I am unsure if the panic is acceptable. The error indication could be better of course, but IMHO kube-proxy should not "help" the user in this case by making some assumption, IPv4 for instance. That would hide a serious misconfiguration.

@uablrek (Contributor) commented Jan 3, 2020

BTW the reverse, IPv6DualStack:true on the master processes but IPv6DualStack:false in kubelet and kube-proxy, should be OK IMO. That would allow a "migration" ipv4->dual-stack with a rolling upgrade. I have intended to test this but haven't had the time yet.

@uablrek (Contributor) commented Jan 3, 2020

I tested using a local build on the "master" branch;

# kubectl version --short
Client Version: v1.18.0-alpha.1.305+65ef5dcc513ccf
Server Version: v1.18.0-alpha.1.305+65ef5dcc513ccf

@LubinLew (Author) commented Jan 3, 2020

@uablrek
the config file is here; which field is wrong?
kubeadm-conf.txt

@uablrek (Contributor) commented Jan 3, 2020

Here is my config; kubeadm-config.yaml.txt

It works (for me at least) and a difference I see is how the featureGate is specified. I am not an expert on kubeadm so I can't say more exactly what the problem is.

@aojea (Member) commented Jan 3, 2020

@LubinLew it seems you are not configuring dual-stack CIDRs in these fields:

serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16

@uablrek indeed, a panic doesn't look good. I think it is possible to hit this scenario not only by misconfiguration; maybe if we want to migrate a cluster from single-stack to dual-stack?

@thockin @khenidak how do you think this scenario should be handled? #86773 (comment)

@uablrek (Contributor) commented Jan 3, 2020

@aojea

maybe if we want to migrate a cluster from single-stack to dual-stack?

Migrating ipv4->dual-stack can be reduced to enabling dual-stack on k8s >=v1.17.0. The upgrade of an ipv4 cluster to >=v1.17.0 must work, so that is a non-issue. Once you are on k8s >=v1.17.0, I think the best way is to first enable dual-stack on the master(s), updating CIDRs etc., and let the workers stay with IPv6DualStack:false, then reboot them with IPv6DualStack:true one by one.

Then the case is the reverse as commented above #86773 (comment).

But this has to be discussed some place else 😃

@LubinLew (Author) commented Jan 3, 2020

@LubinLew seems you are not configuring dual-stack cidrs on these fields

serviceSubnet: 10.96.0.0/12
podSubnet: 10.244.0.0/16

@uablrek indeed a panic doesn't look good . I think that is it possible to hit this scenario not only by misconfiguration, maybe if we want to migrate a cluster from single-stack to dual-stack?

@thockin @khenidak how do you think that should be handled this scenario #86773 (comment)

I tried to configure dual-stack cidrs on these fields, but kubeadm init failed.

@aojea (Member) commented Jan 3, 2020

@LubinLew you don't have to configure the gates in each component; kubeadm does that automatically since #80531.
This config works for me, setting dual-stack CIDRs on serviceSubnet and podSubnet:

  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  featureGates:
    IPv6DualStack: true
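
For illustration only, with the dual-stack CIDRs spelled out in the networking section it might look like this (the subnets below are just example values):

  apiVersion: kubeadm.k8s.io/v1beta2
  kind: ClusterConfiguration
  featureGates:
    IPv6DualStack: true
  networking:
    # one IPv4 and one IPv6 range, comma-separated
    serviceSubnet: 10.96.0.0/12,fd00:10:96::/112
    podSubnet: 10.244.0.0/16,fd00:10:244::/64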

@neolit123 (Member) commented Jan 3, 2020

@uablrek (Contributor) commented Jan 4, 2020

Yes, a check for nil should be added. But maybe not a fallback to ipv4; rather an important-looking log error and just return.
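
Something along these lines, just a sketch of the idea (not a tested patch; the field names follow the v1.17 metaProxier shown above and the log message is made up):

// sketch: guard against a nil IPFamily instead of dereferencing it
func (proxier *metaProxier) OnServiceAdd(service *v1.Service) {
	if service.Spec.IPFamily == nil {
		klog.Errorf("service %s/%s has no IPFamily set; is the IPv6DualStack feature gate enabled on the api-server?",
			service.Namespace, service.Name)
		return
	}
	if *(service.Spec.IPFamily) == v1.IPv4Protocol {
		proxier.ipv4Proxier.OnServiceAdd(service)
		return
	}
	proxier.ipv6Proxier.OnServiceAdd(service)
}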

@anguslees (Member) commented Jan 5, 2020

Why an error? Shouldn't "unspecified service.Spec.IPFamily" be interpreted as IPv4 (especially!) when IPv6DualStack=false? For both backwards compatibility (easier upgrades from when all services had to be IPv4), and .. afaics there's no other reasonable interpretation in this situation.

(Note I've deliberately avoided saying "IPFamily should default to IPv4" above. I think the long-term goal needs to be that Services with IPFamily=nil are automatically IPv6 or dual-stack, precisely so most cluster users won't have to care about low-level IP addressing details.)

@uablrek (Contributor) commented Jan 6, 2020

Because it is a configuration error. Since the user has enabled the feature-gate half-way, he/she expects dual-stack to work, but it can't. If this faulty configuration is just accepted, this issue will be the first in an endless stream of duplicates.

An unspecified family will be set to the "main" family of the cluster (which may be ipv6) by the master processes (api-server?) when the feature-gate is enabled, which ensures backward compatibility. But the decision about which family is made by the master, not kube-proxy.

@khenidak (Contributor) commented Jan 6, 2020

Was the feature gate enabled on the api-server?

If so, then this field will always be there. There is no need to check for nil. What is supported is api-server feature flag on + kube-proxy feature flag off, but not the other way around.

@aojea (Member) commented Jan 7, 2020

This is related to the unspecified service.Spec.IPFamily topic
#86895

@liggitt liggitt added the priority/critical-urgent label Jan 17, 2020
@liggitt (Member) commented Jan 17, 2020

Clients must currently handle nil values in this field, especially in services of type ExternalName

if so then this field will always be there

that is not accurate for ExternalName services

@dimm0 commented Jan 24, 2020

Also hit this, trying to migrate 1.17.1 to dual-stack.
From reading the thread it's not clear if there's a fix yet.

@aojea (Member) commented Jan 24, 2020

@dimm0 please check that you have dualstack enabled in the apiserver as explained in the following comment

#86773 (comment)

@dimm0 commented Jan 24, 2020

Nope.. It's not in the documentation as far as I can see

@Richard87 commented Mar 6, 2020

Hi, so the error is because ipFamily is not set on a service, but what should the ipFamily be on a headless service?

kubectl get svc -o custom-columns=NAME:.metadata.name,NAMESPACE:.metadata.namespace,IPFAMILY:.spec.ipFamily,TYPE:.spec.type,CLUSTER-IP:.spec.clusterIP -n keycloak     
NAME                           NAMESPACE   IPFAMILY   TYPE        CLUSTER-IP
keycloak-headless              keycloak    <none>     ClusterIP   None
keycloak-http                  keycloak    IPv4       ClusterIP   10.152.183.156
keycloak-postgresql            keycloak    IPv4       ClusterIP   10.152.183.164
keycloak-postgresql-headless   keycloak    <none>     ClusterIP   None

Edit:

Setting all my headless services to IPv4 let kube-proxy boot up as expected...
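Roughly like this for each of them (hypothetical manifest; the selector and port are placeholders, only clusterIP: None and the alpha ipFamily field matter here):

apiVersion: v1
kind: Service
metadata:
  name: keycloak-headless
  namespace: keycloak
spec:
  clusterIP: None      # headless service
  ipFamily: IPv4       # set explicitly so the dual-stack meta proxier can classify it
  selector:
    app: keycloak      # placeholder
  ports:
  - port: 8080         # placeholder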

Edit 2:

I keep getting these errors now:

mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: I0306 10:47:40.174725 2214945 proxier.go:1001] syncProxyRules took 125.706423ms
mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: I0306 10:47:40.174738 2214945 bounded_frequency_runner.go:296] sync-runner: ran, next possible in 0s, periodic in 30s
mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: I0306 10:47:40.427417 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: W0306 10:47:40.427467 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: I0306 10:47:40.740942 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:40 linux microk8s.daemon-proxy[2214945]: W0306 10:47:40.740986 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:42 linux microk8s.daemon-proxy[2214945]: I0306 10:47:42.441107 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:42 linux microk8s.daemon-proxy[2214945]: W0306 10:47:42.441126 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:42 linux microk8s.daemon-proxy[2214945]: I0306 10:47:42.767639 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:42 linux microk8s.daemon-proxy[2214945]: W0306 10:47:42.767681 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:44 linux microk8s.daemon-proxy[2214945]: I0306 10:47:44.448613 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:44 linux microk8s.daemon-proxy[2214945]: W0306 10:47:44.448630 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:44 linux microk8s.daemon-proxy[2214945]: I0306 10:47:44.787737 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:44 linux microk8s.daemon-proxy[2214945]: W0306 10:47:44.787783 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:46 linux microk8s.daemon-proxy[2214945]: I0306 10:47:46.468723 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:46 linux microk8s.daemon-proxy[2214945]: W0306 10:47:46.468797 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-controller-manager with error failed to identify ipfamily for endpoints (no subsets)
mars 06 10:47:46 linux microk8s.daemon-proxy[2214945]: I0306 10:47:46.809594 2214945 config.go:167] Calling handler.OnEndpointsUpdate
mars 06 10:47:46 linux microk8s.daemon-proxy[2214945]: W0306 10:47:46.809658 2214945 meta_proxier.go:121] failed to update endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)

@uablrek (Contributor) commented Mar 6, 2020

Related #88784

@aojea (Member) commented Mar 6, 2020

@Richard87

Hi, so the error is because ipFamily is not set on a service, but what should the ipFamily be on a headless service?

that's the 1M dollar question ;)
#86895

seems we are getting closer to solving this

@deshui123 commented Mar 9, 2020

Same error.
kube-proxy is v1.17.0.
Using mode:ipvs for dual-stack

[root@henry-dual-we-01 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.3", GitCommit:"06ad960bfd03b39c8310aaf92d1e7c12ce618213", GitTreeState:"clean", BuildDate:"2020-02-11T18:12:12Z", GoVersion:"go1.13.6", Compiler:"gc", Platform:"linux/amd64"}
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.17.0
featureGates:
  IPv6DualStack: true
controllerManager:
  extraArgs:
    external-cloud-volume-plugin: openstack
    cluster-cidr: 192.168.0.0/22,2001:283:4000:2002::/62
    service-cluster-ip-range: 10.253.0.0/16,fd01:abce::/112
networking:
  serviceSubnet: 10.253.0.0/16,fd01:abce::/112
  podSubnet: 192.168.0.0/22,2001:283:4000:2002::/62
  dnsDomain: "cluster.local"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
clusterCIDR: 192.168.0.0/22,2001:283:4000:2002::/62
featureGates:
  SupportIPVSProxyMode: true
  IPv6DualStack: true
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  IPv6DualStack: true

Following the validation guide (https://kubernetes.io/docs/tasks/network/validate-dual-stack/#validate-pod-addressing), pods, nodes, and services work well, but the same errors appear in the kube-proxy logs:

[root@henry-dual-we-01 ~]# kubectl logs -f -n kube-system   kube-proxy-k8cq5
W0304 03:42:56.847495       1 feature_gate.go:235] Setting GA feature gate SupportIPVSProxyMode=true. It will be removed in a future release.
I0304 03:43:47.472881       1 node.go:135] Successfully retrieved node IP: 10.75.72.170
I0304 03:43:47.473227       1 server_others.go:172] Using ipvs Proxier.
I0304 03:43:47.473273       1 server_others.go:174] creating dualStackProxier for ipvs.
W0304 03:43:47.485272       1 proxier.go:420] IPVS scheduler not specified, use rr by default
W0304 03:43:47.485643       1 proxier.go:420] IPVS scheduler not specified, use rr by default
W0304 03:43:47.485701       1 ipset.go:107] ipset name truncated; [KUBE-6-LOAD-BALANCER-SOURCE-CIDR] -> [KUBE-6-LOAD-BALANCER-SOURCE-CID]
W0304 03:43:47.485730       1 ipset.go:107] ipset name truncated; [KUBE-6-NODE-PORT-LOCAL-SCTP-HASH] -> [KUBE-6-NODE-PORT-LOCAL-SCTP-HAS]
I0304 03:43:47.507612       1 server.go:571] Version: v1.17.0
I0304 03:43:47.564028       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0304 03:43:47.564068       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0304 03:43:47.565970       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0304 03:43:47.575072       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0304 03:43:47.578432       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0304 03:43:47.585859       1 config.go:313] Starting service config controller
I0304 03:43:47.605239       1 shared_informer.go:197] Waiting for caches to sync for service config
I0304 03:43:47.606916       1 config.go:131] Starting endpoints config controller
I0304 03:43:47.606959       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
W0304 03:43:47.719335       1 meta_proxier.go:106] failed to add endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
W0304 03:43:47.719645       1 meta_proxier.go:106] failed to add endpoints default/my-service-default with error failed to identify ipfamily for endpoints (no subsets)
W0304 03:43:47.719768       1 meta_proxier.go:106] failed to add endpoints default/my-service-ipv6 with error failed to identify ipfamily for endpoints (no subsets)

I want to know if there is some way to specify the ipfamily.
Is this error caused by "W0304 03:43:47.485272 1 proxier.go:420] IPVS scheduler not specified, use rr by default"?

[root@henry-dual-we-01 ~]# kubectl get endpoints -n kube-system
NAME                       ENDPOINTS                                                              AGE
cloud-controller-manager   <none>                                                                 5d16h
kube-controller-manager    <none>                                                                 5d17h
kube-dns                   192.168.178.198:53,192.168.178.199:53,192.168.178.198:53 + 3 more...   5d17h
kube-scheduler             <none>                                                                 5d17h

@uablrek (Contributor) commented Mar 9, 2020

Is this error caused by "W0304 03:43:47.485272 1 proxier.go:420] IPVS scheduler not specified, use rr by default"?

No, that is just informative (should not even be a warning IMHO)

@deshui123 commented Mar 13, 2020

How to resolve these errors:

W0304 03:43:47.719335       1 meta_proxier.go:106] failed to add endpoints kube-system/kube-scheduler with error failed to identify ipfamily for endpoints (no subsets)
W0304 03:43:47.719645       1 meta_proxier.go:106] failed to add endpoints default/my-service-default with error failed to identify ipfamily for endpoints (no subsets)
W0304 03:43:47.719768       1 meta_proxier.go:106] failed to add endpoints default/my-service-ipv6 with error failed to identify ipfamily for endpoints (no subsets)

and there are no endpoints:

[root@henry-dual-we-01 ~]# kubectl get endpoints -n kube-system
NAME                       ENDPOINTS                                                              AGE
cloud-controller-manager   <none>                                                                 5d16h
kube-controller-manager    <none>                                                                 5d17h
kube-dns                   192.168.178.198:53,192.168.178.199:53,192.168.178.198:53 + 3 more...   5d17h
kube-scheduler             <none>         

@aojea (Member) commented Mar 13, 2020

@deshui123 it will be fixed in the next release: #88934

@aojea (Member) commented Mar 13, 2020

/remove priority
The panics are caused by a user misconfiguration: ALL the components have to have the dual-stack feature gate enabled. However, we can do better with the logs, but that's not critical.

@aojea (Member) commented Mar 13, 2020

/remove-priority critical-urgent

@k8s-ci-robot k8s-ci-robot removed the priority/critical-urgent label Mar 13, 2020
@pacoxu (Member) commented Mar 17, 2020

I met the same issue in 1.18.0-beta.1.
After I enabled the apiserver feature gate "IPv6DualStack=true", kube-proxy started correctly.

kubernetes/website#19682

@neolit123 (Member) commented Mar 17, 2020

panics should def. be resolved before moving the feature to beta.
/priority important-longterm

kubernetes/website#19682

LGTM.

@k8s-ci-robot k8s-ci-robot added the priority/important-longterm label Mar 17, 2020
@neolit123 (Member) commented Mar 17, 2020

/retitle Fail to deploy dualstack cluster, kube-proxy panics

@k8s-ci-robot k8s-ci-robot changed the title Fail to deploy dualstack cluster, kube-proxy CrashLoopBackOff Fail to deploy dualstack cluster, kube-proxy panics Mar 17, 2020
@liggitt liggitt changed the title Fail to deploy dualstack cluster, kube-proxy panics DualStack: Fail to deploy dualstack cluster, kube-proxy panics Apr 11, 2020
@prabhakar-pal commented Apr 18, 2020

@aojea @LubinLew ,

This issue is seen even in 1.18.1.

E0418 12:30:29.589474       1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 98 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x16d00a0, 0x2886660)

This is the kubeadm conf that we are using


apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: <token>
  ttl: 0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.10.10.120
  bindPort: 9443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: 10.10.10.120
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 127.0.0.1
  - ::1
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/maglev/.pki
clusterName: kubernetes
controlPlaneEndpoint: ""
controllerManager: {}
dns:
  type: CoreDNS
featureGates:
  IPv6DualStack: true
etcd:
  external:
    caFile: ""
    certFile: ""
    endpoints:
    - http://localhost:2379
    keyFile: ""
imageRepository: maglev-registry.maglev-system.svc.cluster.local:5000
kind: ClusterConfiguration
kubernetesVersion: "v1.18.1"
networking:
  podSubnet: 10.2.0.0/16,3ffd::/112
  serviceSubnet: 10.3.0.0/16,3ffe::/112
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

@tedyu (Contributor) commented Apr 18, 2020

@prabhakar-pal
Can you give the full stack trace for the panic?

thanks

@prabhakar-pal commented Apr 18, 2020

@tedyu,

I0418 16:30:33.332872 1 shared_informer.go:223] Waiting for caches to sync for service config
E0418 16:30:33.337036 1 runtime.go:78] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
goroutine 161 [running]:
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.logPanic(0x16d00a0, 0x2886660)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:74 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:48 +0x82
panic(0x16d00a0, 0x2886660)
/usr/local/go/src/runtime/panic.go:679 +0x1b2
k8s.io/kubernetes/pkg/proxy/metaproxier.(*metaProxier).OnServiceAdd(0xc000424ba0, 0xc000203b00)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/metaproxier/meta_proxier.go:65 +0x2b
k8s.io/kubernetes/pkg/proxy/config.(*ServiceConfig).handleAddService(0xc000551640, 0x18c34e0, 0xc000203b00)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/proxy/config/config.go:335 +0x82
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnAdd(...)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:218
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run.func1()
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:744 +0x221
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc00046a740)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5e
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000586f40, 0x1b3edc0, 0xc00083a000, 0xc0002dc001, 0xc0007be000)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xa3
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00046a740, 0x3b9aca00, 0x0, 0x42ea01, 0xc0007be000)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0xe2
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90
k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run(0xc0004e7f00)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/shared_informer.go:738 +0x95
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000295e20, 0xc0006e8000)
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x59
created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
/workspace/anago-v1.18.1-beta.0.38+49aac775931dd1/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x62
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x1463c1b]

@tedyu (Contributor) commented Apr 18, 2020

@prabhakar-pal
Please see #88866

@prabhakar-pal commented Apr 18, 2020

Thanks @tedyu, is there a workaround for this issue?

@aojea (Member) commented Apr 18, 2020

@prabhakar-pal I think that your kubeadm config is not correct; compare against this:

apiServer:
  certSANs:
  - localhost
  - 127.0.0.1
  extraArgs:
    feature-gates: IPv6DualStack=true
apiVersion: kubeadm.k8s.io/v1beta2
clusterName: kind
controlPlaneEndpoint: 172.17.0.2:6443
controllerManager:
  extraArgs:
    enable-hostpath-provisioner: "true"
    feature-gates: IPv6DualStack=true
featureGates:
  IPv6DualStack: true
kind: ClusterConfiguration
kubernetesVersion: v1.18.0-alpha.2.580+acd97b42f3acb0
networking:
  podSubnet: 10.244.0.0/16,fd00:10:244::/64
  serviceSubnet: 10.96.0.0/12,fd00:10:96::/112
scheduler:
  extraArgs:
    feature-gates: IPv6DualStack=true
---
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- token: abcdef.0123456789abcdef
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.17.0.2
  bindPort: 6443
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubeadm.k8s.io/v1beta2
controlPlane:
  localAPIEndpoint:
    advertiseAddress: 172.17.0.2
    bindPort: 6443
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.17.0.2:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
kind: JoinConfiguration
nodeRegistration:
  criSocket: /run/containerd/containerd.sock
  kubeletExtraArgs:
    fail-swap-on: "false"
    node-ip: 172.17.0.2
---
apiVersion: kubelet.config.k8s.io/v1beta1
evictionHard:
  imagefs.available: 0%
  nodefs.available: 0%
  nodefs.inodesFree: 0%
featureGates:
  IPv6DualStack: true
imageGCHighThresholdPercent: 100
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
featureGates:
  IPv6DualStack: true
kind: KubeProxyConfiguration

@prabhakar-pal commented Apr 19, 2020

@aojea, yes the IPv6DualStack featureGate needed to be mentioned across the sections, thanks for pointing that out. Thanks @tedyu for your inputs too.

@ajitkumartanwade commented Apr 30, 2020

Hi All,

We are trying to set up a Kubernetes cluster with IPv4 and IPv6 dual-stack. It is assigning both IPs to a pod

[root@ip-192-168-1-15 /]# kubectl get pods hello-world-86d6c6f84d-zn2x6 -o go-template --template='{{range .status.podIPs}}{{printf "%s \n" .ip}}{{end}}'
192.168.53.68
2600:1f16:5ec:ed43:f197:cbfc:7667:7ec3

Kubeadm Config file:
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
featureGates:
  IPv6DualStack: true
apiServer:
  extraArgs:
    feature-gates: IPv6DualStack=true
controllerManager:
  extraArgs:
    feature-gates: IPv6DualStack=true
    cluster-cidr: 192.168.0.0/16,2600:1f16:5ec:ed00::/56
    service-cluster-ip-range: 192.168.0.0/16,2600:1f16:5ec:ed00::/56
networking:
  dnsDomain: cluster.local
  feature-gates: IPv6DualStack=true
scheduler:
  extraArgs:
    feature-gates: IPv6DualStack=true
kubeProxy:
  feature-gates: IPv6DualStack=true
  cluster-cidr: 192.168.0.0/16,2600:1f16:5ec:ed00::/56
  mode: ipvs
kubelet:
  feature-gates: IPv6DualStack=true

We updated the Calico config for dual-stack as well; however, when we try to ping the pods from each other over IPv6, there is no reply.

How do we get pods to communicate using IPv6 IPs? Please assist.

Thanks in advance

Regards,
Ajit
