
spec: {} empty, does not populate node.Spec.PodCIDR #85809

Open
jcpuzic opened this issue Dec 2, 2019 · 12 comments

@jcpuzic jcpuzic commented Dec 2, 2019

What happened:
I'm installing k8s 1.16.3 with the command:
kubeadm init --pod-network-cidr=10.100.0.0/16

My kube-controller-manager manifest does have the lines:
--allocate-node-cidrs=true
--cluster-cidr=10.100.0.0/16
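
For reference, the flags can be confirmed on the control-plane node with something like this (a sketch, assuming kubeadm's default static-pod manifest path):

grep -E 'allocate-node-cidrs|cluster-cidr' /etc/kubernetes/manifests/kube-controller-manager.yaml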

I don't see 'podCIDR' populated in my node spec field. This is problematic as kube-router relies on that field.
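
For reference, this is how I check the field; <node-name> is a placeholder for the actual node name:

kubectl get node <node-name> -o yaml | grep -A 3 '^spec:'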

What you expected to happen:
On an older 1.15.0 cluster, the node spec does have a 'podCIDR' entry.

How to reproduce it (as minimally and precisely as possible):
Run the kubeadm command above.

Anything else we need to know?:

Environment:

  • Kubernetes version (use kubectl version): v1.16.3
  • Cloud provider or hardware configuration: bare metal
  • OS (e.g: cat /etc/os-release): Debian Buster 10.2
  • Kernel (e.g. uname -a): 4.19.0-6-amd64
  • Install tools: kubeadm
  • Network plugin and version (if this is a network-related bug): kube-router, latest
  • Others:
@jcpuzic jcpuzic added the kind/bug label Dec 2, 2019

@athenabot athenabot commented Dec 2, 2019

/sig cluster-lifecycle

These SIGs are my best guesses for this issue. Please comment /remove-sig <name> if I am incorrect about one.

🤖 I am a bot run by vllry. 👩‍🔬

@jcpuzic jcpuzic commented Dec 2, 2019

/sig cluster-lifecycle

@jcpuzic jcpuzic commented Dec 2, 2019

After some more testing, it seems this is an issue with versions 1.16.0 and later.

@neolit123 neolit123 commented Dec 2, 2019

I don't see 'podCIDR' populated in my node spec field. This is problematic as kube-router relies on that field.

just to note, when you call kubeadm init the Node object is not created by kubeadm, but rather by the Kubelet.

On an older cluster 1.15.0, the spec will have 'podCIDR' line in it.

are you sure?
this playground creates a 1.14 cluster:
https://www.katacoda.com/courses/kubernetes/playground

and node.Spec only has taints if you do:
kubectl get no master -o yaml.
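
As a side note, the podCIDR value itself is assigned by the kube-controller-manager's node IPAM controller (when --allocate-node-cidrs=true is set), not by kubeadm or the kubelet. A rough way to see whether allocation happened, assuming kubeadm's usual <component>-<node-name> static pod naming:

kubectl -n kube-system logs kube-controller-manager-<node-name> | grep -i cidr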

/priority awaiting-more-evidence
/remove-kind bug

@jcpuzic jcpuzic commented Dec 2, 2019

You have to use the "--pod-network-cidr=" flag with "kubeadm init" for it to populate. If I do this with a clean setup on versions prior to 1.16.x, it populates just fine.
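
For completeness, the same setting can also be expressed via a kubeadm config file instead of the flag (a minimal sketch using the v1beta2 config API that ships with 1.16; the file name is arbitrary):

# kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.16.3
networking:
  podSubnet: 10.100.0.0/16

kubeadm init --config kubeadm-config.yaml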

@neolit123 neolit123 commented Dec 2, 2019

I will try it locally in a bit.
The Katacoda playground uses Weave, which doesn't require setting --pod-network-cidr.

@neolit123 neolit123 commented Dec 2, 2019

Kubernetes version (use kubectl version): v1.16.3

I just tried the same version with Calico.

sudo kubeadm init --ignore-preflight-errors=all --v=5 --pod-network-cidr=192.168.0.0/16 --kubernetes-version=v1.16.3

...

kubectl apply -f https://docs.projectcalico.org/v3.8/manifests/calico.yaml
kubectl get no luboitvbox -o yaml | grep spec -C 5

...
spec:
  podCIDR: 192.168.0.0/24
  podCIDRs:
  - 192.168.0.0/24
  taints:
  - effect: NoSchedule
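
Note that the per-node podCIDR above is a /24 carved out of the /16 cluster CIDR; the controller-manager splits the cluster range per node according to --node-cidr-mask-size, which defaults to 24 for IPv4. A quick way to list the allocation for every node:

kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR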

I cannot confirm this part:

I don't see 'podCIDR' populated in my node spec field. This is problematic as kube-router relies on that field.

@jcpuzic jcpuzic commented Dec 3, 2019

For reproduction purposes, here is the exact line I used:
kubeadm init --pod-network-cidr=10.100.0.0/16 --kubernetes-version=v1.16.3

After the control plane came up, I checked the node spec. Additionally, I was using kube-router with this manifest:
https://github.com/cloudnativelabs/kube-router/blob/master/daemonset/generic-kuberouter.yaml

@neolit123 neolit123 commented Dec 3, 2019

Given Calico works, this seems like a kube-router issue to me.
Did you report it to the kube-router maintainers?

@jcpuzic jcpuzic commented Dec 3, 2019

I don't agree; I'm reporting a difference in behavior before any CNI is installed.
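
To make the comparison concrete, this is roughly the sequence I mean, run before applying any CNI manifest; <node-name> is a placeholder, and allocation is asynchronous, so it can take a few seconds to show up:

kubeadm init --pod-network-cidr=10.100.0.0/16 --kubernetes-version=v1.16.3
export KUBECONFIG=/etc/kubernetes/admin.conf

# empty output here is the behavior I'm reporting
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'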

@neolit123 neolit123 commented Dec 3, 2019

/remove-sig cluster-lifecycle
/sig network

Would appreciate feedback from sig-network or the kube-router maintainers.

@athenabot athenabot commented Dec 3, 2019

/triage unresolved

Comment /remove-triage unresolved when the issue is assessed and confirmed.

🤖 I am a bot run by vllry. 👩‍🔬
