

kube-proxy and coredns don't start up #6

Closed
quadebroadwell opened this issue Jun 26, 2019 · 4 comments

Comments

@quadebroadwell

Describe the bug
First, thanks so much for the videos and this repo; they have been super helpful.

Basically, with your most recent update that pins the Kubernetes version to 1.14.3, the cluster seems to start, but coredns and kube-proxy don't, and therefore flannel then fails again. I am trying to run this in a top-level LXC container, not directly on the host or in a Vagrant machine, so basically Host -> LXC container -> kmaster/kworker. Perhaps this is causing the issue?

Also note that the previous version using 1.15.0 did not work either, but in that case the issue was something else that I could not debug. Thanks again for the help!

How To Reproduce

Expected behavior

Screenshots (if any)

Screen Shot 2019-06-25 at 5 21 37 PM

Screen Shot 2019-06-25 at 5 22 24 PM

Environment (please complete the following information):

Ubuntu 18.04 host, Ubuntu 18.04 top-level LXC container, centos/7 kmaster and kworker

Additional context

@justmeandopensource
Owner

@quadebroadwell
The original bootstrap script installs whatever the latest version of Kubernetes is, and it was working fine until v1.15.0 was released. I couldn't get 1.15.0 working in an LXC environment, so I locked the installation to 1.14.3. I don't remember seeing those errors.
I am going to spin up the cluster now and see if I get the same behaviour.

Thanks for pointing this out.
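
For anyone following along, pinning the version on CentOS would look roughly like this. This is only a sketch: the package names are the standard Kubernetes RPMs, and the actual line in the bootstrap script may differ; the `yum` call is left commented out so nothing is installed here.

```shell
# Hypothetical sketch of pinning the Kubernetes version in a CentOS
# bootstrap script. The real script in this repo may use a different form.
K8S_VERSION=1.14.3

# yum install -y kubelet-${K8S_VERSION} kubeadm-${K8S_VERSION} kubectl-${K8S_VERSION}
echo "Pinned Kubernetes packages to ${K8S_VERSION}"
```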

@justmeandopensource
Owner

@quadebroadwell
I just tried it in my environment and it worked perfectly fine. No issues.

~  🞂🞂 lxc list            
+----------+---------+------------------------+------+------------+-----------+
|   NAME   |  STATE  |          IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+----------+---------+------------------------+------+------------+-----------+
| kmaster  | RUNNING | 172.17.0.1 (docker0)   |      | PERSISTENT |           |
|          |         | 10.92.250.141 (eth0)   |      |            |           |
|          |         | 10.244.0.1 (cni0)      |      |            |           |
|          |         | 10.244.0.0 (flannel.1) |      |            |           |
+----------+---------+------------------------+------+------------+-----------+
| kworker1 | RUNNING | 172.17.0.1 (docker0)   |      | PERSISTENT |           |
|          |         | 10.92.250.207 (eth0)   |      |            |           |
|          |         | 10.244.1.0 (flannel.1) |      |            |           |
+----------+---------+------------------------+------+------------+-----------+
| kworker2 | RUNNING | 172.17.0.1 (docker0)   |      | PERSISTENT |           |
|          |         | 10.92.250.73 (eth0)    |      |            |           |
|          |         | 10.244.2.0 (flannel.1) |      |            |           |
+----------+---------+------------------------+------+------------+-----------+

~  🞂🞂 kubectl -n kube-system get pods
NAME                              READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-llvpb           1/1     Running   0          18m
coredns-fb8b8dccf-n9cn6           1/1     Running   0          18m
etcd-kmaster                      1/1     Running   0          17m
kube-apiserver-kmaster            1/1     Running   0          18m
kube-controller-manager-kmaster   1/1     Running   0          18m
kube-flannel-ds-amd64-c7vtd       1/1     Running   1          17m
kube-flannel-ds-amd64-v4wwf       1/1     Running   0          16m
kube-flannel-ds-amd64-zzqwn       1/1     Running   0          18m
kube-proxy-8v8ld                  1/1     Running   0          17m
kube-proxy-clswp                  1/1     Running   0          18m
kube-proxy-hzmhx                  1/1     Running   0          16m
kube-scheduler-kmaster            1/1     Running   0          17m

And as mentioned in the video, I have created a separate lxc profile named "k8s" which is shown below.

~  🞂🞂 lxc profile show k8s
config:
  limits.cpu: "2"
  limits.memory: 2GB
  limits.memory.swap: "false"
  linux.kernel_modules: ip_tables,ip6_tables,netlink_diag,nf_nat,overlay
  raw.lxc: "lxc.apparmor.profile=unconfined\nlxc.cap.drop= \nlxc.cgroup.devices.allow=a\nlxc.mount.auto=proc:rw
    sys:rw"
  security.nesting: "true"
  security.privileged: "true"
description: Default LXD profile
devices:
  eth0:
    name: eth0
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: default
    type: disk
name: k8s
used_by:
- /1.0/containers/kmaster
- /1.0/containers/kworker1
- /1.0/containers/kworker2
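
For reference, launching the three nodes with that profile might look like this. A sketch only: `images:centos/7` is assumed from the environment listed above, and the commands are echoed rather than executed.

```shell
# Hypothetical sketch: launch the three nodes with the "k8s" profile above.
# Commands are printed, not run, so LXD is not required on this machine.
nodes="kmaster kworker1 kworker2"
for node in $nodes; do
  echo "lxc launch images:centos/7 ${node} --profile k8s"
done
```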

@quadebroadwell
Author

Thanks for the response, and again for your great videos. I found the issue: the MTU of my bridged LXC device was set higher than the host ethernet device's MTU, which caused all my problems. As an FYI if anyone else runs into this, use

sudo ip link set mtu 1400 dev lxdbr0

to set the MTU of your bridged device to a value no higher than the host device's MTU.
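
That check can be expressed in a small script like the one below. A sketch only: `eth0` and `lxdbr0` are the interface names from this thread, and `mtu_ok` is a hypothetical helper, not part of any tool.

```shell
# Hypothetical helper: succeeds when the bridge MTU does not exceed the host MTU.
mtu_ok() {
  [ "$1" -le "$2" ]
}

# Usage on the host (interface names from this environment; adjust for yours):
#   host_mtu=$(cat /sys/class/net/eth0/mtu)
#   bridge_mtu=$(cat /sys/class/net/lxdbr0/mtu)
#   mtu_ok "$bridge_mtu" "$host_mtu" || sudo ip link set mtu 1400 dev lxdbr0
```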

@justmeandopensource
Owner

@quadebroadwell
Thanks for sharing the resolution.
