kube-config enable-worker can't find master... #140

Open
derailed opened this issue Oct 25, 2016 · 11 comments

@derailed
Hi,

First off, thank you so much for putting together this distro. Way cool!

I think I have a good install of v0.8.0 and the master node seems healthy. I'm using RPi 3 boards with a Hypriot install.

kubectl get no - reports successfully.

However, when I try to have my minions join the master, I get the following error:

kube-config enable-worker 192.168.0.12
Using master ip: 192.168.0.12
The Kubernetes master was not found.
Exiting...

Usage:
kube-config enable-worker [master-ip]

I've checked /boot/cmdline.txt and the cgroup is now set correctly. I can ping the master from a minion, and telnet to the master on port 443 works too, so I suspect a firewall issue, but I'm not sure how to resolve it; my Unix foo is failing me.
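For reference, here is a quick way to probe the master's API ports from a minion (a sketch; whether kube-config expects 8080 or 443 here is a guess):

# Any HTTP response (even 401/403) means the port is reachable;
# a connection error means it is not.
curl -sk --connect-timeout 5 https://192.168.0.12:443/version && echo "443 reachable"
curl -s --connect-timeout 5 http://192.168.0.12:8080/version && echo "8080 reachable"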

What am I missing?

Thank you!

@luxas
Owner

luxas commented Oct 25, 2016

I don't know, but any chance you can follow the instructions at http://kubernetes.io/docs/getting-started-guides/kubeadm/ on a plain HypriotOS v1.0.1+ install?

@derailed
Author

Thanks Lucas. This is exactly what I tried before stumbling on your blog post.

I really wanted to leverage kubeadm on ARM, but it did not work for me; I was using HypriotOS v1.0. I am getting the infamous "waiting for the control plane to become ready", but nothing is happening with Docker, i.e. no images are pulled and no containers are running. Is the kubelet on ARM choking on that install?

But if you think folks are able to make it work, I am all ears...

Thanks!


@luxas
Owner

luxas commented Oct 26, 2016

It should be a plain HypriotOS install: follow the instructions on the kubeadm page exactly, and do not install anything from kubernetes-on-arm. Also, you need to set --use-kubernetes-version=v1.4.1 and --pod-network-cidr=10.244.0.0/16, as the page says.
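Put together, the init command from the guide would look like this (a sketch using the flags quoted above):

# On the master, per the kubeadm getting-started guide
kubeadm init --use-kubernetes-version=v1.4.1 --pod-network-cidr=10.244.0.0/16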

@derailed
Author

Thanks for the advice, Lucas!

I did try the Kubernetes version option but not the CIDR option. I don't know if that will make a difference, but I'll give it another rinse.

Also, the instructions specify installing Docker. My limited understanding was that Docker on ARM is not supported. Does that step make sense, given that Hypriot already comes with Docker configured?


@derailed
Author

Do you know if kube-config uses port 443 or 8080 to talk to the master?


@luxas
Owner

luxas commented Oct 26, 2016

Docker on ARM is supported, yes, but as you said, there's no need to install it on HypriotOS, where it ships by default.

But you should use HypriotOS v1.0.1, otherwise it won't work.

@derailed
Author

Thanks Lucas!

Crap, my bad!! I totally missed the rev in the docs.

I got a bit further, but now I'm hitting an issue setting up the pod network.

Per the docs I need to run this command:

ARCH=arm curl -sSL https://raw.githubusercontent.com/luxas/flannel/update-daemonset/Documentation/kube-flannel.yml | sed "s/amd64/${ARCH}/g" | kubectl create -f -

Which leads to this error:

DaemonSet in version "v1beta1" cannot be handled as a DaemonSet: [pos 1115]: json: expect char '"' but got char 'n'

Guessing it's missing the required spec.selector?

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch:
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
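Rather than a missing spec.selector, one likely culprit is the ARCH substitution: in a pipeline like the one above, the ARCH=arm prefix applies only to the curl process, so the sed stage sees an empty ${ARCH} and replaces amd64 with nothing. That would explain the blank beta.kubernetes.io/arch value and the image tags ending in a bare "-". A sketch of a fix:

# export ARCH so every stage of the pipeline sees it,
# not just the curl command it prefixes
export ARCH=arm
curl -sSL https://raw.githubusercontent.com/luxas/flannel/update-daemonset/Documentation/kube-flannel.yml \
  | sed "s/amd64/${ARCH}/g" \
  | kubectl create -f -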


@nsteinmetz

Hi all,

Back to k8s on ARM (but not only)! :-)

@luxas: giving kubeadm a try on my RPis too.

But you should use HypriotOS v1.0.1 otherwise it won't work

It's HypriotOS v1.0.1+, correct? So it should work with 1.1.1.

Also you need to set --use-kubernetes-version=v1.4.1 and --pod-network-cidr=10.244.0.0/16 as the page says

Can we use K8s 1.4.5 too? @larmog wrote a blog post using 1.4.3.

For the pod network CIDR, I understood it's required only if we plan to use flannel. @larmog mentioned that we could use Weave instead. Is it supported, or should we stick with flannel?

Thanks,
Nicolas

@nsteinmetz

nsteinmetz commented Oct 30, 2016

Answering myself, it works as is:

root@pico-master:~# kubeadm init --use-kubernetes-version=v1.4.5
<master/tokens> generated token: "xxx"
<master/pki> created keys and certificates in "/etc/kubernetes/pki"
<util/kubeconfig> created "/etc/kubernetes/kubelet.conf"
<util/kubeconfig> created "/etc/kubernetes/admin.conf"
<master/apiclient> created API client configuration
<master/apiclient> created API client, waiting for the control plane to become ready
<master/apiclient> all control plane components are healthy after 300.608383 seconds
<master/apiclient> waiting for at least one node to register and become ready
<master/apiclient> first node is ready after 4.548393 seconds
<master/discovery> created essential addon: kube-discovery, waiting for it to become ready
<master/discovery> kube-discovery is ready after 370.565099 seconds
<master/addons> created essential addon: kube-proxy
<master/addons> created essential addon: kube-dns

Kubernetes master initialised successfully!

You can now join any number of machines by running the following on each node:

kubeadm join --token xxx 192.168.4.165

But when trying to install Weave, I got a strange error:

kubectl create -f https://raw.githubusercontent.com/kodbasen/weave-kube-arm/master/weave-daemonset.yaml
Error from server: error when creating "https://raw.githubusercontent.com/kodbasen/weave-kube-arm/master/weave-daemonset.yaml": dial tcp 127.0.0.1:2379: getsockopt: connection refused

Looking at containers:

$ docker ps
CONTAINER ID        IMAGE                                                         COMMAND                  CREATED              STATUS              PORTS               NAMES
cd5418ac240c        gcr.io/google_containers/kube-scheduler-arm:v1.4.5            "/usr/local/bin/kube-"   13 seconds ago       Up 13 seconds                           k8s_kube-scheduler.225addad_kube-scheduler-pico-master_kube-system_87a3f7a3c6b93863a1bb94f88c899398_d0b28cda
63653042d1a4        gcr.io/google_containers/etcd-arm:2.2.5                       "etcd --listen-client"   21 seconds ago       Up 17 seconds                           k8s_etcd.8cda97ea_etcd-pico-master_kube-system_286f1c9f6f34e719ddc002fa52767a2f_c7abf4fd
2723727fa622        gcr.io/google_containers/kube-controller-manager-arm:v1.4.5   "/usr/local/bin/kube-"   About a minute ago   Up About a minute                       k8s_kube-controller-manager.7e38629_kube-controller-manager-pico-master_kube-system_526eee21b7f5362563386efdd1e4d2a0_8d7fd70b
ee09bb4ca62c        gcr.io/google_containers/kube-proxy-arm:v1.4.5                "/usr/local/bin/kube-"   24 minutes ago       Up 24 minutes                           k8s_kube-proxy.2aedbb7c_kube-proxy-arm-i8i9l_kube-system_61046699-9ebf-11e6-b240-b827eb1bc7df_90739c83
0b94f94b37f5        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 32 minutes ago       Up 31 minutes                           k8s_POD.da6fe110_kube-proxy-arm-i8i9l_kube-system_61046699-9ebf-11e6-b240-b827eb1bc7df_0ac06dd1
20b6033fe5c4        gcr.io/google_containers/kube-discovery-arm:1.0               "/usr/local/bin/kube-"   32 minutes ago       Up 32 minutes                           k8s_kube-discovery.dc22cdc3_kube-discovery-1943570393-mk3hj_kube-system_8406be58-9ebe-11e6-b240-b827eb1bc7df_7ae635ec
25d74cc199a8        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 38 minutes ago       Up 37 minutes                           k8s_POD.da6fe110_kube-discovery-1943570393-mk3hj_kube-system_8406be58-9ebe-11e6-b240-b827eb1bc7df_3eb75af5
f67d2908ebf2        gcr.io/google_containers/kube-apiserver-arm:v1.4.5            "/usr/local/bin/kube-"   38 minutes ago       Up 38 minutes                           k8s_kube-apiserver.de6cd9e8_kube-apiserver-pico-master_kube-system_edba96113ad5d531890324939419aee9_3f04e3c3
700bc7f0cecb        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 43 minutes ago       Up 42 minutes                           k8s_POD.da6fe110_kube-controller-manager-pico-master_kube-system_526eee21b7f5362563386efdd1e4d2a0_08288012
93693c4ef2c6        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 43 minutes ago       Up 42 minutes                           k8s_POD.da6fe110_kube-apiserver-pico-master_kube-system_edba96113ad5d531890324939419aee9_7e498cfa
7c0f1e4d7d1f        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 43 minutes ago       Up 42 minutes                           k8s_POD.da6fe110_etcd-pico-master_kube-system_286f1c9f6f34e719ddc002fa52767a2f_8f998986
ff04d59e7a0a        gcr.io/google_containers/pause-arm:3.0                        "/pause"                 43 minutes ago       Up 42 minutes                           k8s_POD.da6fe110_kube-scheduler-pico-master_kube-system_87a3f7a3c6b93863a1bb94f88c899398_e4a0cfda

If it should be accessible on 127.0.0.1:2379, then some port directives seem to be missing.

Indeed, the component statuses confirm it:

NAME                 STATUS      MESSAGE                                                                                        ERROR
scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: getsockopt: connection refused   
etcd-0               Unhealthy   Get http://127.0.0.1:2379/health: dial tcp 127.0.0.1:2379: getsockopt: connection refused      
controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: getsockopt: connection refused   
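A couple of checks that might narrow this down (a sketch; the container ID comes from the docker ps output above):

# Is etcd actually answering on the default client port?
curl -s http://127.0.0.1:2379/health

# What flags was the etcd container started with?
# (63653042d1a4 is the etcd container ID from `docker ps` above)
docker inspect --format '{{.Args}}' 63653042d1a4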

Also, is it normal that on an RPi 3 the load average is >4 most of the time (up to 6)?

Another strange thing: I had to reinstall kubectl because /usr/bin/kubectl was missing, whereas it is present on my other nodes where I just installed kubeadm and the other packages.

Should I restart from a vanilla HypriotOS? This is my third try, erasing state between attempts as the docs suggest:

systemctl stop kubelet;
docker rm -f -v $(docker ps -q);
find /var/lib/kubelet | xargs -n 1 findmnt -n -t tmpfs -o TARGET -T | uniq | xargs -r umount -v;
rm -r -f /etc/kubernetes /var/lib/kubelet /var/lib/etcd;
systemctl start kubelet;

@luxas
Owner

luxas commented Oct 30, 2016

The problem is that the load is too high for the Pi.
This has been fixed with one PR I made.
I'd suggest that you upgrade your kubeadm by adding this to /etc/apt/sources.list.d/kubernetes.list:

deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main

and then do apt-get update && apt-get upgrade.
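In shell form, the upgrade amounts to roughly this (a sketch; run as root):

# Add the unstable channel so apt can see the fixed kubeadm build
echo "deb http://apt.kubernetes.io/ kubernetes-xenial-unstable main" \
  >> /etc/apt/sources.list.d/kubernetes.list

# Pull in the upgraded packages
apt-get update && apt-get upgrade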

See: kubernetes/kubernetes#33859

Can you open a new issue on this repo with all these findings and suggestions, and we'll move the discussion there?

@nsteinmetz

OK, I will; thanks :)
