
kubedns 1.13.0 failed to start on v1.6.0-alpha.3 (s390x) #61

Closed
XiLongZheng opened this issue Feb 18, 2017 · 5 comments

@XiLongZheng

Just created a new v1.6.0-alpha.3 cluster on s390x and tried to deploy kubedns to it, but it failed. See the kubectl get pods and kubectl describe pods output below. Also attached (in the zip file) are my kubedns-controller.yaml and the shell script I use to start the Kubernetes cluster.
kubedns-failed-to-start-s390x.zip
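
For reference, the deploy step was roughly the following (a sketch; the exact invocation is in the attached shell script):

kubectl create -f kubedns-controller.yaml -n kube-system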

root@test-k8s-16-alpha3:/etc/kubernetes/server/bin# kubectl get pods -n kube-system
NAME                        READY     STATUS             RESTARTS   AGE
kube-dns-3552530395-h7s8k   2/3       CrashLoopBackOff   3          1m
root@test-k8s-16-alpha3:/etc/kubernetes/server/bin# kubectl describe pods kube-dns-3552530395-h7s8k -n kube-system
Name: kube-dns-3552530395-h7s8k
Namespace: kube-system
Node: 127.0.0.1/127.0.0.1
Start Time: Sat, 18 Feb 2017 13:54:06 +0000
Labels: k8s-app=kube-dns
pod-template-hash=3552530395
Status: Running
IP: 172.17.0.2
Controllers: ReplicaSet/kube-dns-3552530395
Containers:
kubedns:
Container ID: docker://3a3462b3d280c271f141292961654a70fabf0d6ae199c31739f37b4d84c5cd67
Image: gcr.io/google_containers/k8s-dns-kube-dns-s390x:1.13.0
Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-kube-dns-s390x@sha256:49a499ddc7e5ad4ef317cb7a136b033e64f55c191b511926151e344e31fc418a
Ports: 10053/UDP, 10053/TCP, 10055/TCP
Args:
--domain=cluster.local.
--dns-port=10053
--config-dir=/kube-dns-config
--v=2
Limits:
memory: 170Mi
Requests:
cpu: 100m
memory: 70Mi
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 255
Started: Mon, 01 Jan 0001 00:00:00 +0000
Finished: Sat, 18 Feb 2017 13:54:50 +0000
Ready: False
Restart Count: 3
Liveness: http-get http://:10054/healthcheck/kubedns delay=60s timeout=5s period=10s #success=1 #failure=5
Readiness: http-get http://:8081/readiness delay=3s timeout=5s period=10s #success=1 #failure=3
Volume Mounts:
/kube-dns-config from kube-dns-config (rw)
Environment Variables from:
Environment Variables:
PROMETHEUS_PORT: 10055
dnsmasq:
Container ID: docker://480ed642a294e518821292cb8d74645035d317f59f200d23c5cae9aa9a6a0359
Image: gcr.io/google_containers/k8s-dns-dnsmasq-s390x:1.13.0
Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-dnsmasq-s390x@sha256:1eb57c914d85af5a77a9af9632ad144106b3e12f68a8e8a734c5657c917753fd
Ports: 53/UDP, 53/TCP
Args:
--cache-size=1000
--server=/cluster.local/127.0.0.1#10053
--server=/in-addr.arpa/127.0.0.1#10053
--server=/ip6.arpa/127.0.0.1#10053
--log-facility=-
Requests:
cpu: 150m
memory: 10Mi
State: Running
Started: Sat, 18 Feb 2017 13:54:08 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:10054/healthcheck/dnsmasq delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
Environment Variables from:
Environment Variables:
sidecar:
Container ID: docker://90b09674ecc9ff3ba06701d0f5a12240cb7c685e4b8fb9430289199a22806173
Image: gcr.io/google_containers/k8s-dns-sidecar-s390x:1.13.0
Image ID: docker-pullable://gcr.io/google_containers/k8s-dns-sidecar-s390x@sha256:6b03af9d65be38542ff6df0c9a569e36e81aa9ee808dbef3a00b58d436455c02
Port: 10054/TCP
Args:
--v=2
--logtostderr
--probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
--probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
Requests:
cpu: 10m
memory: 20Mi
State: Running
Started: Sat, 18 Feb 2017 13:54:08 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:10054/metrics delay=60s timeout=5s period=10s #success=1 #failure=5
Volume Mounts:
Environment Variables from:
Environment Variables:
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
kube-dns-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: kube-dns
Optional: true
QoS Class: Burstable
Node-Selectors:
Tolerations: CriticalAddonsOnly=:Exists
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message


1m 1m 1 kubelet, 127.0.0.1 Normal SandboxReceived Pod sandbox received, it will be created.
1m 1m 1 default-scheduler Normal Scheduled Successfully assigned kube-dns-3552530395-h7s8k to 127.0.0.1
1m 1m 1 kubelet, 127.0.0.1 spec.containers{sidecar} Normal Created Created container with id 90b09674ecc9ff3ba06701d0f5a12240cb7c685e4b8fb9430289199a22806173
1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Created Created container with id 78672547a5404160b7cb483dda74487fe8e32d9a43657577df99b77af8e83042
1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Started Started container with id 78672547a5404160b7cb483dda74487fe8e32d9a43657577df99b77af8e83042
1m 1m 1 kubelet, 127.0.0.1 spec.containers{dnsmasq} Normal Pulled Container image "gcr.io/google_containers/k8s-dns-dnsmasq-s390x:1.13.0" already present on machine
1m 1m 1 kubelet, 127.0.0.1 spec.containers{dnsmasq} Normal Created Created container with id 480ed642a294e518821292cb8d74645035d317f59f200d23c5cae9aa9a6a0359
1m 1m 1 kubelet, 127.0.0.1 spec.containers{dnsmasq} Normal Started Started container with id 480ed642a294e518821292cb8d74645035d317f59f200d23c5cae9aa9a6a0359
1m 1m 1 kubelet, 127.0.0.1 spec.containers{sidecar} Normal Pulled Container image "gcr.io/google_containers/k8s-dns-sidecar-s390x:1.13.0" already present on machine
1m 1m 1 kubelet, 127.0.0.1 spec.containers{sidecar} Normal Started Started container with id 90b09674ecc9ff3ba06701d0f5a12240cb7c685e4b8fb9430289199a22806173
1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Created Created container with id 85fa416009ee095b281e4bdacb633346e4483c79e8727061662c7aaf25befe0e
1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Started Started container with id 85fa416009ee095b281e4bdacb633346e4483c79e8727061662c7aaf25befe0e
1m 1m 3 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubedns pod=kube-dns-3552530395-h7s8k_kube-system(b6b4170e-f5e1-11e6-9192-fa163ee87680)"

1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Created Created container with id 96ef011f27e7b00a8cdff6842228246fa29854de1f9c1a62ec0a3166cc8f1542
1m 1m 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Started Started container with id 96ef011f27e7b00a8cdff6842228246fa29854de1f9c1a62ec0a3166cc8f1542
59s 53s 2 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 20s restarting failed container=kubedns pod=kube-dns-3552530395-h7s8k_kube-system(b6b4170e-f5e1-11e6-9192-fa163ee87680)"

1m 40s 4 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Pulled Container image "gcr.io/google_containers/k8s-dns-kube-dns-s390x:1.13.0" already present on machine
40s 40s 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Created Created container with id 3a3462b3d280c271f141292961654a70fabf0d6ae199c31739f37b4d84c5cd67
39s 39s 1 kubelet, 127.0.0.1 spec.containers{kubedns} Normal Started Started container with id 3a3462b3d280c271f141292961654a70fabf0d6ae199c31739f37b4d84c5cd67
1m 6s 9 kubelet, 127.0.0.1 spec.containers{kubedns} Warning BackOff Back-off restarting failed container
38s 6s 4 kubelet, 127.0.0.1 Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubedns" with CrashLoopBackOff: "Back-off 40s restarting failed container=kubedns pod=kube-dns-3552530395-h7s8k_kube-system(b6b4170e-f5e1-11e6-9192-fa163ee87680)"

kubectl version
Client Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.3", GitCommit:"5802799e56c7fcd1638e5848a13c5f3b0b1479ab", GitTreeState:"clean", BuildDate:"2017-02-16T19:27:36Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/s390x"}
Server Version: version.Info{Major:"1", Minor:"6+", GitVersion:"v1.6.0-alpha.3", GitCommit:"5802799e56c7fcd1638e5848a13c5f3b0b1479ab", GitTreeState:"clean", BuildDate:"2017-02-16T19:17:01Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/s390x"}

@cmluciano

Can you post the container logs too?
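
For example, the previous logs of the crashing container can be pulled with something like:

kubectl logs kube-dns-3552530395-h7s8k -n kube-system -c kubedns --previous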

@XiLongZheng
Author

XiLongZheng commented Feb 20, 2017

Looks like the problem is with the k8s-dns-sidecar-s390x:1.13.0 image: the nobody group needs to be added to its /etc/group file. In short, append the line below to /etc/group in the k8s-dns-sidecar-s390x:1.13.0 image:

nobody:x:101:

Error message from the container:
Error response from daemon: {"message":"linux spec user: unable to find group nobody: no matching entries in group file"}
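
A possible local workaround until the image is rebuilt upstream (a minimal sketch, not the official fix; the 1.13.0-nobody-fix tag is just a placeholder) is to rebuild the sidecar image with a patched /etc/group and point the kube-dns deployment at the rebuilt image:

# group.patched = the image's original /etc/group with the missing "nobody:x:101:" line appended
cat > Dockerfile <<'EOF'
FROM gcr.io/google_containers/k8s-dns-sidecar-s390x:1.13.0
COPY group.patched /etc/group
EOF
docker build -t k8s-dns-sidecar-s390x:1.13.0-nobody-fix .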

@bowei
Member

bowei commented Feb 21, 2017

Please open a PR to fix this.

@gajju26
Contributor

gajju26 commented Mar 22, 2017

Hello XiLongZheng

I recently tested the new kube-dns version, 1.14.1, and did not see any problem with any of the images (kube-dns, dnsmasq, and sidecar), so could you test with the latest version? I came across this specific problem with the "exechealthz" image available for s390x, so can you confirm whether this issue is with the sidecar or with exechealthz?
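
A minimal sketch of the version bump, assuming the 1.13.0 image tags live in kubedns-controller.yaml:

sed -i 's/:1\.13\.0/:1.14.1/g' kubedns-controller.yaml
kubectl apply -f kubedns-controller.yaml -n kube-system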

@XiLongZheng
Author

XiLongZheng commented Apr 6, 2017

We just got a chance to try kube-dns 1.14.1 with Kubernetes 1.6.1 on s390x, and it works perfectly now. Thanks for the great help everyone provided! Closing this now.
