
K8S 1.7 is failing to start on RHEL 7.3 with Native Docker 1.10 #9348

Closed
galal-hussein opened this issue Jul 14, 2017 · 3 comments
@galal-hussein (Contributor)

Rancher versions:
rancher/server: v1.6.3
kubernetes (if applicable): rancher/k8s:v1.7.0-rancher3

Docker version: (docker version, docker info preferred)
Native Docker 1.10.3 (docker-1.10.3-59.el7.x86_64)
Operating system and kernel: (cat /etc/os-release, uname -r preferred)
RHEL 7.3 with SELinux enforced
Type/provider of hosts: (VirtualBox/Bare-metal/AWS/GCE/DO)
AWS
Setup details: (single node rancher vs. HA rancher, internal DB vs. external DB)
single node rancher
Environment Template: (Cattle/Kubernetes/Swarm/Mesos)
kubernetes
Steps to Reproduce:

  • Deploy Kubernetes 1.7

Results:
Add-ons and new pods fail to start.

Name:		tiller-deploy-737598192-dvwhq
...............
  FirstSeen	LastSeen	Count	From							SubObjectPath	Type		Reason			Message
  ---------	--------	-----	----							-------------	--------	------			-------
  5m		5m		4	{default-scheduler }							Warning		FailedScheduling	no nodes available to schedule pods
$ kubectl get nodes
NAME                                           STATUS    AGE
ip-172-16-100-135.us-west-2.compute.internal   Ready     9m
ip-172-16-100-225.us-west-2.compute.internal   Ready     9m
ip-172-16-100-24.us-west-2.compute.internal    Ready     9m
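
For reference, the output above can be gathered with standard kubectl commands; a minimal sketch, assuming the add-on pods live in the kube-system namespace:

$ kubectl -n kube-system describe pod tiller-deploy-737598192-dvwhq   # scheduler events for the stuck pod
$ kubectl get nodes                                                   # all three nodes still report Ready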

Kubelet Logs:

7/14/2017 3:12:49 PME0714 12:12:49.565953   19341 kuberuntime_manager.go:618] createPodSandbox for pod "heapster-4285517626-x4xbx_kube-system(667efce8-688c-11e7-bb67-027b0bd08231)" failed: rpc error: code = 2 desc = failed to create a sandbox for pod "heapster-4285517626-x4xbx": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:49 PME0714 12:12:49.565979   19341 pod_workers.go:182] Error syncing pod 667efce8-688c-11e7-bb67-027b0bd08231 ("heapster-4285517626-x4xbx_kube-system(667efce8-688c-11e7-bb67-027b0bd08231)"), skipping: failed to "CreatePodSandbox" for "heapster-4285517626-x4xbx_kube-system(667efce8-688c-11e7-bb67-027b0bd08231)" with CreatePodSandboxError: "CreatePodSandbox for pod \"heapster-4285517626-x4xbx_kube-system(667efce8-688c-11e7-bb67-027b0bd08231)\" failed: rpc error: code = 2 desc = failed to create a sandbox for pod \"heapster-4285517626-x4xbx\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
7/14/2017 3:12:50 PMI0714 12:12:50.553239   19341 kuberuntime_manager.go:457] Container {Name:tiller Image:gcr.io/kubernetes-helm/tiller:v2.3.0 Command:[] Args:[] WorkingDir: Ports:[{Name:tiller HostPort:0 ContainerPort:44134 Protocol:TCP HostIP:}] EnvFrom:[] Env:[] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:io-rancher-system-token-b0xns ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:44135,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:1,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readiness,Port:44135,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:1,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
7/14/2017 3:12:50 PME0714 12:12:50.553349   19341 kubelet_network.go:139] Search Line limits were exceeded, some dns names have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
7/14/2017 3:12:50 PMI0714 12:12:50.557916   19341 kuberuntime_manager.go:457] Container {Name:grafana Image:gcr.io/google_containers/heapster-grafana-amd64:v4.0.2 Command:[] Args:[] WorkingDir: Ports:[{Name: HostPort:0 ContainerPort:3000 Protocol:TCP HostIP:}] EnvFrom:[] Env:[{Name:INFLUXDB_HOST Value:monitoring-influxdb ValueFrom:nil} {Name:INFLUXDB_SERVICE_URL Value:http://monitoring-influxdb:8086 ValueFrom:nil} {Name:GRAFANA_PORT Value:3000 ValueFrom:nil} {Name:GF_AUTH_BASIC_ENABLED Value:false ValueFrom:nil} {Name:GF_AUTH_ANONYMOUS_ENABLED Value:true ValueFrom:nil} {Name:GF_AUTH_ANONYMOUS_ORG_ROLE Value:Admin ValueFrom:nil} {Name:GF_SERVER_ROOT_URL Value:/ ValueFrom:nil}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:grafana-storage ReadOnly:false MountPath:/var SubPath:} {Name:io-rancher-system-token-b0xns ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
7/14/2017 3:12:50 PME0714 12:12:50.558005   19341 kubelet_network.go:139] Search Line limits were exceeded, some dns names have been omitted, the applied search line is: kube-system.svc.cluster.local svc.cluster.local cluster.local kubelet.kubernetes.rancher.internal kubernetes.rancher.internal rancher.internal
7/14/2017 3:12:50 PME0714 12:12:50.571159   19341 remote_runtime.go:91] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = failed to create a sandbox for pod "tiller-deploy-737598192-dvwhq": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571184   19341 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "tiller-deploy-737598192-dvwhq_kube-system(6683fadf-688c-11e7-bb67-027b0bd08231)" failed: rpc error: code = 2 desc = failed to create a sandbox for pod "tiller-deploy-737598192-dvwhq": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571196   19341 kuberuntime_manager.go:618] createPodSandbox for pod "tiller-deploy-737598192-dvwhq_kube-system(6683fadf-688c-11e7-bb67-027b0bd08231)" failed: rpc error: code = 2 desc = failed to create a sandbox for pod "tiller-deploy-737598192-dvwhq": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571222   19341 pod_workers.go:182] Error syncing pod 6683fadf-688c-11e7-bb67-027b0bd08231 ("tiller-deploy-737598192-dvwhq_kube-system(6683fadf-688c-11e7-bb67-027b0bd08231)"), skipping: failed to "CreatePodSandbox" for "tiller-deploy-737598192-dvwhq_kube-system(6683fadf-688c-11e7-bb67-027b0bd08231)" with CreatePodSandboxError: "CreatePodSandbox for pod \"tiller-deploy-737598192-dvwhq_kube-system(6683fadf-688c-11e7-bb67-027b0bd08231)\" failed: rpc error: code = 2 desc = failed to create a sandbox for pod \"tiller-deploy-737598192-dvwhq\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
7/14/2017 3:12:50 PME0714 12:12:50.571258   19341 remote_runtime.go:91] RunPodSandbox from runtime service failed: rpc error: code = 2 desc = failed to create a sandbox for pod "monitoring-grafana-3552275057-mn6bs": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571271   19341 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "monitoring-grafana-3552275057-mn6bs_kube-system(667b4974-688c-11e7-bb67-027b0bd08231)" failed: rpc error: code = 2 desc = failed to create a sandbox for pod "monitoring-grafana-3552275057-mn6bs": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571281   19341 kuberuntime_manager.go:618] createPodSandbox for pod "monitoring-grafana-3552275057-mn6bs_kube-system(667b4974-688c-11e7-bb67-027b0bd08231)" failed: rpc error: code = 2 desc = failed to create a sandbox for pod "monitoring-grafana-3552275057-mn6bs": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as "xxx.slice"
7/14/2017 3:12:50 PME0714 12:12:50.571303   19341 pod_workers.go:182] Error syncing pod 667b4974-688c-11e7-bb67-027b0bd08231 ("monitoring-grafana-3552275057-mn6bs_kube-system(667b4974-688c-11e7-bb67-027b0bd08231)"), skipping: failed to "CreatePodSandbox" for "monitoring-grafana-3552275057-mn6bs_kube-system(667b4974-688c-11e7-bb67-027b0bd08231)" with CreatePodSandboxError: "CreatePodSandbox for pod \"monitoring-grafana-3552275057-mn6bs_kube-system(667b4974-688c-11e7-bb67-027b0bd08231)\" failed: rpc error: code = 2 desc = failed to create a sandbox for pod \"monitoring-grafana-3552275057-mn6bs\": Error response from daemon: cgroup-parent for systemd cgroup should be a valid slice named as \"xxx.slice\""
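
The "xxx.slice" failures above come from the Docker daemon: when it runs with the systemd cgroup driver it only accepts cgroup parents named like a systemd slice, so a kubelet handing it cgroupfs-style parents (the kubelet's --cgroup-driver defaults to cgroupfs) gets rejected. A hedged check of both sides, assuming the usual RHEL layout (the /etc/sysconfig/docker path and the native.cgroupdriver exec-opt reflect how the RHEL docker-1.10.3 package is commonly configured and may differ elsewhere):

$ grep -i cgroupdriver /etc/sysconfig/docker   # daemon side: RHEL builds usually pass --exec-opt native.cgroupdriver=systemd
$ ps -ef | grep [k]ubelet                      # kubelet side: look for a --cgroup-driver flag (default is cgroupfs)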
@galal-hussein added the area/kubernetes and kind/bug labels on Jul 14, 2017
@galal-hussein added this to the July 2017 milestone on Jul 14, 2017
@alena1108 assigned alena1108 and sangeethah and unassigned joshwget on Jul 14, 2017
@alena1108

@sangeethah the bug can be verified using the rancher/k8s:v1.7.0-rancher3 k8s image, available in the catalog branch corresponding to 1.7.

@alena1108

@sangeethah ignore my comment, I was thinking of another issue. Will work on this one.

@alena1108

Closing, as we support only Docker 1.12.x for k8s. @galal-hussein
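
For anyone landing here with the same symptom, a quick hedged check that a host's Docker matches the supported 1.12.x line before deploying (docker version's --format template is standard CLI syntax):

$ docker version --format '{{.Server.Version}}'   # expect a 1.12.x version on supported hosts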
