
kubelet is not starting pod from manifest. #41724

Closed
pradeeppandeyy opened this issue Feb 19, 2017 · 3 comments

Comments

@pradeeppandeyy

Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):

What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):


Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.7", GitCommit:"92b4f971662de9d8770f8dcd2ee01ec226a6f6c0", GitTreeState:"clean", BuildDate:"2016-12-10T04:49:33Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Unable to connect to the server: read tcp 127.0.0.1:51254->127.0.0.1:8080: read: connection reset by peer

Environment:

  • Cloud provider or hardware configuration:

  • OS (e.g. from /etc/os-release):
    RHEL 7.3

  • Kernel (e.g. uname -a):
Linux master1 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:
    Installed from binary

  • Others:

What happened:

kubelet is not starting the pod from the manifest.

What you expected to happen:
I am using the service file below with Kubernetes version 1.4.7:

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/bin/kubelet \
  --allow-privileged=true \
  --api-servers=https://10.25.0.20:6443,https://10.25.0.21:6443 \
  --cloud-provider= \
  --cluster-dns=10.32.0.10 \
  --config=/etc/kubernetes/manifests \
  --cluster-domain=cluster.local \
  --configure-cbr0=true \
  --container-runtime=docker \
  --docker=unix:///var/run/docker.sock \
  --network-plugin=kubenet \
  --kubeconfig=/var/lib/kubelet/kubeconfig \
  --reconcile-cidr=true \
  --serialize-image-pulls=false \
  --tls-cert-file=/var/lib/kubernetes/kubernetes.pem \
  --tls-private-key-file=/var/lib/kubernetes/kubernetes-key.pem \
  --v=2

Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

@baguasquirrel

I'm having similar issues. I've tested the manifests on a working cluster, and they should be working.

@baguasquirrel

Okay, I think I've found the issue.

The containers won't come up until the kubelet gets a pod subnet. It either gets one from kube-controller-manager, or you have to specify it with --pod-cidr. On the controller nodes you won't have a kube-controller-manager to talk to yet, and it doesn't matter anyway, because those kubelets shouldn't be schedulable (--register-schedulable should be false for them), so you have to specify --pod-cidr manually.
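A sketch of what that change might look like in the [Service] section of the unit file above, on a controller node. The 10.200.0.0/24 CIDR is a placeholder, not a value from this thread; the remaining flags (certs, kubeconfig, etc.) stay as they were:

```ini
# Sketch of controller-node kubelet flags per the comment above.
# 10.200.0.0/24 is a placeholder pod CIDR; pick a range that does not
# overlap your service CIDR or the pod ranges of other nodes.
[Service]
ExecStart=/usr/bin/kubelet \
  --register-schedulable=false \
  --pod-cidr=10.200.0.0/24 \
  --config=/etc/kubernetes/manifests
```

After editing the unit, a `systemctl daemon-reload` followed by `systemctl restart kubelet` would pick up the new flags.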

@pradeeppandeyy
Author

Hi Arthur,

Thanks, the solution is working for me.
