
[BUG] Connection refused for apiserver #9

Closed
yasassri opened this issue May 14, 2017 · 3 comments
yasassri commented May 14, 2017

Background

I followed the Kubernetes Guide to set up a basic K8S cluster with default parameters, except for the following two options added to kube-apiserver.yaml:

    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8090

My full kube-apiserver.yaml is as follows.

apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-apiserver
    image: quay.io/coreos/hyperkube:v1.6.1_coreos.0
    command:
    - /hyperkube
    - apiserver
    - --bind-address=0.0.0.0
    - --etcd-servers=http://192.168.57.13:2379
    - --allow-privileged=true
    - --service-cluster-ip-range=10.3.0.0/24
    - --secure-port=443
    - --insecure-bind-address=0.0.0.0
    - --insecure-port=8090
    - --advertise-address=192.168.57.130
    - --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota
    - --tls-cert-file=/etc/kubernetes/ssl/apiserver.pem
    - --tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --client-ca-file=/etc/kubernetes/ssl/ca.pem
    - --service-account-key-file=/etc/kubernetes/ssl/apiserver-key.pem
    - --runtime-config=extensions/v1beta1/networkpolicies=true
    - --anonymous-auth=false
    livenessProbe:
      httpGet:
        host: 127.0.0.1
        port: 8080
        path: /healthz
      initialDelaySeconds: 15
      timeoutSeconds: 15
    ports:
    - containerPort: 443
      hostPort: 443
      name: https
    - containerPort: 8080
      hostPort: 8080
      name: local
    volumeMounts:
    - mountPath: /etc/kubernetes/ssl
      name: ssl-certs-kubernetes
      readOnly: true
    - mountPath: /etc/ssl/certs
      name: ssl-certs-host
      readOnly: true
  volumes:
  - hostPath:
      path: /etc/kubernetes/ssl
    name: ssl-certs-kubernetes
  - hostPath:
      path: /usr/share/ca-certificates
    name: ssl-certs-host

Now when I start the kubelet I see the following errors.

This is the output of systemctl status kubelet:

● kubelet.service
   Loaded: loaded (/etc/systemd/system/kubelet.service; disabled; vendor preset: disabled)
   Active: active (running) since Sun 2017-05-14 08:54:41 UTC; 4min 31s ago
  Process: 14968 ExecStartPre=/usr/bin/mkdir -p /opt/cni/bin (code=exited, status=0/SUCCESS)
  Process: 14956 ExecStartPre=/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid (code=exited, status=254)
  Process: 14952 ExecStartPre=/usr/bin/mkdir -p /var/log/containers (code=exited, status=0/SUCCESS)
  Process: 14943 ExecStartPre=/usr/bin/mkdir -p /etc/kubernetes/manifests (code=exited, status=0/SUCCESS)
 Main PID: 14972 (kubelet)
    Tasks: 16 (limit: 32768)
   Memory: 1.3G
      CPU: 40.662s
   CGroup: /system.slice/kubelet.service
           ├─14972 /kubelet --api-servers=http://127.0.0.1:8080 --register-schedulable=false --cni-conf-dir=/etc/kubernetes/cni/net.d --network-plugin=cni --container-runtime=docker --allow-privileged=true --pod-manifest-path=/etc/kubernetes/manifests --hostname-override=192.168.57.130 --cluster_dns=10.3.0.10 --cluster_domain=cluster.local
           └─15165 journalctl -k -f

May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.170585   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.171555   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:10 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:10.172413   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.171287   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.172360   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:11 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:11.173376   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.169077   14972 eviction_manager.go:214] eviction manager: unexpected err: failed GetNode: node '192.168.57.130' not found
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.171928   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:390: Failed to list *v1.Node: Get http://127.0.0.1:8080/api/v1/nodes?fieldSelector=metadata.name%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.172765   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get http://127.0.0.1:8080/api/v1/pods?fieldSelector=spec.nodeName%3D192.168.57.130&resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused
May 14 08:59:12 yasassri-test-b9064eab-d104-4183-b42f-6cb5e120ca67.novalocal kubelet-wrapper[14972]: E0514 08:59:12.173750   14972 reflector.go:190] k8s.io/kubernetes/pkg/kubelet/kubelet.go:382: Failed to list *v1.Service: Get http://127.0.0.1:8080/api/v1/services?resourceVersion=0: dial tcp 127.0.0.1:8080: getsockopt: connection refused

Other logs from /var/log/pods:

{"log":"E0514 09:02:28.606961       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ServiceAccount: Get https://localhost:443/api/v1/serviceaccounts?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.60733353Z"}
{"log":"E0514 09:02:28.607194       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *storage.StorageClass: Get https://localhost:443/apis/storage.k8s.io/v1beta1/storageclasses?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.607413819Z"}
{"log":"E0514 09:02:28.607719       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.LimitRange: Get https://localhost:443/api/v1/limitranges?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.607890803Z"}
{"log":"E0514 09:02:28.609090       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.ResourceQuota: Get https://localhost:443/api/v1/resourcequotas?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.609334802Z"}
{"log":"E0514 09:02:28.617184       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Secret: Get https://localhost:443/api/v1/secrets?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.617450991Z"}
{"log":"E0514 09:02:28.628247       1 reflector.go:201] k8s.io/kubernetes/pkg/client/informers/informers_generated/internalversion/factory.go:70: Failed to list *api.Namespace: Get https://localhost:443/api/v1/namespaces?resourceVersion=0: dial tcp [::1]:443: getsockopt: connection refused\n","stream":"stderr","time":"2017-05-14T09:02:28.628501464Z"}
{"log":"[restful] 2017/05/14 09:02:28 log.go:30: [restful/swagger] listing is available at https://192.168.57.130:443/swaggerapi/\n","stream":"stderr","time":"2017-05-14T09:02:28.657301606Z"}
{"log":"[restful] 2017/05/14 09:02:28 log.go:30: [restful/swagger] https://192.168.57.130:443/swaggerui/ is mapped to folder /swagger-ui/\n","stream":"stderr","time":"2017-05-14T09:02:28.657350995Z"}
{"log":"I0514 09:02:28.863874       1 serve.go:79] Serving securely on 0.0.0.0:443\n","stream":"stderr","time":"2017-05-14T09:02:28.864169072Z"}
{"log":"I0514 09:02:28.864109       1 serve.go:94] Serving insecurely on 0.0.0.0:8090\n","stream":"stderr","time":"2017-05-14T09:02:28.864209629Z"}
{"log":"E0514 09:02:29.349333       1 status.go:62] apiserver received an error that is not an metav1.Status: rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T09:02:29.349625692Z"}
{"log":"E0514 09:02:29.381326       1 client_ca_hook.go:58] rpc error: code = 13 desc = transport is closing\n","stream":"stderr","time":"2017-05-14T09:02:29.381658997Z"}

I also came across kubernetes/kubeadm#226; I'm not sure whether it's related. Please let me know if you need more information.

@yasassri (Author)

Everything started working after setting the following parameters in kube-apiserver.yaml:

--storage-backend=etcd2
--storage-media-type=application/json
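For reference, here is where those two flags would sit in the manifest posted above. This is only a sketch of the command list, using the etcd address and image from that file; the remaining flags are unchanged:

```yaml
# Excerpt of kube-apiserver.yaml (command list only) with the workaround added.
    command:
    - /hyperkube
    - apiserver
    - --etcd-servers=http://192.168.57.13:2379
    - --storage-backend=etcd2               # talk to etcd using the etcd v2 API
    - --storage-media-type=application/json # serialize objects as JSON, not protobuf
    # ... all other flags from the original manifest stay as they were ...
```

The kubelet picks up changes to files under /etc/kubernetes/manifests automatically, so restarting the kubelet should not be necessary after editing the manifest.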

@adarshaj

This helped us too! Thanks a lot!

@aayushSID

Everything started working after setting the following parameters in kube-apiserver.yaml
--storage-backend=etcd2
--storage-media-type=application/json

I am facing the same issue but when I try this workaround the yaml file gets rewritten during the kubeadm init step. Any idea how to resolve this?
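Since kubeadm regenerates the static pod manifests during init, hand-edits to kube-apiserver.yaml won't survive. One option may be to pass the extra flags through a kubeadm config file instead. This is a hypothetical sketch; the apiServerExtraArgs field exists in kubeadm's MasterConfiguration, but verify the apiVersion and field names against your kubeadm version before relying on it:

```yaml
# kubeadm-config.yaml -- sketch; check field names for your kubeadm version
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
apiServerExtraArgs:
  storage-backend: etcd2
  storage-media-type: application/json
```

Then run kubeadm init --config kubeadm-config.yaml, so the generated kube-apiserver manifest includes the flags from the start.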
