Failed to create SubnetManager - x509 does not contain any IP SANs #1021

Open · bladerunner512 opened this issue Jul 16, 2018 · 3 comments

bladerunner512 commented Jul 16, 2018

When using a separate, TLS-secured etcd cluster, the Flannel pod fails to run.

Expected Behavior

The Flannel pod should start up successfully.

Current Behavior

$ kubectl -n kube-system logs kube-flannel-ds-pz58n
I0716 20:11:18.907759 1 main.go:475] Determining IP address of default interface
I0716 20:11:18.908528 1 main.go:488] Using interface with name eth0 and address xx.xxx.xxx.174
I0716 20:11:18.908549 1 main.go:505] Defaulting external address to interface address (xx.xxx.xxx.174)
E0716 20:11:19.238180 1 main.go:232] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-flannel-ds-pz58n': Get https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-pz58n: x509: cannot validate certificate for 10.96.0.1 because it doesn't contain any IP SANs

Output from kubeadm init indicates that the apiserver serving cert includes the service IP in its SANs:
...
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [node1624 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 xx.xxx.xxx.174 127.0.0.1]
...

Possible Solution

The Kubernetes API server listens on port 6443, not 443. Could that be the issue, since the error shows the pod spec being fetched from https://10.96.0.1:443/api/v1/namespaces/kube-system/pods/kube-flannel-ds-pz58n?
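In practice the kubernetes Service IP on port 443 just forwards to the apiserver's secure port (6443 by default), so the port mapping itself is usually not the problem; what matters is the certificate actually being served. A quick way to check (a diagnostic sketch assuming default kubeadm cert paths, not commands from the original report):

    # Dump the SANs of the cert actually presented on the service IP (run from a node):
    $ echo | openssl s_client -connect 10.96.0.1:443 2>/dev/null \
        | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'

    # Compare with the on-disk apiserver cert that kubeadm generated:
    $ openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text \
        | grep -A1 'Subject Alternative Name'

If the served cert lacks the IP SANs that the on-disk cert has, the apiserver is still serving an older certificate and needs to be restarted after cert regeneration.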

Steps to Reproduce (for bugs)

  1. kubeadm init --config /etc/kubernetes/k8s_api_conf.yaml

     k8s_api_conf.yaml (the etcd endpoints were truncated in the report; a sketch of a TLS-enabled etcd section follows these steps):

       apiVersion: kubeadm.k8s.io/v1alpha1
       kind: MasterConfiguration
       kubernetesVersion: v1.10.0
       api:
         advertiseAddress: xx.xxx.xxx.xxx
       etcd:
         endpoints:

  2. Deploy flannel.yaml, edited to add the etcd certs (the full DaemonSet is posted in the follow-up comment below):

       ...
       containers:
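For reference, an external-etcd section with TLS in a v1alpha1 MasterConfiguration looks roughly like this (a sketch: the endpoint URL is a placeholder, and the cert paths are assumed to match the ones used in the flannel args; neither comes from the original report):

    apiVersion: kubeadm.k8s.io/v1alpha1
    kind: MasterConfiguration
    kubernetesVersion: v1.10.0
    api:
      advertiseAddress: xx.xxx.xxx.xxx
    etcd:
      endpoints:
        - https://etcd0.example.com:2379   # placeholder, not the reporter's endpoint
      caFile: /etc/kubernetes/pki/etcd/ca.pem
      certFile: /etc/kubernetes/pki/etcd/client.pem
      keyFile: /etc/kubernetes/pki/etcd/client-key.pem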

Context

I want to run Flannel against a separate, TLS-secured etcd cluster.

Your Environment

  • Flannel version: v0.10.0-amd64
  • Backend used (e.g. vxlan or udp): vxlan
  • Etcd version: 3.2.12
  • Kubernetes version (if used): v1.10
  • Operating System and version: Ubuntu 16.04
bladerunner512 commented Jul 19, 2018

RBAC is already applied (manifests below). This error only occurs when using a separate, secure etcd cluster; when etcd is on the master node, Flannel works. Etcd itself also works fine from both outside and inside the cluster, i.e. multiple masters come up (apiserver, controller-manager, proxy, and scheduler) with no errors.
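One way to sanity-check etcd reachability from a node, using the same endpoint and cert paths as the flannel args below (a diagnostic sketch, not a command from the original report; these are the etcd v2 flags, while v3 uses --cacert/--cert/--key and `endpoint health`):

    $ etcdctl --endpoints=https://127.0.0.1:4242 \
        --ca-file=/etc/kubernetes/pki/etcd/ca.pem \
        --cert-file=/etc/kubernetes/pki/etcd/client.pem \
        --key-file=/etc/kubernetes/pki/etcd/client-key.pem \
        cluster-health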

The kubelet log also shows apiserver Unauthorized errors:

kubelet[60238]: E0719 10:15:34.603650 60238 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unauthorized
kubelet[60238]: E0719 10:15:34.604275 60238 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Unauthorized
kubelet[60238]: E0719 10:15:34.605166 60238 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Unauthorized
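A quick way to tell whether the flannel ServiceAccount's RBAC (manifests below) is at fault, as opposed to the kubelet's own credentials, is impersonation (a diagnostic sketch, not from the original report):

    $ kubectl auth can-i get pods \
        --as=system:serviceaccount:kube-system:flannel \
        -n kube-system

If this prints `yes` while the kubelet still logs Unauthorized, the problem is the kubelet's client credentials rather than the flannel RBAC.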


---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
  - kind: ServiceAccount
    name: flannel
    namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system


---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }


---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
        - name: install-cni
          image: quay.io/coreos/flannel:v0.10.0-amd64
          command:
            - cp
          args:
            - -f
            - /etc/kube-flannel/cni-conf.json
            - /etc/cni/net.d/10-flannel.conflist
          volumeMounts:
            - name: cni
              mountPath: /etc/cni/net.d
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      containers:
        - name: kube-flannel
          image: quay.io/coreos/flannel:v0.10.0-amd64
          command:
            - /opt/bin/flanneld
          args:
            - --iface=eth0
            - --ip-masq
            - --kube-subnet-mgr
            - --etcd-endpoints=https://127.0.0.1:4242
            - --etcd-keyfile=/etc/kubernetes/pki/etcd/client-key.pem
            - --etcd-certfile=/etc/kubernetes/pki/etcd/client.pem
            - --etcd-cafile=/etc/kubernetes/pki/etcd/ca.pem
          resources:
            requests:
              cpu: "100m"
              memory: "50Mi"
            limits:
              cpu: "100m"
              memory: "50Mi"
          securityContext:
            privileged: true
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          volumeMounts:
            - name: run
              mountPath: /run
            - name: flannel-cfg
              mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-flannel-cfg
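One observation on the manifest above: the flanneld args reference certs under /etc/kubernetes/pki/etcd/, but no volume mounts that path into the container, so those files would not exist inside the pod. If the certs live at that path on each host, a mount along these lines would expose them (a sketch, not part of the reporter's manifest):

          volumeMounts:
            - name: etcd-pki                       # hypothetical volume name
              mountPath: /etc/kubernetes/pki/etcd
              readOnly: true
      volumes:
        - name: etcd-pki
          hostPath:
            path: /etc/kubernetes/pki/etcd         # assumes certs exist here on every node

That said, the error in the original report comes from the Kubernetes API path, which is consistent with --kube-subnet-mgr being in effect: with the kube subnet manager, flanneld talks to the apiserver rather than etcd.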


FengyunPan2 commented Nov 7, 2018

I've met this issue too. Any thoughts?
