Should mention the need of allow-privileged on kubelet configuration #1123

Closed
tmjd opened this issue Sep 21, 2017 · 1 comment · Fixed by #2270

Comments

tmjd (Member) commented Sep 21, 2017

The Kubernetes kubelet needs the allow-privileged flag when running hosted Calico, so that Felix can make the iptables and sysctl changes it needs to make. This requirement is not currently in the K8s documentation (that I could find). At a minimum it should be added to this section: https://docs.projectcalico.org/v2.5/getting-started/kubernetes/installation/integration#configuring-the-kubelet.
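For reference, this is roughly the kind of snippet the docs could show (a sketch only; --allow-privileged=true is the relevant kubelet flag, but every other flag and path below is illustrative and depends on how the node was bootstrapped):

```
# Illustrative kubelet invocation: privileged containers must be allowed
# so the calico-node DaemonSet (which sets privileged: true) can run.
kubelet \
  --allow-privileged=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig   # hypothetical path, varies per install
```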

Expected Behavior

Docs should mention the need for allow-privileged flag for the kubelet

Current Behavior

They don't mention it.

Possible Solution

Fix it. 😄

defo89 commented Jul 2, 2018

I had the same experience when working with the "Installing Calico for policy and networking" docs.

I bootstrapped worker nodes manually without allow-privileged, and after running

kubectl apply -f https://docs.projectcalico.org/v3.1/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

the Calico pods were not starting.
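A quick way to list the affected pods (the k8s-app=calico-node label comes from the calico-node DaemonSet in the manifest above):

```
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide
```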

kubectl describe pod calico-node-cxtv2 --namespace kube-system

Name:           calico-node-cxtv2
Namespace:      kube-system
Node:           k8s-worker2/10.10.10.22
Start Time:     Sun, 01 Jul 2018 16:40:28 +0300
Labels:         controller-revision-hash=1808776410
                k8s-app=calico-node
                pod-template-generation=1
Annotations:    scheduler.alpha.kubernetes.io/critical-pod=
Status:         Pending
Reason:         Forbidden
Message:        pod with UID "50f6639a-7d34-11e8-be40-000c29e99a17" specified privileged container, but is disallowed
IP:             10.10.10.22
Controlled By:  DaemonSet/calico-node
Containers:
  calico-node:
    Container ID:
    Image:          quay.io/calico/node:v3.1.3
    Image ID:
    Port:           <none>
    State:          Waiting
      Reason:       Blocked
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:      250m
    Liveness:   http-get http://:9099/liveness delay=10s timeout=1s period=10s #success=1 #failure=6
    Readiness:  http-get http://:9099/readiness delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      DATASTORE_TYPE:                     kubernetes
      FELIX_LOGSEVERITYSCREEN:            info
      CLUSTER_TYPE:                       k8s,bgp
      CALICO_DISABLE_FILE_LOGGING:        true
      FELIX_DEFAULTENDPOINTTOHOSTACTION:  ACCEPT
      FELIX_IPV6SUPPORT:                  false
      FELIX_IPINIPMTU:                    1440
      WAIT_FOR_DATASTORE:                 true
      CALICO_IPV4POOL_CIDR:               192.168.0.0/16
      CALICO_IPV4POOL_IPIP:               Always
      FELIX_IPINIPENABLED:                true
      FELIX_TYPHAK8SSERVICENAME:          <set to the key 'typha_service_name' of config map 'calico-config'>  Optional: false
      NODENAME:                            (v1:spec.nodeName)
      IP:                                 autodetect
      FELIX_HEALTHENABLED:                true
    Mounts:
      /lib/modules from lib-modules (ro)
      /var/lib/calico from var-lib-calico (rw)
      /var/run/calico from var-run-calico (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-cshsw (ro)
  install-cni:
    Container ID:
    Image:         quay.io/calico/cni:v3.1.3
    Image ID:
    Port:          <none>
    Command:
      /install-cni.sh
    State:          Waiting
      Reason:       Blocked
    Ready:          False
    Restart Count:  0
    Environment:
      CNI_CONF_NAME:         10-calico.conflist
      CNI_NETWORK_CONFIG:    <set to the key 'cni_network_config' of config map 'calico-config'>  Optional: false
      KUBERNETES_NODE_NAME:   (v1:spec.nodeName)
    Mounts:
      /host/etc/cni/net.d from cni-net-dir (rw)
      /host/opt/cni/bin from cni-bin-dir (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from calico-node-token-cshsw (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          False
  PodScheduled   True
Volumes:
  lib-modules:
    Type:  HostPath (bare host directory volume)
    Path:  /lib/modules
  var-run-calico:
    Type:  HostPath (bare host directory volume)
    Path:  /var/run/calico
  var-lib-calico:
    Type:  HostPath (bare host directory volume)
    Path:  /var/lib/calico
  cni-bin-dir:
    Type:  HostPath (bare host directory volume)
    Path:  /opt/cni/bin
  cni-net-dir:
    Type:  HostPath (bare host directory volume)
    Path:  /etc/cni/net.d
  calico-node-token-cshsw:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  calico-node-token-cshsw
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     :NoSchedule
                 :NoExecute
                 :NoSchedule
                 :NoExecute
                 CriticalAddonsOnly
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:          <none>

After adding --allow-privileged=true to the /etc/systemd/system/kubelet.service unit file, the pods started successfully.
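For anyone else hitting this, a minimal sketch of the change, assuming a systemd-managed kubelet (the binary path, kubeconfig path, and the rest of the unit are illustrative; only the flag itself is the point):

```
# /etc/systemd/system/kubelet.service -- illustrative fragment, not a complete unit
[Service]
ExecStart=/usr/local/bin/kubelet \
  --allow-privileged=true \
  --kubeconfig=/var/lib/kubelet/kubeconfig
# then: systemctl daemon-reload && systemctl restart kubelet
```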
