ip_forward is not enabled #31

Closed
dhoard opened this issue Sep 11, 2019 · 5 comments

@dhoard

dhoard commented Sep 11, 2019

Synopsis

I am following the article at https://blog.kontena.io/akrobateo-general-purpose-loadbalancer-for-k8s/ to set up Akrobateo on my local Kubernetes cluster (bare metal) installed via RKE.

I have removed the default installed RKE ingress-nginx namespace and controller.

I applied 01_role.yaml, 02_service_account.yaml, 03_role_binding.yaml, and 04_operator.yaml without any issues.
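For reference, these can be applied in one go:

> kubectl apply -f 01_role.yaml -f 02_service_account.yaml -f 03_role_binding.yaml -f 04_operator.yaml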

I deployed my sample application.

Expected Behavior

Akrobateo should expose the application externally.

> kubectl get services

... should show an external IP address for each of my nodes.

Actual Behavior

Akrobateo pods failed to start. The logs show ip_forward is not enabled.

Looking at akrobateo/lb-image/entrypoint.sh, this will happen if cat /proc/sys/net/ipv4/ip_forward doesn't return 1.
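Presumably the guard in entrypoint.sh looks something like this (a sketch; the actual script may differ):

# sketch of the ip_forward check in lb-image/entrypoint.sh
if [ "$(cat /proc/sys/net/ipv4/ip_forward)" != "1" ]; then
  echo "ip_forward is not enabled"
  exit 1
fi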

I have confirmed on all 3 nodes that...

> cat /proc/sys/net/ipv4/ip_forward
1
> kubectl get services
NAME          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
hello-world   LoadBalancer   10.43.140.99   <pending>     8080:32766/TCP   26m
kubernetes    ClusterIP      10.43.0.1      <none>        443/TCP          122m
@jnummelin
Contributor

jnummelin commented Sep 12, 2019

Could you double-check that the LB pods are created as expected with kubectl get pod <lb-pod-name> -o yaml, just to make sure all the needed bits are there?

If those still look as expected, maybe try to run a pod similar to the LB pods and see what cat /proc/sys/net/ipv4/ip_forward actually dumps out. If ip_forward is enabled on the host, I don't immediately see why it would fail in a pod with the NET_ADMIN capability set.
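For a quick look at the namespace default (without NET_ADMIN; the image choice here is illustrative), a one-off pod would do:

> kubectl run ip-forward-check --rm -it --restart=Never --image=busybox -- cat /proc/sys/net/ipv4/ip_forward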

@dhoard
Author

dhoard commented Sep 12, 2019

@jnummelin I see where ip_forward is enabled ...

  initContainers:
  - command:
    - sh
    - -c
    - sysctl -w net.ipv4.ip_forward=1

Here is the output for one of the pods ...

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/podIP: 10.42.2.18/32
  creationTimestamp: "2019-09-12T14:13:28Z"
  generateName: akrobateo-tomcat-test-
  labels:
    akrobateo.kontena.io/svcname: tomcat-test
    app: akrobateo-tomcat-test
    controller-revision-hash: 6fb4d87bb6
    pod-template-generation: "1"
  name: akrobateo-tomcat-test-42wkq
  namespace: default
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: akrobateo-tomcat-test
    uid: 7e193faa-d567-11e9-b711-5254007875e7
  resourceVersion: "157713"
  selfLink: /api/v1/namespaces/default/pods/akrobateo-tomcat-test-42wkq
  uid: 7e26d155-d567-11e9-b711-5254007875e7
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchFields:
          - key: metadata.name
            operator: In
            values:
            - 172.20.1.156
  containers:
  - env:
    - name: SRC_PORT
      value: "80"
    - name: DEST_PROTO
      value: TCP
    - name: DEST_PORT
      value: "80"
    - name: DEST_IP
      value: 10.43.177.188
    image: registry.pharos.sh/kontenapharos/akrobateo-lb:latest
    imagePullPolicy: IfNotPresent
    name: http
    ports:
    - containerPort: 80
      hostPort: 80
      name: http
      protocol: TCP
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tjrs9
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - command:
    - sh
    - -c
    - sysctl -w net.ipv4.ip_forward=1
    image: registry.pharos.sh/kontenapharos/akrobateo-lb:latest
    imagePullPolicy: Always
    name: sysctl
    resources: {}
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-tjrs9
      readOnly: true
  nodeName: 172.20.1.156
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
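For completeness, the sysctl init container's output for this pod can be pulled with:

> kubectl logs akrobateo-tomcat-test-42wkq -c sysctl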

@dhoard
Author

dhoard commented Sep 12, 2019

Here is my test Deployment / Service YAML:

---
apiVersion: v1
kind: Service
metadata:
  name: tomcat-test
  labels:
    app: tomcat
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
      name: http
  selector:
    app: tomcat
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: tomcat-test
  labels:
    app: tomcat
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat
      release: test
  template:
    metadata:
      labels:
        app: tomcat
        release: test
    spec:
      volumes:
        - name: app-volume
          emptyDir: {}
      initContainers:
        - name: war
          image: ananwaresystems/webarchive:1.0
          imagePullPolicy: IfNotPresent
          command:
            - "sh"
            - "-c"
            - "cp /*.war /app"
          volumeMounts:
            - name: app-volume
              mountPath: /app
      containers:
        - name: tomcat
          image: tomcat:7.0
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: app-volume
              mountPath: /usr/local/tomcat/webapps
          ports:
            - containerPort: 8080
              hostPort: 8009
          livenessProbe:
            httpGet:
              path: /sample
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
          readinessProbe:
            httpGet:
              path: /sample
              port: 8080
            initialDelaySeconds: 60
            periodSeconds: 30
            failureThreshold: 6
          resources:
            {}
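For reference, this was applied and checked along these lines (the filename is illustrative):

> kubectl apply -f tomcat-test.yaml
> kubectl get pods -l app=tomcat,release=test
> kubectl get svc tomcat-test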

As a data point, this works with MetalLB.

@jnummelin
Contributor

jnummelin commented Sep 18, 2019

Well, MetalLB does not use custom pods with iptables for routing, so it works completely differently. :)

So the LB pods do look as expected, with the init containers trying to set net.ipv4.ip_forward. Would you be able to run a pure test pod on the system and try out a couple of things? To test what ip forwarding looks like, run a simple test pod that mimics the LB pod:

apiVersion: v1
kind: Pod
metadata:
  name: ip-forward-test
  labels:
    name: ip-forward-test
spec:
  containers:
    - name: test
      command:
        - sh
        - -c
        - sleep 60000
      image: registry.pharos.sh/kontenapharos/akrobateo-lb:latest
      imagePullPolicy: Always
      resources: {}
      securityContext:
        privileged: true

Then exec into the pod with kubectl exec -t -i ip-forward-test sh and run some tests:

/ $ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ $ sysctl -w net.ipv4.ip_forward=1
/ $ echo $?
0
/ $ sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
/ $ cat /proc/sys/net/ipv4/ip_forward
1

@dhoard
Author

dhoard commented Nov 5, 2019

Working correctly with a fresh install of Kubernetes v1.15.5.
