Is there any reference yaml file applying multus-cni for pod network? #3

Closed
dougbtv opened this Issue Feb 21, 2017 · 30 comments

dougbtv (Member) commented Feb 21, 2017

I'm able to compile multus-cni, and I'd like to run it for a test as a pod network in Kubernetes, but I'm having trouble figuring out exactly how to implement it. I've noticed the multus-cni readme has an example config, but not an example yaml file for applying the pod network, e.g. so I can run something like:

kubectl apply -f multus.yaml

So I tried to create one, using the Flannel manifest as a starting point; I've posted the one I created as a gist.

However, I'm not having a lot of luck.

Generally, the steps I've taken are to (see the sketch at the end of this comment):

  • Compile multus-cni on master and minion nodes and copy binaries into /opt/cni/bin
  • kubectl apply -f multus.yaml with this yaml
  • Join the minion
  • Create a pod (just an nginx for example)

As a note, I am testing on virtual machines as the kubernetes hosts, so I don't have SRIOV capability. I'm continuing to iterate to try to figure it out, but would appreciate any input here.

Any pointers to help me get started there? Thank you.
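
For reference, here's a rough sketch of those steps as shell commands (the build script name and binary path are assumptions on my end; adjust to however you actually compile it):

# on each node, from a checkout of the multus-cni repo
./build                                        # assumes the repo's build script drops the binary in ./bin
sudo cp bin/multus /opt/cni/bin/               # copy the built multus binary into place

# on the master
kubectl apply -f multus.yaml                   # the manifest this issue is asking about

# on the minion
kubeadm join --token=<token> <master-ip>

# back on the master, e.g. a simple nginx pod
kubectl create -f nginx_pod.yaml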

rkamudhan (Contributor) commented Feb 21, 2017
Thanks for reporting the concern with the example conf file. "masterplugin" is the only argument that belongs to the Multus CNI plugin. "if0", "createmac" and "if0name" are part of the sriov plugin, which supports the data plane; they are not part of the Multus CNI plugin itself. I have rewritten your net conf file here (refer to the ipvlan conf document). Please report back if you come across any issues; it helps us identify bugs.

 {
      "name": "multus-demo-network",
      "type": "multus",
      "delegates": [
          {
            "type": "ipvlan",
            "master": "eth0",
            "ipam": {
              "type": "host-local",
              "subnet": "10.244.10.0/24",
              "rangeStart": "10.244.10.131",
              "rangeEnd": "10.244.10.190",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "10.244.10.1"
            }
          },
          {
            "type": "ipvlan",
            "master": "eth1",
            "ipam": {
              "type": "host-local",
              "subnet": "10.244.10.0/24",
              "rangeStart": "10.244.10.100",
              "rangeEnd": "10.244.10.130",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "10.244.10.1"
            }
          },
          {
            "type": "flannel",
            "masterplugin": true,
            "delegate": {
              "isDefaultGateway": true
            }
          }
      ]
    }
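
For contrast, a hypothetical sriov delegate entry showing where those keys would live on a host that does have SR-IOV hardware; the interface name and subnet below are purely illustrative:

          {
            "type": "sriov",
            "if0": "enp2s0f0",
            "if0name": "north",
            "ipam": {
              "type": "host-local",
              "subnet": "10.56.217.0/24"
            }
          }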

dougbtv (Member) commented Feb 21, 2017
Thanks for the help Kuralamudhan! I appreciate it.

Alright, I feel like I'm getting a little closer.

So here's what happens on a fresh install of k8s 1.5.

If I only use the config as you have provided, when I kubectl describe pod <POD-NAME> I get an error which looks like: open /run/flannel/subnet.env: no such file or directory; Skipping pod.

So I create this multus config, and then I also add Flannel. But when I run a pod, I only see the flannel network when I run ip addr:

[centos@kube-master ~]$ kubectl exec nginx-klxcc -it ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc70:68ff:fef3:632f/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever

Is it possible I'm missing a step here? Thanks again!


Here are the details of the steps I took. Note that I also put the same /etc/cni/net.d/10-multus.conf on the minion node.

Also, for what it's worth, I am using kubeadm init on the master and then kubeadm join --token=s0m3.t0k3n 192.168.122.120 on the minion to join it to the cluster.

Phase 1: Only multus config

[centos@kube-master ~]$ ls /etc/cni/net.d
ls: cannot access /etc/cni/net.d: No such file or directory
[centos@kube-master ~]$ sudo mkdir -p /etc/cni/net.d/
[centos@kube-master ~]$ sudo vi /etc/cni/net.d/10-multus.conf
[centos@kube-master ~]$ cat /etc/cni/net.d/10-multus.conf
{
      "name": "multus-demo-network",
      "type": "multus",
      "delegates": [
          {
            "type": "ipvlan",
            "ipam": {
              "type": "host-local",
              "subnet": "10.244.10.0/24",
              "rangeStart": "10.244.10.131",
              "rangeEnd": "10.244.10.190",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "10.244.10.1"
            }
          },
          {
            "type": "ipvlan",
            "ipam": {
              "type": "host-local",
              "subnet": "10.244.10.0/24",
              "rangeStart": "10.244.10.100",
              "rangeEnd": "10.244.10.130",
              "routes": [
                { "dst": "0.0.0.0/0" }
              ],
              "gateway": "10.244.10.1"
            }
          },
          {
            "type": "flannel",
            "masterplugin": true,
            "delegate": {
              "isDefaultGateway": true
            }
          }
      ]
    }
[centos@kube-master ~]$ vi nginx_pod.yaml
[centos@kube-master ~]$ cat nginx_pod.yaml 
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    app: nginx
  template:
    metadata:
      name: nginx
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

[centos@kube-master ~]$ kubectl get nodes
NAME            STATUS         AGE
kube-master     Ready,master   10m
kube-minion-1   Ready          2s

[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 
replicationcontroller "nginx" created
[centos@kube-master ~]$ watch -n1 kubectl get pods
[centos@kube-master ~]$ watch -n1 kubectl describe pod nginx-0rxpc
[centos@kube-master ~]$ kubectl describe pod nginx-0rxpc
[... snip ...]
  FirstSeen LastSeen  Count From      SubObjectPath Type    Reason    Message
  --------- --------  ----- ----      ------------- --------  ------    -------
  27s   27s   1 {default-scheduler }      Normal    Scheduled Successfully assigned nginx-0rxpc to kube-minion-1
  18s   2s    4 {kubelet kube-minion-1}     Warning   FailedSync  Error syncing pod, skipping: failed to "SetupNetwork" for "nginx-0rxpc_default" with SetupNetworkError: "Failed to setup network for pod \"nginx-0rxpc_default(590d66f9-f866-11e6-b3b4-5254002792ba)\" using network plugins \"cni\": Multus: error in invoke Delegate add - \"flannel\": open /run/flannel/subnet.env: no such file or directory; Skipping pod"

Phase 2: Add in flannel as well

However, at the end you'll see there's only one interface.

[centos@kube-master ~]$ kubectl delete -f nginx_pod.yaml 
replicationcontroller "nginx" deleted
[centos@kube-master ~]$ curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml > kube-flannel.yml
[centos@kube-master ~]$ kubectl apply -f kube-flannel.yml 
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

[centos@kube-master ~]$ kubectl get pods --all-namespaces | grep flannel
default       kube-flannel-ds-2h2rw                 2/2       Running   0          1m
default       kube-flannel-ds-pcrm6                 2/2       Running   0          1m
[centos@kube-master ~]$ 
[centos@kube-master ~]$ 
[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 
replicationcontroller "nginx" created

[centos@kube-master ~]$ kubectl get pods
NAME                    READY     STATUS    RESTARTS   AGE
kube-flannel-ds-2h2rw   2/2       Running   0          2m
kube-flannel-ds-pcrm6   2/2       Running   0          2m
nginx-klxcc             1/1       Running   0          51s
nginx-qgxpq             1/1       Running   0          51s

[centos@kube-master ~]$ kubectl exec nginx-klxcc -it ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:01:03 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::bc70:68ff:fef3:632f/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever

dougbtv (Member) commented Feb 21, 2017
Hold the presses, I think I got it to work. So I used another config, combined it with my based-on-flannel yaml, and applied that one... and I'm seeing multiple interfaces when I run ip addr :)

...Neat project with multus-cni! I stand impressed.

For the record: Here's the yaml I used...

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: multus
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-multus-cfg
  namespace: kube-system
  labels:
    tier: node
    app: multus
data:
  cni-conf.json: |
    {
      "name": "multus-demo",
      "type": "multus",
      "delegates": [
        {
          "type": "macvlan",
          "master": "eth0",
          "mode": "bridge",
          "ipam": {
            "type": "host-local",
            "subnet": "192.168.122.0/24",
            "rangeStart": "192.168.122.200",
            "rangeEnd": "192.168.122.216",
            "routes": [
              { "dst": "0.0.0.0/0" }
            ],
            "gateway": "192.168.122.1"
         }
        },
        {
          "type": "flannel",
          "masterplugin": true,
          "delegate": {
            "isDefaultGateway": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-multus-ds
  namespace: kube-system
  labels:
    tier: node
    app: multus
spec:
  template:
    metadata:
      labels:
        tier: node
        app: multus
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      serviceAccountName: multus
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.7.0-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      - name: install-cni
        image: quay.io/coreos/flannel:v0.7.0-amd64
        command: [ "/bin/sh", "-c", "set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-multus.conf; while true; do sleep 3600; done" ]
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
        - name: run
          hostPath:
            path: /run
        - name: cni
          hostPath:
            path: /etc/cni/net.d
        - name: flannel-cfg
          configMap:
            name: kube-multus-cfg

And then I created from that, spun up a pod, and with ip addr I can see multiple interfaces :)

[centos@kube-master ~]$ kubectl delete -f kube-flannel.yml 
[centos@kube-master ~]$ sudo rm -f /etc/cni/net.d/*
[centos@kube-master ~]$ vi multus.yaml
[centos@kube-master ~]$ kubectl apply -f multus.yaml 
serviceaccount "multus" created
configmap "kube-multus-cfg" created
daemonset "kube-multus-ds" created
[centos@kube-master ~]$ watch -n0 kubectl get pods --all-namespaces
[centos@kube-master ~]$ ls /etc/cni/net.d
10-multus.conf
[centos@kube-master ~]$ cat /etc/cni/net.d/10-multus.conf 
{
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    {
      "type": "macvlan",
      "master": "eth0",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "192.168.122.0/24",
        "rangeStart": "192.168.122.200",
        "rangeEnd": "192.168.122.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "192.168.122.1"
     }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}
[centos@kube-master ~]$ kubectl create -f nginx_pod.yaml 
replicationcontroller "nginx" created
[centos@kube-master ~]$ watch -n0 kubectl get pods --all-namespaces
[centos@kube-master ~]$ kubectl exec nginx-4hv52 -it ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:f4:01:02 brd ff:ff:ff:ff:ff:ff
    inet 10.244.1.2/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::1c98:1aff:fe4b:93ed/64 scope link 
       valid_lft forever preferred_lft forever
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default 
    link/ether 0a:58:c0:a8:7a:c8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.200/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:c0ff:fea8:7ac8/64 scope link 
       valid_lft forever preferred_lft forever

rkamudhan (Contributor) commented Feb 21, 2017
Kudos @dougbtv. Please post here if you face any issues with multus cni.

rkamudhan (Contributor) commented Feb 22, 2017
@dougbtv We created a pull request in the CNI community, containernetworking/cni#379, to add Multus CNI as a 3rd-party plugin. As a Multus CNI user, your feedback there is welcome.

rkamudhan closed this Feb 22, 2017

dougbtv (Member) commented Feb 22, 2017
Thanks again Kuralamudhan,

I went ahead and left a +1 for you on the PR! I'd like to see it listed there too.

Additionally, I thought I'd share with you (and for others, too) this blog article I wrote about getting multus-cni up and running: http://dougbtv.com/nfvpe/2017/02/22/multus-cni/

eugene-chow commented Apr 7, 2017
Hey @dougbtv, thanks for the tutorial. Big thank you to the Intel team for making this possible. I have a related question.

Is there a way to selectively make a pod multi-homed? What I understand from your tutorial is that the config you shared will attach an interface to every pod.

Using the hostNetwork directive, I want a pod to bind to the host's network. On top of that, I want it to bind to flannel. Only this pod should be bound that way, while the rest reside on flannel only. Are you aware if this is possible?

Thanks!

rkamudhan (Contributor) commented Apr 7, 2017
Hi @eugene-chow, is it that you need one pod with "hostNetwork" and the rest of the pods without "hostNetwork"?

rkamudhan reopened this Apr 7, 2017

eugene-chow commented Apr 7, 2017
Hi @rkamudhan, thanks for helping. Yes, you're right. Just 1 pod needs the hostNetwork in addition to the pod network (i.e. flannel).

rkamudhan (Contributor) commented Apr 7, 2017
Hi @eugene-chow, you can achieve this in the pod yaml itself. Just add hostNetwork in the pod yaml file:

spec:
    hostNetwork: true

eugene-chow commented Apr 7, 2017
I have done so, but I also want to attach a flannel interface to the pod as well. That means this particular pod has 2 interfaces: the host network and the pod network. Isn't this what Multus tries to solve?

rkamudhan (Contributor) commented Apr 7, 2017
Create pod yamls with and without hostNetwork: true, and use the following CNI configuration:

spec:
    hostNetwork: true
# tee /etc/cni/net.d/multus-cni.conf <<-'EOF'
{
    "name": "multus-demo-network",
    "type": "multus",
    "delegates": [
        {
                "type": "flannel",
                "masterplugin": true,
                "delegate": {
                        "isDefaultGateway": true
                }
        }
    ]
}
EOF

Please follow this link for more information: http://alesnosek.com/blog/2017/02/14/accessing-kubernetes-pods-from-outside-of-the-cluster/

aaratn commented Apr 7, 2017
I am trying to experiment the same as @eugene-chow, however I am on CoreOS and it is strict about filesystem modification. Is there documentation I can follow to compile and set up multus cni on CoreOS?

eugene-chow commented Apr 10, 2017
@rkamudhan Thanks for the tip! I'll give it a shot.

@aaratn You can install stuff on CoreOS using this method. Here are the official docs. Using CoreOS's toolbox, you can install the binaries needed to perform the compilation and then copy them to /opt.
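
A rough sketch of that approach (the Fedora-based default toolbox image and the /media/root host mount are assumptions about a stock CoreOS setup; verify both before relying on this):

# on the CoreOS node
toolbox                                    # drops you into a container with the host filesystem mounted
dnf install -y golang git                  # inside the toolbox
# ...clone and build multus-cni in here...
cp multus /media/root/opt/cni/bin/         # /media/root is assumed to be where toolbox exposes the host filesystem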

aaratn commented Apr 10, 2017
@eugene-chow I figured that out, however I am using hyperkube. Now I am working on injecting multus into the hyperkube image!

aaratn commented Apr 11, 2017
I managed to get multus up and running!! However, I created two pods: 1. wordpress, 2. nginx.

Goal: to have the wordpress pod with a single ip address and nginx with both ip addresses.

I tried to use hostNetwork: true on the wordpress pod, however it bound the pod to the host's network instead of assigning the pod its own ip address.

E.g. below:

POD wordpress with hostNetwork: true

POD Spec:

apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    app: wordpress
  template:
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      hostNetwork: true
      containers:
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80

Output of command: kubectl exec wordpress-jrcwz -it ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:30:0f:8e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.36/24 brd 192.168.1.255 scope global ens192
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe30:f8e/64 scope link 
       valid_lft forever preferred_lft forever
3: flannel0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1472 qdisc fq_codel state UNKNOWN group default qlen 500
    link/none 
    inet 10.1.35.0/16 scope global flannel0
       valid_lft forever preferred_lft forever
    inet6 fe80::fb21:70f2:ef2c:2728/64 scope link flags 800 
       valid_lft forever preferred_lft forever
4: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 
    link/ether 02:42:6c:7d:da:5f brd ff:ff:ff:ff:ff:ff
    inet 10.1.35.1/24 scope global docker0
       valid_lft forever preferred_lft forever
5: cni0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default qlen 1000
    link/ether 0a:58:0a:01:23:01 brd ff:ff:ff:ff:ff:ff
    inet 10.1.35.1/24 scope global cni0
       valid_lft forever preferred_lft forever
    inet6 fe80::447b:80ff:fe2d:fbad/64 scope link 
       valid_lft forever preferred_lft forever
6: veth5a1d43b4@flannel0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue master cni0 state UP group default 
    link/ether 0a:fa:52:cd:c3:ef brd ff:ff:ff:ff:ff:ff
    inet6 fe80::8fa:52ff:fecd:c3ef/64 scope link 
       valid_lft forever preferred_lft forever

The pod doesn't have its own ip address when we use hostNetwork: true

POD wordpress with hostNetwork: false

POD Spec:

apiVersion: v1
kind: ReplicationController
metadata:
  name: wordpress
spec:
  replicas: 1
  selector:
    app: wordpress
  template:
    metadata:
      name: wordpress
      labels:
        app: wordpress
    spec:
      hostNetwork: false
      containers:
      - name: wordpress
        image: wordpress
        ports:
        - containerPort: 80

Output of command: kubectl exec -it wordpress-p6d07 ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
3: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1472 qdisc noqueue state UP group default 
    link/ether 0a:58:0a:01:23:03 brd ff:ff:ff:ff:ff:ff
    inet 10.1.35.3/24 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::5c9a:88ff:fe0e:3d0b/64 scope link tentative dadfailed 
       valid_lft forever preferred_lft forever
4: net0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 0a:58:ac:10:01:c9 brd ff:ff:ff:ff:ff:ff
    inet 172.16.1.201/24 scope global net0
       valid_lft forever preferred_lft forever
    inet6 fe80::858:acff:fe10:1c9/64 scope link 
       valid_lft forever preferred_lft forever

How can we have a pod with just one of the ips from the output above?

eugene-chow commented Apr 13, 2017
ip addr from hostNetwork: true has ens192 that has a host IP and cni0 which has a pod IP. Isn't that what you want?

ip addr from hostNetwork: false looks odd. It has net0 which seems to have a node IP and eth0 which has a pod IP.

Can you share what's 172.16.1.201/24 and 192.168.1.36/24 so that we can better understand the problem?

Fyi, I haven't tried this yet so I can't provide a solution.

aaratn commented Apr 13, 2017
Hi @eugene-chow !

Thanks for getting back on this.

  • cni0 is the interface on the host machine and it carries the host machine's ip address. I need a virtual ip address in the cni0 range as the pod's own ip address.

  • 172.16.1.201/24 is a container-only network. 192.168.1.36/24 is the host's network.

rkamudhan (Contributor) commented Apr 16, 2017
@aaratn can you share the cni.conf file?

aaratn commented Apr 16, 2017
@rkamudhan

Here's my cni.conf file

{
  "name": "multus-demo",
  "type": "multus",
  "delegates": [
    {
      "type": "macvlan",
      "master": "ens192",
      "mode": "bridge",
      "ipam": {
        "type": "host-local",
        "subnet": "172.16.1.0/24",
        "rangeStart": "172.16.1.200",
        "rangeEnd": "172.16.1.216",
        "routes": [
          { "dst": "0.0.0.0/0" }
        ],
        "gateway": "172.16.1.1"
     }
    },
    {
      "type": "flannel",
      "masterplugin": true,
      "delegate": {
        "isDefaultGateway": true
      }
    }
  ]
}

pmichali commented Jun 27, 2017
Using K8s 1.6.x and not having success. When I try to create a cluster using flannel, with the config @dougbtv showed above, I get these messages in the log:

Jun 23 15:27:06 kubeadm-1 journal: E0623 15:27:06.407366 1 daemoncontroller.go:233] kube-system/kube-multus-ds failed with : error storing status for daemon set &v1beta1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-multus-ds", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/kube-multus-ds", …

Operation cannot be fulfilled on daemonsets.extensions "kube-multus-ds": the object has been modified; please apply your changes to the latest version and try again

Jun 23 15:27:07 kubeadm-1 journal: E0623 15:27:07.521385 1 main.go:127] Failed to create SubnetManager: error retrieving pod spec for 'kube-system/kube-multus-ds-966f9': the server does not allow access to the requested resource (get pods kube-multus-ds-966f9)

Jun 23 15:27:16 kubeadm-1 kubelet: E0623 15:27:16.799075 12573 cni.go:260] Error adding network: Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env: no such file or directory
Jun 23 15:27:16 kubeadm-1 kubelet: E0623 15:27:16.799405 12573 cni.go:211] Error while adding to cni network: Multus: error in invoke Delegate add - "flannel": open /run/flannel/subnet.env: no such file or directory

I was wondering if the issue is with RBAC being on by default in 1.6. For just the flannel plugin, I have to apply kube-flannel-rbac.yaml and then kube-flannel.yaml to get the plugin operational.

I can't seem to figure out how to do that with this setup and I couldn't figure out how to disable RBAC. Does anyone have it working with K8s 1.6?

dougbtv (Member) commented Jun 27, 2017
@pmichali -- I didn't read your comment in depth, but did you see my reference configs on #9 (deals with kube 1.6 goodies)? It might help out.
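
If it is the RBAC ordering, here's a rough sketch of the apply order, paralleling the plain-flannel case (file names are the ones already mentioned in this thread; untested):

kubectl apply -f kube-flannel-rbac.yaml                          # RBAC bits for flannel on 1.6
kubectl apply -f multus.yaml                                     # the multus + flannel DaemonSet from earlier in this thread
kubectl get pods --all-namespaces | grep -E 'multus|flannel'     # confirm the DaemonSet pods come up 2/2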

pmichali commented Jun 28, 2017
@dougbtv Thanks for the link. I took your configs, changed the interface from eth0 to eth1 (as I'm in a VM and want to use eth1 for the network), changed the IPs from 192.168.122.x to 192.168.2.x to match the network used on eth1, and added "--iface=eth1" to the flanneld line.

After initializing the cluster, I applied the flannel-rbac.yaml and then multus.yaml. Is that right?

This time, the Multus pods came up OK (2/2). However, the DNS pod is stuck in the ContainerCreating state. The describe output shows (even after deleting it and letting it restart):

  1m		1m		1	default-scheduler			Normal		Scheduled	Successfully assigned kube-dns-692378583-bj9cf to kubeadm-1
  1m		58s		2	kubelet, kubeadm-1			Warning		FailedSync	Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-692378583-bj9cf_kube-system(b71ca371-5bfd-11e7-8c73-525400ba512b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-692378583-bj9cf_kube-system(b71ca371-5bfd-11e7-8c73-525400ba512b)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-692378583-bj9cf_kube-system\" network: Multus: error in invoke Delegate add - \"flannel\": open /run/flannel/subnet.env: no such file or directory"

  59s	3s	7	kubelet, kubeadm-1		Normal	SandboxChanged	Pod sandbox changed, it will be killed and re-created.
  58s	3s	6	kubelet, kubeadm-1		Warning	FailedSync	Error syncing pod, skipping: failed to "KillPodSandbox" for "b71ca371-5bfd-11e7-8c73-525400ba512b" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-692378583-bj9cf_kube-system\" network: Multus: Err in  reading the delegates: failed to read container data in the path(\"/var/lib/cni/multus/4698a030f133a157372639d57e2ec42bb64bbc9aa6071340b544ea9dcfc6534e\"): open /var/lib/cni/multus/4698a030f133a157372639d57e2ec42bb64bbc9aa6071340b544ea9dcfc6534e: no such file or directory"

I see the flanneld process running, but it is logging:

E0628 12:59:35.847345 1 network.go:102] failed to register network: failed to acquire lease: the server does not allow access to the requested resource (patch nodes kubeadm-1)

Any ideas?

dougbtv (Member) commented Jun 28, 2017
Those multus pods are, I think, badly labelled flannel pods from a find-and-replace that was too liberal when I created the reference config, just an FYI.

Sort of seems like maybe the DNS pod couldn't be removed, that is... with the line:

  58s	3s	6	kubelet, kubeadm-1		Warning	FailedSync	Error syncing pod, skipping: failed to "KillPodSandbox" for "b71ca371-5bfd-11e7-8c73-525400ba512b" with KillPodSandboxError: "rpc error: code = 2 desc = NetworkPlugin cni failed to teardown pod \"kube-dns-692378583-bj9cf_kube-system\" network: Multus: Err in  reading the delegates: failed to read container data in the path(\"/var/lib/cni/multus/4698a030f133a157372639d57e2ec42bb64bbc9aa6071340b544ea9dcfc6534e\"): open /var/lib/cni/multus/4698a030f133a157372639d57e2ec42bb64bbc9aa6071340b544ea9dcfc6534e: no such file or directory"

I'm unsure what to do as a next step. Maybe try backing out all of the CNI config, removing the pods, and starting from scratch again.
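
For example, a rough sketch of backing things out on each node, using the paths that show up in the transcripts and errors above (restarting the kubelet assumes a systemd-managed kubelet, as on these kubeadm hosts):

kubectl delete -f multus.yaml
sudo rm -f /etc/cni/net.d/*
sudo rm -rf /var/lib/cni/multus/           # stale delegate data referenced in the teardown error
sudo systemctl restart kubelet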

rkamudhan (Contributor) commented Jun 28, 2017
Hi @dougbtv @pmichali. We have to fix this issue thrown by Multus. Multus stores the delegate data using the infra ID and later uses it for deleting. Please clean up all the pods and fix the flannel plugin issue; Multus will work fine then.

  1m		58s		2	kubelet, kubeadm-1			Warning		FailedSync	Error syncing pod, skipping: failed to "CreatePodSandbox" for "kube-dns-692378583-bj9cf_kube-system(b71ca371-5bfd-11e7-8c73-525400ba512b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"kube-dns-692378583-bj9cf_kube-system(b71ca371-5bfd-11e7-8c73-525400ba512b)\" failed: rpc error: code = 2 desc = NetworkPlugin cni failed to set up pod \"kube-dns-692378583-bj9cf_kube-system\" network: Multus: error in invoke Delegate add - \"flannel\": open /run/flannel/subnet.env: no such file or directory"

pmichali commented Jun 30, 2017
@rkamudhan Can you elaborate on the "flannel plugin issue" and how that can be fixed? Is it a configuration issue, or some problem with the plugin? Afraid I don't have enough of an understanding to know how to proceed here.

rkamudhan (Contributor) commented Jul 5, 2017
Hi @pmichali, flanneld is not running in your setup. Please install flanneld on your node. It will create /run/flannel/subnet.env; flannel uses this file to get the IPAM information.
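
A few quick checks on the node (a sketch; the exact contents of subnet.env vary by cluster):

ls -l /run/flannel/subnet.env                         # missing file => flanneld hasn't run/registered yet
cat /run/flannel/subnet.env                           # normally contains FLANNEL_NETWORK / FLANNEL_SUBNET entries
kubectl get pods --all-namespaces | grep flannel      # is the flannel pod actually running on this node?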

yuyangbj commented Jan 12, 2018
@dougbtv, my question is: does multus-cni only support flannel, or can every CNI plugin be configured? Another question: in the multus cni conf you use macvlan/ipvlan; can other cni plugins be configured there as well?

dougbtv (Member) commented Jan 12, 2018

RahulG115 commented Apr 4, 2018

Can we use PCI passthrough with multus in pods? If yes, what extra config do we need to do?

dougbtv closed this Apr 19, 2018
