sriov cni network status is not correctly updated in pod annotation #87

Closed

zshi-redhat opened this issue Jun 17, 2019 · 8 comments

zshi-redhat (Collaborator) commented Jun 17, 2019

Multus supports updating the network status from each delegated CNI plugin to the pod annotation, for example:

# kubectl describe pod testpod1
Name:               testpod1
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               worker-0/10.19.111.16
Start Time:         Sun, 16 Jun 2019 22:35:14 -0400
Labels:             <none>
Annotations:        k8s.v1.cni.cncf.io/networks: sriov-net1
                    k8s.v1.cni.cncf.io/networks-status:
                      [{
                          "name": "",
                          "ips": [
                              "10.96.1.110"
                          ],
                          "default": true,
                          "dns": {}
                      },{
                          "name": "sriov-network",
                          "dns": {}
                      }]
Status:             Running
IP:                 10.96.1.110

But the sriov-cni network status is not correctly updated: as shown above, only the name and dns fields appear in the pod annotation for sriov-network; other fields such as ips and mac are missing (a sketch of a complete entry follows the version list below).

Tested with:

  1. Multus latest master: 1f8b44c575ee60f86ec99decd008a2328586952d
  2. SR-IOV CNI latest master: 968b85e
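
For reference, a complete network-status entry for the sriov-network attachment should look roughly like the following (a sketch with illustrative address values; the actual fields come from the CNI result that sriov-cni returns to Multus):

                      {
                          "name": "sriov-network",
                          "ips": [
                              "10.56.1.10"
                          ],
                          "mac": "aa:bb:cc:dd:ee:ff",
                          "dns": {}
                      }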

ahalimx86 (Collaborator) commented

@zshi-redhat I suspect the VFs are in DPDK mode or there's no IPAM config in the net CRD; can you please confirm?

vpickard (Contributor) commented Jul 1, 2019

I see the same for network status, and this is a pod where the VFs are in DPDK mode.

But we still need this info (especially the MAC) for DPDK mode, so that the app can use the MAC assigned to the VF and not have to disable spoof checking on the VF.

[root@vpickard-k8s deployments]# kubectl describe pod pod-dpdk
Name:               pod-dpdk
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               nfvsdn-20.oot.lab.eng.rdu2.redhat.com/10.8.125.30
Start Time:         Mon, 01 Jul 2019 23:44:13 +0000
Labels:             <none>
Annotations:        k8s.v1.cni.cncf.io/networks: sriov-net1, sriov-net1
                    k8s.v1.cni.cncf.io/networks-status:
                      [{
                          "name": "cbr0",
                          "ips": [
                              "10.244.1.120"
                          ],
                          "default": true,
                          "dns": {}
                      },{
                          "name": "sriov-network",
                          "dns": {}
                      },{
                          "name": "sriov-network",
                          "dns": {}
                      }]
Status:             Running
IP:                 10.244.1.120
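
Once the status is populated, the MAC could be pulled straight from the annotation instead of sysfs; a sketch using jq (pod name and network name as in the output above):

# kubectl get pod pod-dpdk -o json | jq -r \
    '.metadata.annotations["k8s.v1.cni.cncf.io/networks-status"]
     | fromjson | .[] | select(.name == "sriov-network") | .mac'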

mJace commented Sep 17, 2019

Same here. This happened when I upgraded my sriov-cni from v1.0 to v2.1; all the other plugin versions stayed the same, the only difference is the sriov-cni version.
ifconfig in my pod shows the VF IP, but the annotation does not with sriov-cni v2.1.

root@testpod2:/# ifconfig 
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 10.233.65.6  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 0a:58:0a:e9:41:06  txqueuelen 0  (Ethernet)
        RX packets 12487  bytes 18099068 (18.0 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 8424  bytes 561254 (561.2 KB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

sriov-a: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 10.56.217.171  netmask 255.255.255.0  broadcast 0.0.0.0
        ether 7e:f0:99:7e:f6:b2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

@ahalim-intel My VF is not in DPDK mode. Here is my net-attach-def:

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-a
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_pool
spec:
  config: '{
  "type": "sriov",
  "vlan": 1000,
  "if0name": "sriov-a",
  "ipam": {
    "type": "host-local",
    "subnet": "10.56.217.0/24",
    "rangeStart": "10.56.217.171",
    "rangeEnd": "10.56.217.181",
    "routes": [{
      "dst": "0.0.0.0/0"
    }],
    "gateway": "10.56.217.1"
    }
  }'

ahalimx86 (Collaborator) commented

@mJace, @vpickard For network status to work properly, you need to add "cniVersion": "0.3.1" to your network CR.
Please see the sample CR here.

As a side note, the "if0name" field is deprecated in the latest sriov-cni.
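
Applied to the CR above, the fix would look roughly like this (mJace's config with "cniVersion" added and the deprecated "if0name" dropped):

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-net-a
  annotations:
    k8s.v1.cni.cncf.io/resourceName: intel.com/sriov_pool
spec:
  config: '{
  "cniVersion": "0.3.1",
  "type": "sriov",
  "vlan": 1000,
  "ipam": {
    "type": "host-local",
    "subnet": "10.56.217.0/24",
    "rangeStart": "10.56.217.171",
    "rangeEnd": "10.56.217.181",
    "routes": [{
      "dst": "0.0.0.0/0"
    }],
    "gateway": "10.56.217.1"
    }
  }'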

mJace commented Sep 17, 2019

@ahalim-intel Thank you.
Adding cniVersion works!

vpickard (Contributor) commented

@ahalim-intel Thanks!

killianmuldoon (Collaborator) commented

@zshi-redhat we can close this, right?

zshi-redhat (Collaborator, Author) commented

> @zshi-redhat we can close this, right?

Yes, I don't see this issue anymore with the latest sriov-cni when cniVersion is set correctly.
