
[Feature Request] Support disable_new_netns with Kube-OVN CNI. #4914

Closed · xujunjie-cover opened this issue Aug 15, 2022 · 2 comments
Labels: feature (New functionality), needs-review (Needs to be assessed by the team)

Comments

@xujunjie-cover (Contributor)

Background

Kube-OVN can provide tap or vhost-user-client devices for Kata Containers, and Kata Containers can use these devices directly.
Kube-OVN supports OVS-DPDK (datapath_type=netdev).
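For reference, the OVS-DPDK side of such a setup is normally wired up with ovs-vsctl; a minimal sketch, assuming a netdev-datapath bridge and a vhost-user-client port (the bridge name, interface name, and socket path below are illustrative, not taken from this issue):

ovs-vsctl set bridge br-int datapath_type=netdev
ovs-vsctl add-port br-int vhu0 -- set Interface vhu0 type=dpdkvhostuserclient options:vhost-server-path=/var/run/openvswitch/vhost_sockets/vhu0.sock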

Status

A detailed introduction is in #1922.

The current data flow looks like this:
(diagram: original architecture)

Proposal

Kata Containers would use tap devices in the host netns; the data flow then looks like this:
(diagram: new architecture)
This reduces the number of network stack hops.
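On the Kata side, consuming a device that already lives in the host netns corresponds to not creating a per-sandbox network namespace; a minimal configuration.toml sketch, assuming the standard Kata runtime options (values here are illustrative, not part of this proposal):

[runtime]
# Do not create a new network namespace for the sandbox; devices stay in the host netns.
disable_new_netns = true
# Skip the usual tc/bridge interworking, since the CNI-provided device is consumed directly.
internetworking_model = "none"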

The control flow for using a tap device directly:
(diagram: control flow)

The Kube-OVN part is the same as when providing a vhost-user device.

What Kube-OVN needs to do

  • Create a dummy device to transmit the network info (a sketch follows this list).
  • Create multiple vhost-user or tap devices for one Kata container.
  • Support using a tap or vhost-user device as the first NIC.
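How the dummy device would carry the network info is still open; one plausible sketch with iproute2, where the dummy link simply holds the MAC and IP for the runtime to read (names and addresses are illustrative):

ip link add kata-dummy0 type dummy
ip link set kata-dummy0 address 00:00:00:11:22:33
ip addr add 10.16.0.10/16 dev kata-dummy0
ip link set kata-dummy0 up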

What Kata needs to change

  • Add support for vhost-user devices in server mode ("server=on"), e.g. (see the combined sketch after this list):
-chardev socket,id=char-6ca95a82dad2acd6,path=/var/run/openvswitch/vhost_sockets/c3851524-a4bb-48bb-8878-98449e988b7e/hugepages/sock,server=on
  • Add support for multiple NIC queues when a Kata container has multiple CPUs, for better performance.
  • Add support for tap devices in the host netns, e.g.:
-netdev tap,id=network-0,vhost=on,ifname=tap_69adfedcf5c,downscript=no,script=no
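For illustration, a vhost-user netdev in server mode needs hugepage-backed, shared guest memory, and multi-queue NICs need matching virtio vectors (2 × queues + 2). A minimal QEMU command-line sketch under those assumptions; the socket path, tap name, memory size, and queue counts are illustrative:

-object memory-backend-file,id=mem0,size=2048M,mem-path=/dev/hugepages,share=on \
-numa node,memdev=mem0 \
-chardev socket,id=char0,path=/var/run/openvswitch/vhost_sockets/sock,server=on \
-netdev vhost-user,id=net0,chardev=char0,queues=2 \
-device virtio-net-pci,netdev=net0,mq=on,vectors=6 \
-netdev tap,id=net1,ifname=tap_example,script=no,downscript=no,vhost=on,queues=2 \
-device virtio-net-pci,netdev=net1,mq=on,vectors=6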

Performance

TODO: testing is in progress; more details will be added here later.
Some tests using the vhost-user-client device provided by OVS-DPDK:
OVS: 4 cores / 8 GB

  1. OVS-DPDK + DPDK app running in Kata
Tools: dpdk-testpmd
Version: v21.02
Driver: uio_pci_generic
Hugepages: 512 × 2 MB
CPU: 1 core, already pinned by the CPU manager (Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz)
Env: two Kata containers on the same node.

Packet Len (bytes)   pps (M)   Throughput (Gbit/s)
64                   6.5       3.1
1400                 3.93      41
  2. OVS-DPDK + general app
Tools: qperf
Hugepages: 512 × 2 MB
CPU: 2 cores, already pinned by the CPU manager (Intel(R) Xeon(R) Gold 6140 CPU @ 2.30GHz)

We think the performance limitation comes from qperf's own CPU usage.
Two Kata containers on the same node:

Packet Len (bytes)   tcp_lat (us)   Throughput
64                   10.7           518 Mb/sec
256                  9.7            1.36 Gb/sec
512                  11             1.83 Gb/sec
1024                 10.1           2.25 Gb/sec
2048                 14.9           3.19 Gb/sec
4096                 16.2           6.35 Gb/sec
8192                 17.5           9.8 Gb/sec

Two Kata containers on different nodes, physical NIC X710 10GbE SFP+:

Packet Len (bytes)   tcp_lat (us)   Throughput
64                   17             556 Mb/sec
256                  17.1           1.67 Gb/sec
512                  18.1           2.41 Gb/sec
1024                 19.4           3.09 Gb/sec
2048                 39.9           7.55 Gb/sec
4096                 40.8           8.49 Gb/sec
8192                 43.9           8.99 Gb/sec
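For reproducibility, the qperf runs above follow the usual client/server pattern; a sketch (the server address and message size are illustrative):

# inside the server-side Kata container
qperf
# inside the client-side Kata container; -m sets the message size in bytes
qperf 10.16.0.10 -m 64 tcp_lat tcp_bw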

Related Link:

kubeovn/kube-ovn#1811

@bergwolf (Member)

cc @kata-containers/architecture-committee

@oilbeater

@xujunjie-cover do you have the comparison data before and after the optimization?
