
Use of dpdk0 interface #32

Closed
arunkumarsit opened this issue Jan 27, 2016 · 7 comments

Comments

@arunkumarsit

Hi Team,

During installation of the mtcp-modified DPDK, after binding the device to the igb_uio driver, a new kernel logical interface dpdk0 is created.
Why is this dpdk0 interface created, and where is it created?
Can't we receive all incoming packets from the NIC port via the dpdk_recv_pkts() call in mtcp?

I am trying to add KNI support to an mtcp-based DPDK application, but mtcp creates the dpdk0 interface and one more KNI interface, vEth0_0, is created.
I want to avoid creating two interfaces. After receiving packets in the rte_eth_rx_burst() call, I want to check the packet type (say TCP or UDP) and then either pass it to the user process (the mtcp library) or send it to the KNI interface (say UDP packets, which are not supported by mtcp).
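The TCP-vs-UDP check described above can be sketched without any DPDK dependency by parsing the Ethernet and IPv4 headers of a raw frame. This is a minimal sketch, assuming untagged IPv4 frames; in the real application the bytes would come from the mbufs returned by rte_eth_rx_burst(), and the helper name classify_l4 is made up for illustration.

```c
#include <stdint.h>
#include <stddef.h>

/* Minimal sketch of the TCP-vs-UDP check, without DPDK dependencies.
 * In the real receive path the frame bytes would come from the mbufs
 * returned by rte_eth_rx_burst(); here we parse a raw Ethernet frame. */

#define ETH_HDR_LEN  14
#define ETHERTYPE_IP 0x0800

/* Returns the IPv4 L4 protocol number (6 = TCP, 17 = UDP, ...),
 * or -1 if the frame is not plain IPv4 or is too short. */
static int classify_l4(const uint8_t *frame, size_t len)
{
    if (len < ETH_HDR_LEN + 20)
        return -1;                        /* too short for eth + min IPv4 */

    uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);
    if (ethertype != ETHERTYPE_IP)
        return -1;                        /* not IPv4 (VLAN tags ignored) */

    const uint8_t *ip = frame + ETH_HDR_LEN;
    if ((ip[0] >> 4) != 4)
        return -1;                        /* not IP version 4 */

    return ip[9];                         /* protocol field of IPv4 header */
}
```

Frames classified as TCP (protocol 6) would then stay on the mtcp path, while UDP (protocol 17) and anything unrecognized would be punted to the KNI interface.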

Any inputs/suggestions are welcome...

Thanks,
Arun

@jagsnn

jagsnn commented Jan 27, 2016

So why this dpdk0 interface is created? And where it is created?

The intention seems to be to let you view statistics of the DPDK-attached physical interface using tools such as ifconfig and ethtool. In unmodified DPDK, the interface becomes invisible once it is detached from the ixgbe driver and attached to the igb_uio driver. It looks to be a "netdev" interface created in the kernel. Maybe run through the code in the modified dpdk-2.1.0/lib/librte_eal/linuxapp/igb_uio/igb_uio.c to play around and get a better understanding.

Not sure about: "Can't we receive all the incoming packets from NIC port via dpdk_recv_pkts() call in mtcp?"

"But mtcp creates dpdk0 interface and one more KNI interface vEth0_0 created. "

I don't think it creates both the dpdk0 and the vEth interface for KNI. As far as I know, only the interfaces bound to the igb_uio module create a dpdk# netdev interface in the kernel.

The KNI app only creates the vEth netdev interface in the kernel. Whatever you are seeing as dpdk0 may be an interface you bound using pci bind or the setup.sh tool.

In any case, for your app to receive packets in userspace you will need a dpdk0, associated with a physical / virtual / emulated interface that brings packets in from the network; once you decide a packet is UDP, punt it to the kernel stack via vEth, which is the KNI interface. So there should be no worry about dpdk0 and vEth both showing up, since they are associated with different entities.
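The split described above can be sketched as a dispatch loop. Here deliver_to_mtcp() and punt_to_kni() are hypothetical stand-ins invented for this sketch: in a real application the KNI punt would be something like rte_kni_tx_burst() on the mbuf, and the mtcp path would hand the packet to mtcp's receive code.

```c
#include <stdint.h>

/* Sketch of the userspace dispatch: TCP stays with mtcp, everything
 * else (e.g. UDP) is punted to the kernel stack via the KNI interface.
 * deliver_to_mtcp() and punt_to_kni() are stand-ins for illustration;
 * struct pkt abstracts away the real rte_mbuf. */

struct pkt { int l4proto; };          /* 6 = TCP, 17 = UDP */

static int mtcp_count, kni_count;

static void deliver_to_mtcp(struct pkt *p) { (void)p; mtcp_count++; }
static void punt_to_kni(struct pkt *p)     { (void)p; kni_count++; }

/* Dispatch one received burst of n packets. */
static void dispatch_burst(struct pkt *pkts, int n)
{
    for (int i = 0; i < n; i++) {
        if (pkts[i].l4proto == 6)
            deliver_to_mtcp(&pkts[i]);  /* TCP: userspace stack */
        else
            punt_to_kni(&pkts[i]);      /* UDP etc.: kernel via KNI */
    }
}
```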

@arunkumarsit
Author

Thanks for the reply. I want to clarify some points.

Clarification 1:
" Not sure of .. Can't we receive all the incoming packets from NIC port via dpdk_recv_pkts() call in mtcp ? "
My question was whether we need the dpdk0 interface to send/receive packets at all. I now understand that dpdk0 is just needed for statistical information, which can be fetched via ethtool etc.

Clarification 2:
** "But mtcp creates dpdk0 interface and one more KNI interface vEth0_0 created. " **
I integrated the KNI application with the mtcp library. So I meant that dpdk0 is created by mtcp and vEth0 is created by the KNI module, both from the same DPDK process.

When I integrated KNI with the mtcp library, the application had two interfaces, say dpdk0 and vEth0. I assigned IPs from two different subnets to the two interfaces.
When I send packets from a client, I have to check them in dpdk_module (after receiving them in DPDK space) and then send them either to the application or to the KNI interface.
But somehow KNI is not accepting the packets, as they are not destined to its subnet.
When packets are sent to the KNI interface's IP but are not dispatched to KNI, they are handled by the application instead. After receiving such a packet, the mtcp library detects that there is no route to that subnet (the KNI interface's) and drops the response.

Hope I am clear now at least.

@arunkumarsit
Author

What would be the impact if I removed the creation of the dpdk0 netdev from the igb_uio.c file? Would the mtcp library still work with my DPDK application, or with the example epserver app?

@bhpike65

I think DPDK KNI does not support multiple threads right now; you would have to do some tricks in the KNI kernel module.

@arunkumarsit
Author

Yes, I will check on KNI for multi-threaded support.

But what will be the impact on mtcp applications if I remove the creation of the dpdk0 netdev?
Can someone answer this?

@jagsnn

jagsnn commented Jan 28, 2016

"My question is do we need dpdk0 interface to send/receive packets. "

If I guess right, besides pushing stats into the kernel, mtcp also uses the IP address of the dpdk# interface created in the kernel as the source IP address for egress traffic, to filter the ingress destination IP address, and for mtcp's ARP handling. mtcp may not restrict itself to the IP address; it may be using other interface parameters from the dpdk# interface as well.

Your problem statement is much clearer now: you have two interfaces, vEth and dpdk0, with IP addresses in different subnets. Since you receive packets via a single physical interface in DPDK space, you are trying to figure out, first of all, which IP address a client's packet should be destined to, and how to satisfy both vEth and dpdk0, given that each interface drops packets whose IP address belongs to the other.

mtcp appears to pick up interface information from the dpdk netdev interfaces created in the kernel, building its interface table with IP address, hardware address and so on, while DPDK's rte layer populates its interface table by scanning PCI information from the interfaces bound to it. The mtcp table and the DPDK table look to be mapped one-to-one by hardware address. Given that, I don't think removing the creation of the dpdk interface in the kernel will help, since mtcp would then have no information about the interfaces it is supposed to use. This also ties in with the config file passed to mtcp_init(), which lists the dpdk# ports mtcp should use.

I would suggest setting an IP address only on the vEth interface known to the kernel stack, and leaving dpdk# with no IP address. Tweak SetInterfaceInfo() in mtcp/src/io_module.c to retrieve the IP address of the vEth# interface instead of the dpdk# interface and set it in the dpdk interface table, so that both mtcp's filters and the kernel stack are happy. This way you use one IP address for both vEth and dpdk# (assuming a one-to-one mapping between dpdk and vEth interfaces). This is just my suggestion; you know your KNI/mtcp integrated code better, so only you can say whether it will work, and perhaps give it a try. Hopefully you are doing the KNI init calls before mtcp_init() for this to work.

@arunkumarsit
Author

I tweaked the mtcp code to use the vEth0 interface details instead of the dpdk0 interface, and now I am able to achieve what I need. The traffic reaches either the KNI interface or the mtcp application, based on my condition check in the recv_pkts() function.
Thanks for your help !
