
Can't write frame size larger than 2000 via nff-go with igb_uio in AWS EC2 #575

Closed

guesslin opened this issue Mar 21, 2019 · 7 comments

guesslin (Contributor) commented Mar 21, 2019

Hi, I've hit a problem: I can't write a jumbo-sized frame to the network interface on AWS EC2, even though I can read such frames from it. We use the default network-interface configuration, which sets the MTU to 9001 (see Jumbo frame instances).

  • linux kernel version: 4.4.0-142-generic

  • nff-go version: 0.7.0

  • ethtool -i ens6

driver: ena
version: 2.0.2K
firmware-version:
expansion-rom-version:
bus-info: 0000:00:06.0
supports-statistics: yes
supports-test: no
supports-eeprom-access: no
supports-register-dump: no
supports-priv-flags: no
  • ifconfig
ens5      Link encap:Ethernet  HWaddr 06:01:77:20:8f:c2
          inet addr:10.1.218.18  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::401:77ff:fe20:8fc2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:213 errors:0 dropped:0 overruns:0 frame:0
          TX packets:142 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:23528 (23.5 KB)  TX bytes:20784 (20.7 KB)

ens6      Link encap:Ethernet  HWaddr 06:e4:c5:b9:5e:fc
          inet addr:10.1.189.85  Bcast:10.1.255.255  Mask:255.255.0.0
          inet6 addr: fe80::4e4:c5ff:feb9:5efc/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:9001  Metric:1
          RX packets:6 errors:0 dropped:0 overruns:0 frame:0
          TX packets:17 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:800 (800.0 B)  TX bytes:2126 (2.1 KB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:211 errors:0 dropped:0 overruns:0 frame:0
          TX packets:211 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:15216 (15.2 KB)  TX bytes:15216 (15.2 KB)
  • lspci
00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma]
00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
00:01.3 Non-VGA unclassified device: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:03.0 VGA compatible controller: Device 1d0f:1111
00:04.0 Non-Volatile memory controller: Device 1d0f:8061
00:05.0 Ethernet controller: Device 1d0f:ec20
00:06.0 Ethernet controller: Device 1d0f:ec20
guesslin (Contributor, Author) commented:

More results from an iperf3 test:

  • With the MSS set to 1990, the iperf3 test sends traffic through our VM without problems:
ubuntu@ip-10-1-215-30:~$ iperf3 -p 3091 -c 10.2.15.142 -M 1990
Connecting to host 10.2.15.142, port 3091
[  4] local 10.1.215.30 port 50630 connected to 10.2.15.142 port 3091
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  43.6 MBytes   365 Mbits/sec   23    240 KBytes
[  4]   1.00-2.00   sec  42.6 MBytes   358 Mbits/sec   18    315 KBytes
[  4]   2.00-3.00   sec  41.4 MBytes   347 Mbits/sec   33    348 KBytes
[  4]   3.00-4.00   sec  44.4 MBytes   372 Mbits/sec   61    238 KBytes
[  4]   4.00-5.00   sec  41.8 MBytes   351 Mbits/sec   11    321 KBytes
[  4]   5.00-6.00   sec  39.8 MBytes   334 Mbits/sec    0    431 KBytes
[  4]   6.00-7.00   sec  43.6 MBytes   366 Mbits/sec  160    267 KBytes
[  4]   7.00-8.00   sec  40.4 MBytes   339 Mbits/sec   29    243 KBytes
[  4]   8.00-9.00   sec  42.1 MBytes   353 Mbits/sec   25    232 KBytes
[  4]   9.00-10.00  sec  46.4 MBytes   389 Mbits/sec    0    390 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec   426 MBytes   357 Mbits/sec  360             sender
[  4]   0.00-10.00  sec   424 MBytes   356 Mbits/sec                  receiver

iperf Done.
  • With the MSS set to 2000, iperf3 can no longer get the same result. We do see the packets in userspace, and try to write them back:
ubuntu@ip-10-1-215-30:~$ iperf3 -p 3091 -c 10.2.15.142 -M 2000
Connecting to host 10.2.15.142, port 3091
[  4] local 10.1.215.30 port 50634 connected to 10.2.15.142 port 3091
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  87.4 KBytes   715 Kbits/sec    2   1.94 KBytes
[  4]   1.00-2.00   sec  0.00 Bytes  0.00 bits/sec    1   1.94 KBytes
[  4]   2.00-3.00   sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
[  4]   3.00-4.00   sec  0.00 Bytes  0.00 bits/sec    1   1.94 KBytes
[  4]   4.00-5.00   sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
[  4]   5.00-6.00   sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
[  4]   6.00-7.00   sec  0.00 Bytes  0.00 bits/sec    1   1.94 KBytes
[  4]   7.00-8.00   sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
[  4]   8.00-9.00   sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
[  4]   9.00-10.00  sec  0.00 Bytes  0.00 bits/sec    0   1.94 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  87.4 KBytes  71.6 Kbits/sec    5             sender
[  4]   0.00-10.00  sec  0.00 Bytes  0.00 bits/sec                  receiver

iperf Done.
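The 1990-vs-2000 boundary above lines up with DPDK's default mbuf data room of 2048 bytes: adding the usual 20-byte IP header, 20-byte TCP header, and 14-byte Ethernet header to the MSS gives a 2044-byte frame for MSS 1990 (fits in one mbuf) but 2054 bytes for MSS 2000 (does not). A minimal sketch of that arithmetic, assuming no IP/TCP options and no VLAN tag:

```go
package main

import "fmt"

// frameSize estimates the Ethernet frame size carried in an mbuf for a TCP
// segment with the given MSS: MSS + 20-byte IP header + 20-byte TCP header
// + 14-byte Ethernet header (options, VLAN tag, and CRC not counted).
func frameSize(mss int) int {
	const ipHeader, tcpHeader, ethHeader = 20, 20, 14
	return mss + ipHeader + tcpHeader + ethHeader
}

func main() {
	const defaultDataRoom = 2048 // DPDK's default per-mbuf data room
	fmt.Println(frameSize(1990), frameSize(1990) <= defaultDataRoom) // 2044 true
	fmt.Println(frameSize(2000), frameSize(2000) <= defaultDataRoom) // 2054 false
}
```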

ifilippov (Contributor) commented:

Hi guesslin,

Currently we don't support jumbo frames (we can add this to our TODO list if you need it).

It seems that the problem is the size of the mbuf in DPDK. As a quick workaround you can try changing RTE_MBUF_DEFAULT_BUF_SIZE (which is 2048) to 10000 in the createMempool function in low/low.h

I will now do some deeper research into supporting jumbo frames.

ifilippov self-assigned this Mar 21, 2019
ifilippov (Contributor) commented:

Hi guesslin,

I suppose that my previous workaround will not work due to network card limitations. It is also quite inefficient from a memory-usage point of view.

After some investigation I found that it is common to implement jumbo-frame support via packet chaining. I will have a patch in a couple of days. It will use the "Next" pointer of the packet structure to access the next part of the data. Are you OK with this solution?
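To make the chaining idea concrete, here is a minimal sketch of walking a "Next"-linked chain of mbuf-sized segments and restoring the contiguous jumbo payload. The `seg` type and `reassemble` helper are hypothetical illustrations, not nff-go API:

```go
package main

import "fmt"

// seg models one mbuf-sized chunk of a chained jumbo packet: Data holds this
// segment's bytes and Next points to the following segment, mirroring the
// proposed "Next" pointer on the packet structure.
type seg struct {
	Data []byte
	Next *seg
}

// reassemble walks the chain and concatenates the segments back into one
// contiguous jumbo payload.
func reassemble(head *seg) []byte {
	var out []byte
	for s := head; s != nil; s = s.Next {
		out = append(out, s.Data...)
	}
	return out
}

func main() {
	// A 5000-byte payload split into 2048-byte segments: 2048 + 2048 + 904.
	payload := make([]byte, 5000)
	for i := range payload {
		payload[i] = byte(i)
	}
	const room = 2048
	var head, tail *seg
	for off := 0; off < len(payload); off += room {
		end := off + room
		if end > len(payload) {
			end = len(payload)
		}
		s := &seg{Data: payload[off:end]}
		if head == nil {
			head = s
		} else {
			tail.Next = s
		}
		tail = s
	}
	fmt.Println(len(reassemble(head))) // prints 5000: the payload is restored whole
}
```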

guesslin (Contributor, Author) commented Mar 22, 2019

Hi @ifilippov, thanks for the quick replies.

> It seems that the problem is the size of the mbuf in DPDK. As a quick workaround you can try changing RTE_MBUF_DEFAULT_BUF_SIZE (which is 2048) to 10000 in the createMempool function in low/low.h

I'll try that patch in our local tests :)

> After some investigation I found that it is common to implement jumbo-frame support via packet chaining. I will have a patch in a couple of days. It will use the "Next" pointer of the packet structure to access the next part of the data. Are you OK with this solution?

As long as we can handle jumbo frames, that works for me. Just some thoughts about it:
If I understand "Next" correctly, our program would have to combine/split jumbo-frame packets itself?
So would it be better if nff-go could size mbufs according to the MTU configured on the network card?
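guesslin's sizing idea can be sketched as follows. The overhead constants are assumptions (14-byte Ethernet header, 4-byte CRC, and DPDK's 128-byte RTE_PKTMBUF_HEADROOM), and the helper name is hypothetical:

```go
package main

import "fmt"

// bufSizeForMTU derives a per-mbuf buffer size from the interface MTU
// instead of a fixed constant: MTU + Ethernet header (14) + CRC (4) +
// DPDK's RTE_PKTMBUF_HEADROOM (128).
func bufSizeForMTU(mtu int) int {
	const ethHeader, crc, headroom = 14, 4, 128
	return mtu + ethHeader + crc + headroom
}

func main() {
	// Standard MTU vs. the EC2 jumbo MTU from this thread.
	fmt.Println(bufSizeForMTU(1500)) // 1646
	fmt.Println(bufSizeForMTU(9001)) // 9147
}
```

The trade-off ifilippov raised still applies: sizing every mbuf for a 9001-byte MTU wastes memory when most packets are small, which is why chaining is the more common approach.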

ifilippov (Contributor) commented:

Hi guesslin,

you can check the develop branch after commit 5a6b5d6. You can use either memory jumbo or chained jumbo.

You should enable jumbo frames in the config during initialization, as in https://github.com/intel-go/nff-go/blob/develop/examples/jumbo.go

Feel free to report any problems with new functionality.
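For reference, a sketch of what that initialization could look like. The option field name below is an assumption based on this thread (guesslin later mentions "MemoryJumbo"; a "ChainedJumbo" counterpart is implied by the two modes); confirm the actual names against examples/jumbo.go:

```go
config := flow.Config{
	// Assumed field name; see examples/jumbo.go for the actual option
	// names covering the memory-jumbo and chained-jumbo modes.
	MemoryJumbo: true,
}
flow.CheckFatal(flow.SystemInit(&config))
```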

guesslin (Contributor, Author) commented Oct 8, 2020

Hi @ifilippov,
I tried memory_jumbo in v0.9.2, but encountered another problem; see #718

guesslin (Contributor, Author) commented:

Hi @ifilippov, I tried v0.9.2 on AWS with MemoryJumbo enabled. It's working well. Thanks!
