
Adjust packet mbuf size to reflect MTU #609

Merged
guvenc merged 2 commits into main from feature/mempool_size
Sep 30, 2024

Conversation

@PlagueCZ
Contributor

@PlagueCZ PlagueCZ commented Sep 26, 2024

Due to #606 we needed to increase the packet buffer size to accommodate jumbo frames, since that is what host-to-host communication uses. This in turn raised the hugepage requirement for the dpservice pod from 4G to 16G on our compute nodes.

So instead I created two memory pools, one for 1500 MTU, the other for 9100 MTU.

In normal operation, dpservice only encounters 1500 MTU packets, yet the packet buffer size is set to RTE_MBUF_DEFAULT_BUF_SIZE (rte_mbuf.h):

#define RTE_MBUF_DEFAULT_DATAROOM       2048
#define RTE_MBUF_DEFAULT_BUF_SIZE       \
         (RTE_MBUF_DEFAULT_DATAROOM + RTE_PKTMBUF_HEADROOM)

So this PR lowers the buffer size to decrease memory requirements in normal mode and uses jumbo-frame buffers in pf1-proxy mode. Unfortunately dpservice-dump does not know which mode is active, so it always has to use the bigger variant; it only uses a small ring buffer, though, so the allocation is not huge.

I have deployed the 1518 size on an OSC cluster and it has been running for two weeks now without visible problems.

The 9118 pool has been tested in OSC, but not in its current state (i.e. using a TAP device). I chose to do it this way to keep this PR separate from the big changes to the proxy.

@github-actions github-actions bot added size/XS enhancement New feature or request labels Sep 26, 2024
@PlagueCZ PlagueCZ force-pushed the feature/mempool_size branch from 742a9cd to 682d3f8 Compare September 26, 2024 00:39
@github-actions github-actions bot added size/S and removed size/XS labels Sep 26, 2024
@PlagueCZ PlagueCZ changed the title Decrease packet mbuf size to reflect MTU Adjust packet mbuf size to reflect MTU Sep 26, 2024
@PlagueCZ PlagueCZ marked this pull request as ready for review September 26, 2024 01:00
@PlagueCZ PlagueCZ requested a review from a team as a code owner September 26, 2024 01:00
@PlagueCZ PlagueCZ marked this pull request as draft September 26, 2024 12:54
@PlagueCZ PlagueCZ force-pushed the feature/mempool_size branch from 682d3f8 to e6c9316 Compare September 26, 2024 15:11
@github-actions github-actions bot added size/M and removed size/S labels Sep 26, 2024
@PlagueCZ PlagueCZ force-pushed the feature/mempool_size branch 2 times, most recently from b3a752c to fe1178c Compare September 26, 2024 16:14
@PlagueCZ PlagueCZ marked this pull request as ready for review September 26, 2024 16:32
@PlagueCZ PlagueCZ requested a review from guvenc September 27, 2024 13:28
@PlagueCZ
Contributor Author

Some hard data from the Prometheus exporter:

  • old dpservice (with 900k packet memory pool) had HeapSize: 3G and AllocSize: 2.5G
  • new dpservice (with 350k packet memory pool) has HeapSize: 2G and AllocSize: 1.5G
  • new dpservice with pf1-proxy (thus another jumbo pool): has HeapSize: 3G and AllocSize: 2G

@PlagueCZ
Contributor Author

I had to change the meson handling of ENABLE_ definitions (i.e. make them all visible to CPP), because ENABLE_PF1_PROXY is now part of the dpdk_layer structure, which in turn is accessed by the gRPC C++ code; that was causing strange errors, as the C++ side of course used a different structure definition.
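A hypothetical sketch of the kind of meson change described above. The option name `enable_pf1_proxy` is illustrative, not necessarily what the project uses; the point is that the define must reach both compilers:

```meson
# Hypothetical sketch: pass the ENABLE_* toggle to both the C and the
# C++ compiler, so C code and the gRPC C++ code agree on the layout of
# structs such as dpdk_layer. Option name is illustrative.
if get_option('enable_pf1_proxy')
  add_project_arguments('-DENABLE_PF1_PROXY', language: ['c', 'cpp'])
endif
```

Defining the macro for only one language is exactly what produces the "strange errors": each compiler sees a struct with a different size and field layout, which breaks at runtime rather than at compile time.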

Contributor

@guvenc guvenc left a comment


Nice that we could reduce the memory footprint of the dpservice.

@PlagueCZ PlagueCZ force-pushed the feature/mempool_size branch from f24c7fd to 6d6d43c Compare September 27, 2024 22:21
@guvenc guvenc merged commit 6d6d43c into main Sep 30, 2024
@guvenc guvenc deleted the feature/mempool_size branch September 30, 2024 08:46
@hardikdr hardikdr added this to Roadmap Jun 26, 2025
@hardikdr hardikdr moved this to Done in Roadmap Oct 15, 2025