messages::serialize: take/put buffers from/into a SerializeBufferPool #2823

Merged: 4 commits merged into master from centril/reuse-ws-msg-allocations, Jun 16, 2025

Conversation

@Centril (Contributor) commented Jun 3, 2025

Description of Changes

Fixes #1858.

In ws_client_actor_inner, after calling ws.feed(msg), keep the underlying allocations around for reuse, using a default buffer size of 4 KiB.
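
A minimal sketch of the pooling idea, assuming crossbeam's SegQueue as the backing structure (the queue type discussed in review below); the type and method names follow the PR title, but the bodies here are illustrative rather than the actual implementation:

```rust
use crossbeam_queue::SegQueue;

/// Default capacity for freshly allocated buffers (4 KiB).
const DEFAULT_CAPACITY: usize = 4 * 1024;

/// A lock-free pool of reusable serialization buffers.
pub struct SerializeBufferPool {
    buffers: SegQueue<Vec<u8>>,
}

impl SerializeBufferPool {
    pub fn new() -> Self {
        Self { buffers: SegQueue::new() }
    }

    /// Take a buffer from the pool, or allocate a fresh one.
    pub fn take(&self) -> Vec<u8> {
        self.buffers
            .pop()
            .unwrap_or_else(|| Vec::with_capacity(DEFAULT_CAPACITY))
    }

    /// Return a buffer to the pool, keeping its allocation for reuse.
    pub fn put(&self, mut buf: Vec<u8>) {
        buf.clear(); // drop contents, keep capacity
        self.buffers.push(buf);
    }
}
```

The intended cycle is take, serialize, ws.feed(msg), then put, so the allocation backing each message survives the send and is reused for the next one.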

API and ABI breaking changes

None

Expected complexity level and risk

The changes are fairly straightforward and not deep, but they touch an important code path.

Testing

  • Run a perf test and inspect flamegraphs and related profiling output.

@Centril Centril requested a review from gefjon as a code owner June 3, 2025 20:30
@Centril Centril force-pushed the centril/reuse-ws-msg-allocations branch from 16838fa to 0203008 Compare June 3, 2025 20:30
@Centril Centril requested a review from coolreader18 June 3, 2025 20:30
@gefjon (Contributor) left a comment


Code looks reasonable. I'd prefer to have the pool size metric(s) in before we merge, so we can know that the pool isn't growing without bound. For these metrics, I'd recommend:

  • Number of buffers in the pool.
  • Total size in bytes of all buffers in the pool.
  • Number of newly-allocated buffers.

I think these would give us sufficient information to decide if we need to impose a cap, and what that cap should be if so.
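
A hypothetical sketch of those three counters using plain atomics; a real implementation would presumably wire these into SpacetimeDB's existing metrics infrastructure, and all names here are illustrative:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Counters tracking pool behavior, so unbounded growth is visible.
#[derive(Default)]
pub struct PoolMetrics {
    /// Number of buffers currently sitting in the pool.
    pub pooled_buffers: AtomicUsize,
    /// Total capacity in bytes of all pooled buffers.
    pub pooled_bytes: AtomicUsize,
    /// Number of buffers that had to be freshly allocated.
    pub new_allocations: AtomicUsize,
}

impl PoolMetrics {
    /// Called when a buffer is returned to the pool.
    pub fn on_put(&self, buf_capacity: usize) {
        self.pooled_buffers.fetch_add(1, Ordering::Relaxed);
        self.pooled_bytes.fetch_add(buf_capacity, Ordering::Relaxed);
    }

    /// Called when a buffer is taken from the pool.
    pub fn on_take(&self, buf_capacity: usize) {
        self.pooled_buffers.fetch_sub(1, Ordering::Relaxed);
        self.pooled_bytes.fetch_sub(buf_capacity, Ordering::Relaxed);
    }

    /// Called when the pool was empty and a new buffer was allocated.
    pub fn on_fresh_alloc(&self) {
        self.new_allocations.fetch_add(1, Ordering::Relaxed);
    }
}
```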

@coolreader18 (Collaborator) left a comment


Reviewed this yesterday but didn't hit the button - LGTM, this is very straightforward yet effective! One possible concern with using a SegQueue: if we happen to allocate a bunch of memory at one point, it hangs around forever. But choosing the right capacity for a bounded ArrayQueue seems hard.
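
For comparison, a sketch of the bounded alternative being weighed here, assuming crossbeam's ArrayQueue; the capacity is an arbitrary placeholder, which is exactly the hard-to-pick number mentioned above:

```rust
use crossbeam_queue::ArrayQueue;

/// A pool with a fixed capacity, capping total retained memory.
pub struct BoundedBufferPool {
    buffers: ArrayQueue<Vec<u8>>,
}

impl BoundedBufferPool {
    pub fn new(capacity: usize) -> Self {
        Self { buffers: ArrayQueue::new(capacity) }
    }

    /// Return a buffer to the pool. If the pool is full, the buffer is
    /// simply dropped, unlike the unbounded SegQueue, which retains
    /// every buffer ever pushed.
    pub fn put(&self, mut buf: Vec<u8>) {
        buf.clear();
        // `push` returns Err(buf) when the queue is at capacity;
        // dropping the buffer is the desired behavior here.
        let _ = self.buffers.push(buf);
    }

    /// Take a pooled buffer, or allocate a fresh 4 KiB one.
    pub fn take(&self) -> Vec<u8> {
        self.buffers
            .pop()
            .unwrap_or_else(|| Vec::with_capacity(4 * 1024))
    }
}
```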

@coolreader18 (Collaborator) commented Jun 5, 2025

Is there a benefit to a global queue over just reusing the same buffer per client? It seems each client would only use 1 or 2 buffers, and per-client buffer capacities would stay specialized to the specific client (and module) rather than growing toward the global maximum.

@coolreader18 (Collaborator) commented Jun 5, 2025

Something like this
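
A hypothetical illustration of the per-client approach: the buffer lives on the client actor and is swapped out for each serialization. All names here are stand-ins, not the PR's actual code:

```rust
use std::mem;

/// Hypothetical client actor owning one reusable serialization buffer.
pub struct ClientActor {
    /// Kept across messages so its allocation is reused.
    serialize_buf: Vec<u8>,
}

impl ClientActor {
    pub fn new() -> Self {
        Self { serialize_buf: Vec::with_capacity(4 * 1024) }
    }

    /// Serialize `msg`, reusing the actor-local buffer.
    pub fn serialize_message(&mut self, msg: &str) -> Vec<u8> {
        // Take the buffer out, leaving an empty Vec in its place.
        let mut buf = mem::take(&mut self.serialize_buf);
        buf.clear();
        buf.extend_from_slice(msg.as_bytes()); // stand-in for real encoding
        buf
    }

    /// Put the buffer back once the websocket send has completed.
    pub fn recycle(&mut self, mut buf: Vec<u8>) {
        buf.clear();
        self.serialize_buf = buf;
    }
}
```

No synchronization is needed, and each buffer's capacity tracks its own client's message sizes rather than the global maximum.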

@bfops bfops added release-any To be landed in any release window performance A PR/Issue related to improving performance of stdb labels Jun 9, 2025
@Centril (Contributor, Author) commented Jun 10, 2025

> Is there a benefit to a global queue over just reusing the same buffer per client? It seems each client would only use 1 or 2 buffers, and per-client buffer capacities would stay specialized to the specific client (and module) rather than growing toward the global maximum.

A few benefits to global pools:

  • Less memory usage overall for the buffers (not a huge difference... at 1024 CCU we're talking around 8 MiB).
  • Better cache locality as the actual OS thread is not jumping between different buffers.
  • If clients leave and join frequently, per-client buffers don't help, while the global pool does.

That said, the benefit of local buffers is the avoided synchronization overhead.
I think we can try local buffers first. That way, we also don't need to worry about pool-growth metrics in the same way.

@Centril Centril force-pushed the centril/reuse-ws-msg-allocations branch from 0203008 to de170b6 Compare June 10, 2025 17:43
@Centril Centril force-pushed the centril/reuse-ws-msg-allocations branch from de170b6 to 42dc169 Compare June 11, 2025 13:00
@Centril Centril requested a review from gefjon June 11, 2025 13:01
gefjon and others added 2 commits June 16, 2025 11:38
Not sure how this got missed, as it's used in `core/src/startup.rs`.
Perhaps a bad merge/rebase?
@gefjon gefjon added this pull request to the merge queue Jun 16, 2025
Merged via the queue into master with commit 1318e7e Jun 16, 2025
19 checks passed
@Centril Centril deleted the centril/reuse-ws-msg-allocations branch June 16, 2025 17:11
Labels
  • performance: A PR/Issue related to improving performance of stdb
  • release-any: To be landed in any release window
Development

Successfully merging this pull request may close these issues.

Remove unnecessary allocations in message serialization
4 participants