Zstd experiment #1

Merged: ericvolp12 merged 5 commits into main on Sep 22, 2024
Conversation

ericvolp12
Collaborator

Findings from this test: enabling zstd compression for clients reduces bandwidth per message by ~31.5%, but increases CPU load dramatically.

In the CPU profiles below, Jetstream is serving from the playback buffer as fast as it can to a single client.

  • The first profile is without compression, serving at 198k evt/sec
  • The second profile is with compression, serving at ~28k evt/sec

The CPU cost of compression seems too high for me to want to support it for now. Jetstream's bandwidth usage, even with hundreds of consumers, is still well below 1 Gbps, and running multiple instances makes it relatively easy to scale Jetstream out horizontally for more consumers if needed.
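For context, compressing each outgoing message independently looks roughly like the sketch below. This is illustrative only, not the actual Jetstream code: the sample event payload is made up, and it just uses `github.com/klauspost/compress/zstd` for a per-message encode/decode round trip. Since every event gets encoded separately on the serving path, CPU cost grows directly with event throughput, which is what shows up in the profiles.

```go
// Minimal sketch of per-message zstd compression on the write path,
// using github.com/klauspost/compress/zstd. Not the Jetstream implementation;
// the event payload below is a made-up example.
package main

import (
	"fmt"
	"log"

	"github.com/klauspost/compress/zstd"
)

func main() {
	// A single Encoder/Decoder pair can be shared; EncodeAll/DecodeAll
	// are safe for concurrent use.
	enc, err := zstd.NewWriter(nil)
	if err != nil {
		log.Fatal(err)
	}
	dec, err := zstd.NewReader(nil)
	if err != nil {
		log.Fatal(err)
	}

	event := []byte(`{"did":"did:plc:example","commit":{"collection":"app.bsky.feed.post"}}`)

	// Compress a single event payload before writing it to a websocket client.
	compressed := enc.EncodeAll(event, nil)

	// The client-side decode path.
	decompressed, err := dec.DecodeAll(compressed, nil)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("raw=%dB compressed=%dB roundtrip-ok=%v\n",
		len(event), len(compressed), string(decompressed) == string(event))
}
```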

This was a fun experiment though!

[Screenshots: CPU profile without compression; CPU profile with compression]

@ericvolp12
Collaborator Author

I made some changes to use a trained dictionary for compression, and also modified Jetstream to compress each event only once, storing events both as raw JSON and as compressed JSON in another PebbleDB. This allows replay without any round-tripping through an encoder/decoder on the server side, and lets Jetstream emit events to all consumers while having compressed each one only once.
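Roughly, the "compress once, store both forms" idea amounts to something like the sketch below. The DB paths, key scheme, and dictionary file are assumptions for illustration, not the code in this PR; it uses `github.com/klauspost/compress/zstd` with a pre-trained dictionary and `github.com/cockroachdb/pebble` for the two stores.

```go
// Rough sketch of compressing an event once with a trained zstd dictionary
// and persisting both raw and compressed JSON in separate PebbleDBs, so
// replay can serve compressed consumers without re-encoding.
// Paths, key scheme, and dictionary handling are illustrative assumptions.
package main

import (
	"log"
	"os"

	"github.com/cockroachdb/pebble"
	"github.com/klauspost/compress/zstd"
)

func main() {
	// Hypothetical on-disk layout; Jetstream's real layout may differ.
	rawDB, err := pebble.Open("data/events-raw", &pebble.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer rawDB.Close()

	zstdDB, err := pebble.Open("data/events-zstd", &pebble.Options{})
	if err != nil {
		log.Fatal(err)
	}
	defer zstdDB.Close()

	// Dictionary trained offline on sample events (e.g. with `zstd --train`).
	dict, err := os.ReadFile("zstd.dict")
	if err != nil {
		log.Fatal(err)
	}
	enc, err := zstd.NewWriter(nil, zstd.WithEncoderDict(dict))
	if err != nil {
		log.Fatal(err)
	}

	key := []byte("seq:0000000001") // illustrative key scheme
	event := []byte(`{"did":"did:plc:example","time_us":1727000000000000}`)

	// Compress once, then persist both representations. Consumers that opt
	// in to compression can be served straight from zstdDB on replay.
	compressed := enc.EncodeAll(event, nil)
	if err := rawDB.Set(key, event, pebble.Sync); err != nil {
		log.Fatal(err)
	}
	if err := zstdDB.Set(key, compressed, pebble.Sync); err != nil {
		log.Fatal(err)
	}
}
```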

ericvolp12 merged commit 3a83dcb into main on Sep 22, 2024