
Add Zstd compression to the websocket messages #2846


Open
wants to merge 2 commits into master

Conversation


@ResuBaka ResuBaka commented Jun 7, 2025

Description of Changes

This adds an option to the websocket/subscribe endpoint so that Zstd can be used for compression, in addition to None/Brotli/Gzip.
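A minimal sketch of what the new option might look like; the type and function names below are illustrative, not SpacetimeDB's actual API:

```rust
// Sketch (assumed names): parse the compression query parameter on
// /websocket/subscribe into an enum, now including a Zstd variant.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Compression {
    None,
    Brotli,
    Gzip,
    Zstd, // new option added by this PR
}

fn parse_compression(s: &str) -> Option<Compression> {
    match s {
        "None" => Some(Compression::None),
        "Brotli" => Some(Compression::Brotli),
        "Gzip" => Some(Compression::Gzip),
        "Zstd" => Some(Compression::Zstd), // new in this PR
        _ => Option::None, // unknown value: reject rather than guess
    }
}

fn main() {
    assert_eq!(parse_compression("Zstd"), Some(Compression::Zstd));
    assert_eq!(parse_compression("lz4"), Option::None);
}
```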

API and ABI breaking changes

None

Expected complexity level and risk

1

Testing

  • I have tested it with the simple chat example, where I enabled Zstd compression and added some logging to confirm it was used
  • Testing the compression speed to find the best level

Additional

What we could look into is using Zstd's dictionary feature to improve performance even further, as it could help with the shared base structure of each message. The only extra work would then be adding a variant such as ZstdDict to the enum, since both the client and the server would need to know about the dictionary.

@ResuBaka ResuBaka requested review from Centril and gefjon as code owners June 7, 2025 10:03

CLAassistant commented Jun 7, 2025

CLA assistant check
All committers have signed the CLA.

@gefjon (Contributor) commented Jun 10, 2025

What's the motivation for this change? Is there some environment you want to connect to SpacetimeDB from where Zstd compression is available, but GZip and Brotli are not?

@ResuBaka (Author) commented

The motivation is that Zstd should in general be faster at decompression, so it should at least be better for clients.

For the server it could cause a small increase in CPU usage.

So I wanted to add it as one more compression option, which, depending on how big the responses are, could be better than the other options. To verify that, a benchmark would be needed to see how the speed of each compression option compares.

Do you guys currently have an easy way to benchmark mixed message compression/decompression speed?
