MM-54998: Optimize JSON marshalling in websocket broadcast #25286
Conversation
Marshalling a `json.RawMessage` is not zero-overhead: instead of passing the bytes through unchanged, `encoding/json` compacts the raw message, which starts to add up at scale (see golang/go#33422). Since we have full control over the message being constructed, we can simply write the byte slice into the network stream. This gives a considerable performance boost.

```
goos: linux
goarch: amd64
pkg: github.com/mattermost/mattermost/server/public/model
cpu: Intel(R) Core(TM) i5-8265U CPU @ 1.60GHz

             │   old.txt    │             new_2.txt              │
             │    sec/op    │    sec/op     vs base              │
EncodeJSON-8   1640.5n ± 2%   289.6n ± 1%  -82.35% (p=0.000 n=10)

             │  old.txt   │            new_2.txt            │
             │    B/op    │    B/op     vs base             │
EncodeJSON-8   528.0 ± 0%   503.0 ± 0%  -4.73% (p=0.000 n=10)

             │  old.txt   │            new_2.txt             │
             │ allocs/op  │  allocs/op   vs base             │
EncodeJSON-8   5.000 ± 0%   4.000 ± 0%  -20.00% (p=0.000 n=10)
```

P.S. No concerns over changing the model API because we are still on 0.x.

https://mattermost.atlassian.net/browse/MM-54998

```release-note
Improve websocket event marshalling performance
```
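To make the problem and the shape of the fix concrete, here is a minimal, runnable sketch. It is not the actual change: `writeEvent` and the envelope layout are hypothetical, and only the technique (writing pre-marshalled bytes directly instead of round-tripping a `json.RawMessage` through `encoding/json`) comes from this PR.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"os"
	"strconv"
)

func main() {
	// A pre-marshalled payload, as the broadcast path would hold it.
	raw := json.RawMessage(`{"event": "posted",  "data": {"count": 1}}`)

	// json.Marshal does not pass a RawMessage through verbatim: it
	// re-validates and compacts it (golang/go#33422), costing CPU on
	// every broadcast.
	compacted, err := json.Marshal(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", compacted) // whitespace stripped: compaction happened

	// The alternative: since the payload is already valid JSON, splice
	// the bytes straight into the output frame.
	if err := writeEvent(os.Stdout, 42, raw); err != nil {
		panic(err)
	}
}

// writeEvent is a hypothetical stand-in for the optimized encoder: it
// assembles the envelope by hand and writes the precomputed body without
// round-tripping it through encoding/json.
func writeEvent(w io.Writer, seq int64, body []byte) error {
	var buf bytes.Buffer
	buf.WriteString(`{"seq":`)
	buf.WriteString(strconv.FormatInt(seq, 10))
	buf.WriteString(`,"data":`)
	buf.Write(body) // raw bytes: no re-validation, no compaction
	buf.WriteString("}\n")
	_, err := w.Write(buf.Bytes())
	return err
}
```

The saving comes from the `buf.Write(body)` line: a `json.RawMessage` field would otherwise be re-compacted for every recipient of the broadcast.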
If only we had encoding/json/v2 already!
```diff
+var seq int64
 enc := json.NewEncoder(io.Discard)
 for i := 0; i < b.N; i++ {
-	err = ev.Encode(enc)
+	ev = ev.SetSequence(seq)
+	err = ev.Encode(enc, io.Discard)
+	seq++
 }
```
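As a side note, the overhead this benchmark targets can be reproduced in isolation. Below is a minimal, self-contained sketch (droppable into any `_test.go` file; the payload and benchmark names are made up, and this is not the Mattermost benchmark) comparing an encode of a `json.RawMessage` against a direct write of the same bytes:

```go
package model_test

import (
	"encoding/json"
	"io"
	"testing"
)

// A pre-marshalled payload standing in for a websocket event body.
var payload = json.RawMessage(`{"event":"posted","data":{"channel_id":"abc"}}`)

// Encoding a RawMessage through encoding/json re-validates and
// compacts it on every call (golang/go#33422).
func BenchmarkEncodeRawMessage(b *testing.B) {
	enc := json.NewEncoder(io.Discard)
	for i := 0; i < b.N; i++ {
		if err := enc.Encode(payload); err != nil {
			b.Fatal(err)
		}
	}
}

// Writing the already-marshalled bytes directly skips that work.
func BenchmarkDirectWrite(b *testing.B) {
	for i := 0; i < b.N; i++ {
		if _, err := io.Discard.Write(payload); err != nil {
			b.Fatal(err)
		}
	}
}
```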
Why do we need this here now? And why don't we use `int64(i)` directly?
It just makes things a bit more realistic, as that's what happens in production:

mattermost/server/channels/app/platform/web_conn.go, lines 465 to 467 at ec4dc6b:

```go
evt = evt.SetSequence(wc.Sequence)
err = evt.Encode(enc)
wc.Sequence++
```
Ah, good catch then :) Thanks for the clarification
Nice! Looking forward to the load-tests 🚀
/e2e-test

Successfully triggered E2E testing!

e2e tests passed. Merging.