Allow single frame writes #62
I would greatly appreciate it if this could be implemented, as we're benchmarking the different websocket implementations for use with the
If you want pure performance, you definitely want to go with

I do not want to compromise this library's API early on without knowing how widespread this issue is, or without benchmarks showing the speed improvement. You can use the https://github.com/nhooyr/websocket/tree/exp-single branch if you still want to use it.
In terms of performance, it seems to be decently faster, but I'll analyze whether it's worth it later.
It's very confusing that at 4096 bytes, the buffered implementation is actually several thousand nanoseconds slower than the stream.
That was just my MacBook's CPU throttling, lol. Here are the results on a GCP VM:

```
nhooyr@anmol:~/go/src/nhooyr.io/websocket$ go test -bench=. -run=xxx
+ go test -bench=. -run=xxx
go: downloading github.com/google/go-cmp v0.2.0
go: downloading golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
go: extracting github.com/google/go-cmp v0.2.0
go: extracting golang.org/x/time v0.0.0-20190308202827-9d24e82272b4
goos: linux
goarch: amd64
pkg: nhooyr.io/websocket
BenchmarkConn/buffered/32-2       100000     22014 ns/op     1.45 MB/s
BenchmarkConn/buffered/128-2      100000     22281 ns/op     5.74 MB/s
BenchmarkConn/buffered/512-2      100000     22602 ns/op    22.65 MB/s
BenchmarkConn/buffered/1024-2     100000     22659 ns/op    45.19 MB/s
BenchmarkConn/buffered/4096-2      30000     45452 ns/op    90.12 MB/s
BenchmarkConn/buffered/16384-2     30000     51056 ns/op   320.90 MB/s
BenchmarkConn/buffered/65536-2     20000     81324 ns/op   805.86 MB/s
BenchmarkConn/buffered/131072-2    20000     94991 ns/op  1379.82 MB/s
BenchmarkConn/stream/32-2          50000     36525 ns/op     0.88 MB/s
BenchmarkConn/stream/128-2         50000     37903 ns/op     3.38 MB/s
BenchmarkConn/stream/512-2         50000     35850 ns/op    14.28 MB/s
BenchmarkConn/stream/1024-2        50000     36173 ns/op    28.31 MB/s
BenchmarkConn/stream/4096-2        30000     48555 ns/op    84.36 MB/s
BenchmarkConn/stream/16384-2       20000     62526 ns/op   262.03 MB/s
BenchmarkConn/stream/65536-2       10000    102465 ns/op   639.59 MB/s
BenchmarkConn/stream/131072-2      10000    169573 ns/op   772.95 MB/s
PASS
ok      nhooyr.io/websocket     34.457s
```
Given the significant difference even for 32-byte messages (the stream is about 65% slower), I think it makes sense to offer this API. Will expose it tomorrow.
I am confused as to why, though; there shouldn't be such a huge difference at such a small byte size 🤔
OK, my benchmarks sucked. Here are better ones:
So it looks like at larger message sizes the difference is negligible, but at smaller sizes it's very significant, because each Writer/Reader call has to allocate the reader/writer, since they're being put inside an interface value. That's why the stream allocates so much more every op. It's still not a large enough difference to warrant bringing this in for performance reasons; 2000 nanoseconds isn't that much. The allocation overhead might matter, but I'd like to wait to see other people's thoughts.
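The escape-to-interface effect described above can be observed directly. The sketch below is illustrative, not the library's code: `msgWriter` and `newWriter` are hypothetical stand-ins, but the mechanism is the same — returning a concrete writer through an interface type forces it onto the heap, so every message pays one allocation.

```go
package main

import (
	"fmt"
	"testing"
)

// payload is hoisted out of the closure so we measure only the writer allocation.
var payload = []byte("hi")

// msgWriter is a hypothetical stand-in for a per-message writer type.
type msgWriter struct{ n int }

func (w *msgWriter) Write(p []byte) (int, error) {
	w.n += len(p)
	return len(p), nil
}

// sink keeps the interface value alive so the compiler cannot elide the allocation.
var sink interface{ Write([]byte) (int, error) }

// newWriter returns the concrete writer through an interface type. Because
// the pointer escapes into the interface, each call heap-allocates one
// msgWriter — the per-message cost visible in the allocation counts above.
func newWriter() interface{ Write([]byte) (int, error) } {
	return &msgWriter{}
}

func main() {
	allocs := testing.AllocsPerRun(1000, func() {
		sink = newWriter()
		sink.Write(payload)
	})
	fmt.Printf("allocations per message: %.0f\n", allocs) // typically 1
}
```

This fixed per-message overhead is why the gap dominates at 32-byte messages but washes out at 128 KB.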
I've also updated https://github.com/nhooyr/websocket/tree/exp-single |
@nhooyr cool, appreciate the effort, even if it's not something you plan on putting in the public API! We'll benchmark internally with
I've decided against bringing this into the library. It's an unfortunate situation with Chrome, but I believe it's an outlier. For almost every single language, there is a robust WebSocket library available. See https://github.com/facundofarias/awesome-websockets or https://github.com/crossbario/autobahn-testsuite#users

A cursory Google search for "websocket fragmentation not supported" only brings up Chrome. So for now, I would recommend you use gorilla or gobwas.

If someone else runs into another stack in the wild that also does not support fragmentation, or that has issues with the performance, I'll reconsider this.
Argh, so I thought about this some more, and I think it's a good idea to include a

If you're just experimenting, it's frustrating to have to create a writer or a reader just to write or read a simple msg. The API is very minimal regardless, so I think the trade-off for the convenience is worth it, as the godoc still reads well.
OK. |
See #57
Some websocket implementations cannot handle fragmented messages, and the current API only allows writing fragmented messages: you write all your data to a writer first and then call Close, which writes a continuation frame with the FIN bit set.
It may also be good for performance.
I'm not going to expose an API for this right now; I'm opening this issue to see how badly it's wanted.