No limit on the message queue size, #23
No limit on the message queue size, but instead disconnect users if it takes too long to send a message. Continuation of centrifugal#19.
Just ran a benchmark. First, without the ring queue. First column: number of connected clients; second: nanoseconds between the moment one message is sent into a channel and the moment all clients receive that message from the channel:
Memory usage: 581 MB. And with the queue:
Memory: 564 MB. As we can see, memory usage is reduced a little and performance for this task is ~25% worse (GOMAXPROCS=1). With GOMAXPROCS=2, which is optimal for my old Mac Air, without the queue:
With queue:
Memory usage is again a bit less; the performance impact is ~15%. What do you think about these results? It's a bit upsetting to lose 20%, but theoretically we get a more predictable system.
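For context, here is a rough sketch of the queued design being benchmarked above. All names and structure are hypothetical illustrations, not Centrifugo's actual code: publishers append to an unbounded per-client queue and a single writer goroutine drains it, so a slow socket never blocks the hub; the extra locking and goroutine hand-off is where the measured overhead would come from.

```go
package main

import (
	"fmt"
	"sync"
)

// queuedClient is a hypothetical sketch of the queued design: writes go
// into an unbounded in-memory queue and a dedicated writer goroutine
// drains it, so publishers never block on a slow connection.
type queuedClient struct {
	mu    sync.Mutex
	cond  *sync.Cond
	queue []string
	done  bool
	sent  []string // stands in for the real network session
	wg    sync.WaitGroup
}

func newQueuedClient() *queuedClient {
	c := &queuedClient{}
	c.cond = sync.NewCond(&c.mu)
	c.wg.Add(1)
	go c.writer()
	return c
}

// Enqueue appends a message; there is no size limit, which is exactly the
// trade-off this issue discusses.
func (c *queuedClient) Enqueue(msg string) {
	c.mu.Lock()
	c.queue = append(c.queue, msg)
	c.mu.Unlock()
	c.cond.Signal()
}

// writer drains the queue in order until Close is called and the queue is empty.
func (c *queuedClient) writer() {
	defer c.wg.Done()
	for {
		c.mu.Lock()
		for len(c.queue) == 0 && !c.done {
			c.cond.Wait()
		}
		if len(c.queue) == 0 && c.done {
			c.mu.Unlock()
			return
		}
		msg := c.queue[0]
		c.queue = c.queue[1:]
		c.mu.Unlock()
		c.sent = append(c.sent, msg) // real code would do sess.Send(msg)
	}
}

// Close flushes remaining messages and stops the writer goroutine.
func (c *queuedClient) Close() {
	c.mu.Lock()
	c.done = true
	c.mu.Unlock()
	c.cond.Signal()
	c.wg.Wait()
}

func main() {
	c := newQueuedClient()
	for i := 0; i < 3; i++ {
		c.Enqueue(fmt.Sprintf("msg-%d", i))
	}
	c.Close()
	fmt.Println(c.sent)
}
```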
Benchmarks the raw SubHub performance as well as client routing performance. Right now it will usually crash with an internal server error until centrifugal#19 and centrifugal#23 are fixed.
Very interesting. I think the main difference comes from the send timeout. What are the numbers if you change the function to:

```go
func (c *client) sendMsgTimeout(msg string) error {
	return c.sess.Send(msg)
}
```

I think this would make it just as fast as the "old" code, but clients will no longer time out on send.
Yep (compare with the topmost result: GOMAXPROCS=1, no queue)
The buffer and separate goroutine can now also be removed from the websocket transport. See commit 3bc068d; it should speed up websocket quite a bit. So the question is, do we want:
No limit on the message queue size,
but instead disconnect users if it takes too long to send a message.
Continuation of #19.
TODO: