WIP: Better story about in flight messages and rate limit #141
Conversation
I didn't completely understand what you mean here, but overall your solution looks much more elegant and clean than what I did :). You seem to have cleaned up
Or never mind. Prepend isn't a big change. We'll leave it there :)
Ok. Imagine a client continuously publishing messages while the connection or broker isn't fast enough to handle them; the rumqtt internal queue
Yeah. During this refactoring I removed
The most crucial point to me is whether I made something stupid when returning
One way I think a deadlock could happen is if publish
These changes already look good to me. I'll merge this into a different branch and experiment with some test cases that I already wrote. Thanks a lot for the contribution :)
Ok. Thanks. I didn't have time to work on this the last few days. Last week I ran a couple of load tests with the patch without any issues, but the tricky parts are definitely reconnections and connection flaws. Let me know if there's anything I can do.
No problem. I'll finish this :)
I just have a few doubts about stream progress when the request channel is full and stream
Yes. This is legitimate. I played around this evening and learnt that mixing
Looking forward to your final patch!
Hi,

the way `rumqtt` handles this kind of overload situation seems not optimal to me. The current behaviour just blocks for a configured amount of time when the request channel is full. If the application sends any command during this delay, `rumqtt` will block again when polling next time. This leads to a rate of `1/configured delay`. If the client keeps sending, you will stay at this rate.

I changed this implementation and replaced the fixed delay with a stream interception when the publication queue is at a configured level (`MqttOptions::in_flight`). The essential part is here:

This affects any command on the request queue. I'm not sure if there is a definition for the in-flight window and whether it should contain e.g. `Subscribe`s or others. I personally think it doesn't matter if they're included.

Next, the stream throttling implementation is also a little bit misleading. I replaced the sleep that happens when a configured rate is reached with a simple `StreamExt::throttle`, which is a perfect match.
This patch changes the client API.
What do you think? I consider this work in progress and appreciate any feedback!
cheers!