fix(hybridcloud) Deliver payloads concurrently #66870
Conversation
We're hitting the ceiling of how many messages we can deliver with single-threaded workers. Because most of our time is spent in IO, we can get more throughput by sacrificing strong ordering and delivering messages in small concurrent batches. Should a batch contain a delivery that needs to be retried, progress will be stopped. This means we could deliver n-1 messages out of order if we hit an intermittent error. Initially I only want to use parallel delivery for large mailboxes, as they would naturally not have strong ordering due to the amount of concurrency they create. Both the use of threaded IO and the threadpool size can be controlled by options.
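A minimal sketch of the batching scheme described above, assuming a `ThreadPoolExecutor` for the IO-bound sends. The names `deliver`, `deliver_mailbox`, and `BATCH_SIZE` are hypothetical placeholders, not the PR's actual code:

```python
from concurrent.futures import ThreadPoolExecutor

BATCH_SIZE = 4  # would be driven by an option in practice


def deliver(message):
    """Placeholder for the real IO-bound delivery call.

    Returns the message on success, None when a retry is needed.
    """
    return message


def deliver_mailbox(messages, pool_size=4):
    """Deliver messages in small concurrent batches.

    Each batch is sent concurrently; if any delivery in the batch
    fails, progress halts so the failed message can be retried. Up to
    len(batch) - 1 messages may then have landed out of order.
    """
    delivered = []
    with ThreadPoolExecutor(max_workers=pool_size) as executor:
        for start in range(0, len(messages), BATCH_SIZE):
            batch = messages[start:start + BATCH_SIZE]
            results = list(executor.map(deliver, batch))
            if any(r is None for r in results):
                # A delivery needs a retry: stop advancing the mailbox.
                break
            delivered.extend(results)
    return delivered
```

The batch boundary is what bounds the ordering loss: only deliveries in flight alongside the failure can complete out of order, never the whole mailbox.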
if payload_record.attempts >= MAX_ATTEMPTS:
    payload_record.delete()
This is a behavior difference from the single-threaded delivery. I was trying to avoid iterating records multiple times and having more complex logic. The downside is sending an 11th request.
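If I read the diff right, the extra request comes from checking attempts after the send rather than before it. A hypothetical sketch of that ordering (the record shape and `deliver_once` are assumptions, not the PR's code):

```python
MAX_ATTEMPTS = 10


class PayloadRecord:
    def __init__(self, attempts=0):
        self.attempts = attempts
        self.deleted = False

    def delete(self):
        self.deleted = True


def deliver_once(record, send):
    # Unlike a pre-send check, checking attempts after the send means a
    # record that is already at MAX_ATTEMPTS gets one more request (the
    # "11th") before it is deleted.
    send(record)
    record.attempts += 1
    if record.attempts >= MAX_ATTEMPTS:
        record.delete()
```

The trade-off named above: one avoidable request per exhausted record, in exchange for a single pass over the records.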
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@ Coverage Diff @@
##            master   #66870    +/-   ##
=========================================
  Coverage    84.27%   84.27%
=========================================
  Files         5308     5306      -2
  Lines       237401   237334     -67
  Branches     41066    41056     -10
=========================================
- Hits        200071   200025     -46
+ Misses       37112    37091     -21
  Partials       218      218
Looks parallel to me