Work queues guidance #670
Replies: 1 comment
Given IoT Core's 512 KB/s per-connection bandwidth limit, a fully saturated client connection uses only a tiny fraction of a modern core/hardware thread, so there is no problem (from a network-data-processing standpoint) with seating hundreds of clients on a single thread. The SDK uses its own thread-pool construct (the event loop group) for all IO processing. If performance is paramount, it is critical that you never do any significant processing in the publish-received callback: anything you do there stalls every client bound to that thread. Instead, run your workers on a separate thread pool of your choosing, and make the message-received callback's only duty submitting the message to that pool.
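The hand-off pattern described above can be sketched as follows. This is a minimal illustration, not the SDK's actual API: the callback name `on_message_received` and its signature are hypothetical stand-ins for whatever receive hook your SDK binding provides; only the pattern of submitting work to a separate pool is the point.

```python
# Sketch: keep the receive callback cheap by handing each message off to
# a separate worker pool, so the SDK's event-loop thread is never stalled.
from concurrent.futures import ThreadPoolExecutor
import queue

# Worker pool sized for your processing workload, independent of the
# SDK's event loop group.
workers = ThreadPoolExecutor(max_workers=8)
results = queue.Queue()

def process(topic, payload):
    # Heavy per-message work happens here, OFF the event-loop thread.
    results.put((topic, payload.decode()))

def on_message_received(topic, payload):
    # Hypothetical callback. Do NOT process here: anything slow in this
    # callback stalls every client bound to the same event-loop thread.
    workers.submit(process, topic, payload)

# Simulated deliveries standing in for the SDK invoking the callback:
for i in range(3):
    on_message_received("devices/sensor", f"msg-{i}".encode())

workers.shutdown(wait=True)
print(sorted(results.queue))
```

The design point is that `workers.submit(...)` returns almost immediately, so the callback's cost is constant regardless of how expensive `process` is.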
I'm currently using the shared subscriptions feature to create work queues with clustered server applications that consume messages from multiple devices.
Whenever I need more throughput, I create additional clients and subscribe them to the same shared groups to parallelize the workload. However, this library appears to cap the number of parallel threads: beyond a certain point, adding more consumers brings no further benefit to the application.
I suspect this because I only ever see a limited number of "AWS Event Loop" threads in the logs (fewer than 10 distinct threads).
Can anyone provide insights on how to correctly implement work queues using this library?
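For readers unfamiliar with the setup being described: MQTT 5 shared subscriptions turn an ordinary topic filter into a work queue by prefixing it with `$share/{GroupName}/`, and the broker distributes each matching message to only one member of the group. The sketch below illustrates the fan-out with a simulated broker; the group name is hypothetical, and the strict round-robin is a simplification of how IoT Core actually balances messages among group members.

```python
# Illustrative only: how a shared-subscription work queue fans out.
# The $share prefix is MQTT 5 standard syntax; the round-robin loop
# merely simulates the broker choosing one group member per message.
SHARE_GROUP = "workers"  # hypothetical group name
topic_filter = f"$share/{SHARE_GROUP}/devices/+/data"

# Three clients subscribed with the same shared topic filter:
consumers = [[] for _ in range(3)]

# Nine incoming messages, each delivered to exactly one consumer:
for i in range(9):
    consumers[i % len(consumers)].append(f"msg-{i}")

print(topic_filter)
print([len(c) for c in consumers])
```

Because each message goes to exactly one group member, adding consumers scales throughput only while the consumers themselves are the bottleneck; if all of them share a starved event-loop thread pool (as the answer above explains), extra consumers stop helping.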