SubscriptionReceiver should receive messages using async overload #158

Closed
manikrish opened this Issue Apr 17, 2012 · 9 comments

Contributor

manikrish commented Apr 17, 2012

Current code:

    // Long polling here?
    var message = this.client.Receive(TimeSpan.FromSeconds(10));

    if (message == null)
    {
        Thread.Sleep(100);
        continue;
    }
This is not good practice, per the AppFabric CAT team:

While waiting for new messages either on a Service Bus queue or subscription, your solution will often be issuing a polling request. Fortunately, the Service Bus offers a long-polling receive operation which maintains a connection to the server until a message arrives on a queue or the specified timeout period has elapsed, whichever occurs first. If a long-polling receive is performed synchronously, it will block the CLR thread pool thread while waiting for a new message, which is not considered optimal. The capacity of the CLR thread pool is generally limited; hence there is good reason to avoid using the thread pool for particularly long-running operations.

To build a truly effective messaging solution using the Service Bus brokered messaging API, you should always perform the receive operation asynchronously.
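
For reference, a minimal sketch of what the async overload could look like on .NET 4.0, assuming the SubscriptionClient APM pair (BeginReceive/EndReceive) from Microsoft.ServiceBus.Messaging; ReceiveLoop and ProcessMessage are illustrative names, not the project's actual members:

    // using Microsoft.ServiceBus.Messaging; using System; using System.Threading.Tasks;
    private void ReceiveLoop()
    {
        // Wrap the BeginReceive/EndReceive pair in a Task so the long poll
        // does not hold a CLR thread-pool thread while waiting.
        Task<BrokeredMessage>.Factory
            .FromAsync(this.client.BeginReceive, this.client.EndReceive,
                       TimeSpan.FromSeconds(10), /* state */ null)
            .ContinueWith(t =>
            {
                // Error handling omitted; Result is null when the wait timed out.
                var message = t.Result;
                if (message != null)
                {
                    this.ProcessMessage(message);
                    message.Complete();
                }

                // Issue the next long poll; no thread is blocked in between.
                this.ReceiveLoop();
            });
    }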

Contributor

kzu commented Apr 17, 2012

That code is already running on an async task. Pointless to double-async.
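
For context, this is roughly the shape of the pattern being discussed (an illustrative sketch, not the actual SubscriptionReceiver source): Start spins up one long-running task, and the blocking Receive happens on that task rather than on a caller's thread.

    // using System; using System.Threading; using System.Threading.Tasks;
    public void Start()
    {
        this.cancellationSource = new CancellationTokenSource();

        // The receive loop already runs on its own dedicated task.
        Task.Factory.StartNew(
            () => this.ReceiveMessages(this.cancellationSource.Token),
            this.cancellationSource.Token,
            TaskCreationOptions.LongRunning,
            TaskScheduler.Default);
    }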
Contributor

manikrish commented Apr 17, 2012

Closing

@manikrish manikrish closed this Apr 17, 2012

@manikrish manikrish reopened this Apr 24, 2012

Contributor

manikrish commented Apr 24, 2012

It's still sync processing of requests. Looking at the code, we start off a single task asynchronously when we call Start; this thread/async task loops, then receives and processes the messages one at a time. This is inefficient and still a sync receive where we receive only one message at a time.
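
One possible way to decouple receiving from processing, sketched with an illustrative ProcessMessage handler (not the project's actual code): hand each received message off to its own task so the loop can go straight back to receiving.

    var message = this.client.Receive(TimeSpan.FromSeconds(10));
    if (message != null)
    {
        // Dispatch processing so the loop is free to receive the next message.
        Task.Factory.StartNew(() =>
        {
            try
            {
                this.ProcessMessage(message);
                message.Complete();   // remove it from the subscription
            }
            catch (Exception)
            {
                message.Abandon();    // make it available again for retry
            }
        });
    }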

Contributor

kzu commented Apr 25, 2012

The process can scale out to as many worker roles as we want.

Otherwise, we risk receiving too much, enqueueing a ton of async work, and depleting the queue of work for other processes to pick up.


Contributor

manikrish commented May 1, 2012

I would want one worker role to process as many messages as possible, since I am paying for the roles, and autoscale only when the queue has too many messages.

Contributor

kzu commented May 1, 2012

OK. Grigori would have to make that call.

Taking the message out of the queue and placing it in a task queue/pool doesn't guarantee that you will process it faster than scaling out. You can't just grab everything and queue it in memory in the task/thread pool indefinitely.


Contributor

manikrish commented May 1, 2012

To clarify my earlier comments and this bug: for a real app, we would either need to test out both approaches and compare CPU usage against the number of messages processed (or some similar metric), or we would have to just follow the best practice per the MSDN docs and the CAT team, which seems to be to use async. I don't mean that we should be using autoscaling, etc.

Contributor

kzu commented May 1, 2012

"Use async" alone doesn't work. You can receive a million messages and
"just use async" and all that would do is enqueue a million async tasks in
memory that may complete in a week at full CPU usage. At which point you
can't even scale out because the messages have already been consumed by the
"just async" enqueuing on that single ultra-eager process.

Batching would be absolutely essential if we went that route, at the very
minimum (I.e. keep a batch of max. 20 in-flight messages in memory in the
task queue).
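
A sketch of that kind of cap, assuming the same receive loop discussed above (the limit of 20 and the ProcessMessage handler are illustrative):

    // using System.Threading; using System.Threading.Tasks;
    private readonly SemaphoreSlim inFlight = new SemaphoreSlim(20);

    private void ReceiveMessages(CancellationToken token)
    {
        while (!token.IsCancellationRequested)
        {
            // Block the loop once 20 messages are already being processed.
            this.inFlight.Wait(token);

            var message = this.client.Receive(TimeSpan.FromSeconds(10));
            if (message == null)
            {
                this.inFlight.Release();
                continue;
            }

            Task.Factory.StartNew(() =>
            {
                try
                {
                    this.ProcessMessage(message);
                    message.Complete();
                }
                finally
                {
                    this.inFlight.Release();
                }
            });
        }
    }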


@manikrish manikrish closed this Jul 13, 2012
