
doesn't seem to properly receive acks / nacks for published messages #93

Closed
nergdron opened this issue Feb 7, 2014 · 7 comments


nergdron commented Feb 7, 2014

I just finished writing a small AMQP load-testing client using this library, designed to publish messages at a specified rate and size in order to simulate measured production workloads. However, I'm running into a strange problem: when I publish at a rate that exceeds my test VM's write capacity, after roughly 2000 published messages only ~800 show up in the queue, and I only get ~15 acks and 0 nacks back. I would expect to see ~800 acks and ~1200 nacks.

I can provide the code that's causing this problem; it should make the issue easy to reproduce.


nergdron commented Feb 7, 2014

Here's the code that demonstrates the issue:

http://pastebin.com/fiS0qHf0
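For reference, here is a minimal sketch of the pattern I'm describing (not the pastebin code itself), assuming a local broker at amqp://guest:guest@localhost:5672/ and an existing queue named loadtest: the channel is put into confirm mode, each publish is scheduled in its own goroutine, and a single goroutine counts delivery tags off the ack/nack channels registered with NotifyConfirm.

package main

import (
	"log"
	"time"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Put the channel into confirm mode so the broker acks or nacks every publish.
	if err := ch.Confirm(false); err != nil {
		log.Fatal(err)
	}
	acks, nacks := ch.NotifyConfirm(make(chan uint64, 2000), make(chan uint64, 2000))

	// Single goroutine counting delivery tags from the ack/nack channels.
	go func() {
		acked, nacked := 0, 0
		for {
			select {
			case <-acks:
				acked++
			case <-nacks:
				nacked++
			}
			if (acked+nacked)%100 == 0 {
				log.Printf("acked=%d nacked=%d", acked, nacked)
			}
		}
	}()

	// One goroutine per publish -- the pattern described above.
	body := make([]byte, 1024)
	for i := 0; i < 2000; i++ {
		go func() {
			if err := ch.Publish("", "loadtest", false, false, amqp.Publishing{Body: body}); err != nil {
				log.Println("publish:", err)
			}
		}()
		time.Sleep(time.Millisecond) // crude stand-in for the configured publish rate
	}

	// Give outstanding confirmations time to drain before exiting.
	time.Sleep(10 * time.Second)
}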


nergdron commented Feb 7, 2014

Using the Java client tracer (https://www.rabbitmq.com/java-tools.html#tracer), I can see the acks being sent back, so they're just not making it to the ack channel:

1391814812725: ch#1 <- {#method<basic.ack>(delivery-tag=63, multiple=false), null, ""}
1391814812988: ch#1 <- {#method<basic.ack>(delivery-tag=64, multiple=false), null, ""}
1391814813454: ch#1 <- {#method<basic.ack>(delivery-tag=65, multiple=false), null, ""}
1391814813678: ch#1 <- {#method<basic.ack>(delivery-tag=66, multiple=false), null, ""}
1391814813886: ch#1 <- {#method<basic.ack>(delivery-tag=67, multiple=false), null, ""}
1391814814113: ch#1 <- {#method<basic.ack>(delivery-tag=68, multiple=false), null, ""}
1391814814315: ch#1 <- {#method<basic.ack>(delivery-tag=69, multiple=false), null, ""}


nergdron commented Feb 7, 2014

I thought this might be an issue with using the same channel from multiple goroutines, but the issue persists even with runtime.GOMAXPROCS(1).


rade commented Feb 8, 2014

You won't be seeing 1200 nacks, since those messages aren't lost; they are just buffered somewhere due to backpressure / flow control. Obviously, if you kill the connection there's a good chance some of those messages will be lost.

But yes, if the tracer shows acks (or nacks) coming back and they don't make it to the application then that's a bug in the client.


nergdron commented Feb 8, 2014

Yep, I'm seeing a few hundred acks back in the tracer, but fewer than 20 in the client. The run I'm talking about is in the 10-minute range, so I'd definitely expect even a VM to deliver more than twenty 1 kB messages in that time.

streadway (Owner) commented

At first glance, I thought the problem could be related to data races on your counters shared between goroutines. Run your program with go run -race yourprogram.go to find out where those races are.

I reproduced the stress test without races here: https://gist.github.com/streadway/8899006

When scheduling a goroutine per publish, regardless of whether the ack chan is buffered, I observed the rapid growth of outstanding publish acks you describe. When I do not schedule a goroutine per publish, I see a fairly constant number of outstanding acks per reporting interval.

In both cases, I observe that all acks eventually arrive in order, within 1 second of the last publish.

My hunch is that the scheduler is less likely to schedule the single reporting/ack-counting goroutine than one of the many waiting publish goroutines, so you end up buffering acks on the notify chan or the socket.

If you use a single goroutine for publishing, do you still observe the large number of unconsumed acks?
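Something like this minimal sketch is what I have in mind (assuming a local broker at amqp://guest:guest@localhost:5672/ and an existing loadtest queue): the main goroutine publishes sequentially while one dedicated goroutine drains the ack/nack channels, so the counter never competes with a crowd of blocked publish goroutines.

package main

import (
	"log"

	"github.com/streadway/amqp"
)

const n = 2000 // number of test publishes

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	if err := ch.Confirm(false); err != nil {
		log.Fatal(err)
	}
	acks, nacks := ch.NotifyConfirm(make(chan uint64, n), make(chan uint64, n))

	// Dedicated counting goroutine: read exactly n confirmations, then report.
	done := make(chan [2]int)
	go func() {
		acked, nacked := 0, 0
		for i := 0; i < n; i++ {
			select {
			case <-acks:
				acked++
			case <-nacks:
				nacked++
			}
		}
		done <- [2]int{acked, nacked}
	}()

	// A single publishing goroutine (the main one) -- no goroutine per publish.
	body := make([]byte, 1024)
	for i := 0; i < n; i++ {
		if err := ch.Publish("", "loadtest", false, false, amqp.Publishing{Body: body}); err != nil {
			log.Println("publish:", err)
		}
	}

	totals := <-done
	log.Printf("acked=%d nacked=%d", totals[0], totals[1])
}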

nergdron (Author) commented

Oh yeah, that was definitely the issue, my bad. Streamlining the code into a single publishing goroutine does the trick. Thanks!
