doesn't seem to properly receive acks / nacks for published messages #93
Here's the code which demonstrates this issue:
Using the Java client tracer (https://www.rabbitmq.com/java-tools.html#tracer), I can see the acks being sent back, so they're just not making it to the ack channel:

1391814812725: ch#1 <- {#method<basic.ack>(delivery-tag=63, multiple=false), null, ""}
I thought this might be an issue with multithreading and using the same channel with multiple goroutines, but when I set runtime.GOMAXPROCS(1), the issue persists.
You won't be seeing 1200 nacks since those messages aren't lost; they are just buffered somewhere due to backpressure / flow control. Obviously if you kill the connection then there's a good chance some of those messages will be lost. But yes, if the tracer shows acks (or nacks) coming back and they don't make it to the application then that's a bug in the client.
Yep, I'm seeing a few hundred acks back in the tracer, but < 20 in the client. The run I'm talking about is in the 10-minute range, so I'd definitely expect even a VM to be able to deliver more than 20 1kB messages in that time.
At first glance, I thought the problem could be related to data races around your counters between goroutines. I reproduced the stress test without races here: https://gist.github.com/streadway/8899006

When scheduling a goroutine per publish, regardless of using a buffered ack chan, I observed the rapid growth of outstanding publish acks like you describe. When I do not schedule a goroutine per publish, I see a fairly constant number of outstanding acks per reporting interval. In both cases, I observe that all acks eventually arrive in order within 1 second of the last publish.

My hunch is that the scheduler has a lower probability of scheduling the single reporting/ack-counting goroutine than one of the many waiting publish goroutines, so you end up buffering acks on the notify chan or the socket. If you use a single goroutine for publishings, do you still observe the large number of unconsumed acks?
Oh yeah, that was definitely the issue, my bad. Streamlining the code into single goroutines does the trick, thanks.
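For readers hitting the same symptom: the fix is to keep publishing and confirmation-draining in one goroutine rather than spawning a goroutine per publish. A minimal confirm-mode sketch against this library (the broker URL, routing key, and message count are placeholders, not the reporter's actual code, and it assumes a reachable RabbitMQ):

```go
package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}

	// Put the channel into confirm mode and register the ack/nack
	// listener *before* publishing anything.
	if err := ch.Confirm(false); err != nil {
		log.Fatal(err)
	}
	confirms := ch.NotifyPublish(make(chan amqp.Confirmation, 1))

	for i := 0; i < 2000; i++ {
		if err := ch.Publish("", "load-test", false, false, amqp.Publishing{
			Body: make([]byte, 1024), // ~1kB payload, as in the report
		}); err != nil {
			log.Fatal(err)
		}
		// Drain the confirmation in the same goroutine so the
		// notify chan never falls behind the publishes.
		c := <-confirms
		log.Printf("tag=%d ack=%v", c.DeliveryTag, c.Ack)
	}
}
```

Draining one confirmation per publish is the simplest correct pattern; pipelining more publishes per drain is possible, but the key point from this thread is that the reader of the NotifyPublish channel must not be starved by a crowd of publishing goroutines.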
I just finished writing a little AMQP load testing client using this library, designed to publish messages at a specified rate and size so as to simulate measured production workloads. However, I'm running into a weird problem where I publish at a rate that exceeds my test VM's write capacity, and after ~2000 messages, only ~800 show up in the queue, and I only get ~15 acks and 0 nacks back. I would expect to see ~800 acks and ~1200 nacks.
I can provide the code that's causing this problem; it should be easy to reproduce the issue.