Ability to ACK messages downstream from Source #13
Comments
Hi! I would also like this feature. However, I have no idea how to implement it, as the acknowledgment is also linked to the back pressure of the stream.
Or maybe I am not understanding something.
Shouldn't the message stay in the queue due to the back pressure of the flow? Thanks in advance for your help.
ACK before delivery was introduced in this commit: fdba020. It looks like this can be easily changed. The issue is the point at which the message is acknowledged: it currently happens before the message is handed to the subscriber. I'm open to discussion and other solutions.
@vrabeux wrote:

Under the current implementation, only 2 messages were acknowledged to the broker. The consumer is allowed to prefetch messages (without acknowledging them) to improve throughput; the current implementation sets prefetch to 20. Once the subscriber dies, the subscription is considered canceled, and the remaining prefetched messages are returned to the broker.

That is the current behaviour.
@vrabeux Indeed, there was a bug in the code that handles exceptions thrown by the subscriber. Generally speaking, the Reactive Streams specification covers this case in Rule 2.13.
@mkiedys Thanks for the reply. I was more talking about an app crash or instance crash. All prefetched messages should be re-queued. However, since they are acked before the work is completed, they are not re-queued. Anyway, that may be a particular use case.

I agree that a prefetch of around 20 works in most cases. However, if we have a very slow service (let's say several seconds, or worse, minutes), a prefetch number that is too big can cause problems: a new worker would not be able to get any work if most of it is prefetched by another instance. On the contrary, if the work is very fast, then 20 may even be a bit small. Prefetch numbers, distributed workloads, and streams are all really connected together, and it really seems that akka-streams can be a powerful tool to implement this.

For now, I implemented my own "Source" that corresponds to my use case, and I use reactive-rabbit later in the flow. It works so far, but it is way too "naive" to really use.
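The prefetch trade-off described above can be sketched with a toy simulation (plain Java, no real AMQP involved; the two-consumer model and the method names are illustrative assumptions, not broker behaviour): a consumer keeps receiving messages only while its unacknowledged window is below its prefetch limit, so a slow consumer with a large prefetch can hoard all the work.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Toy model (not real AMQP): the broker hands a message to the first consumer
// whose unacknowledged-message window is below its prefetch limit.
public class PrefetchDemo {

    /** Distribute `total` messages between two consumers and return how many
     *  consumer A receives before either consumer acknowledges anything. */
    public static int grabbedByFirstConsumer(int total, int prefetchA, int prefetchB) {
        int unackedA = 0, unackedB = 0, receivedA = 0;
        Queue<Integer> queue = new ArrayDeque<>();
        for (int i = 0; i < total; i++) queue.add(i);

        while (!queue.isEmpty()) {
            if (unackedA < prefetchA) { queue.poll(); unackedA++; receivedA++; }
            else if (unackedB < prefetchB) { queue.poll(); unackedB++; }
            else break; // both windows full: broker stops delivering
        }
        return receivedA;
    }

    public static void main(String[] args) {
        // With prefetch 20, one slow consumer can hoard the whole queue...
        System.out.println(grabbedByFirstConsumer(20, 20, 20)); // 20
        // ...while prefetch 1 limits each consumer to one in-flight message.
        System.out.println(grabbedByFirstConsumer(20, 1, 1));   // 1
    }
}
```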
The action of ACKing must be exposed to end-users of the library, since only the user of the stream knows exactly when an element should be considered handled. Perhaps it would be possible to offer an API for this.
@ktoso Yes, that is why I decided to implement my own source: to be able to ack the message down the stream when I want to. The way I do it is, however, very "ugly" and "naive", since I need to wrap the rabbit Channel and delivery tag within the object I pass down the stream. I don't even handle whether the connection is closed, whether the channel is still alive, etc. After all this, I am wondering if akka-stream is even the correct way to go for my need.
@vrabeux wrote:

All messages are returned to the broker once the subscription is canceled. Messages are always acknowledged one-by-one after delivery to the stream. A message that sits in the buffer is not acknowledged.

If there are multiple consumers attached to the queue, then the broker tries its best to share messages between them, also taking into consideration whether a particular consumer is slow. So that should not be an issue. Have you tried coding an example like this for your case and checking whether one or all consumers get messages evenly?
@ktoso wrote:

The usual workflow is: receive a delivery from the broker, process it, then acknowledge it.

Prefetch stands for how many unacknowledged messages can be delivered to the consumer before the broker stops delivering. It's essentially the base on which the reactive component is built. I agree that it would be nice to have an option to ACK a message deeper in the processing pipeline. My original plan for this scenario was to provide a pair of streams: one that emits deliveries and one that accepts confirmations. It's essentially a `Processor[Confirm, Delivery]`.
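As a rough illustration of that delivery/confirm loop, here is an in-memory sketch (all names here — `ConfirmProcessor`, `deliver`, `ack`, `requeue` — are illustrative assumptions, not the library's API): deliveries go out and stay unacknowledged until a confirm comes back; a requeue puts the message back on the queue.

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;

// Illustrative sketch (not the library's real API): a processor that emits
// deliveries and consumes confirms. Ack removes a message for good; Requeue
// puts it back at the end of the queue.
public class ConfirmProcessor {
    private final Queue<String> queue = new ArrayDeque<>();
    private final Map<Long, String> unacked = new HashMap<>();
    private long nextTag = 1;

    public void publish(String body) { queue.add(body); }

    /** Downstream pulls a delivery; it stays "unacked" until confirmed. */
    public long deliver() {
        String body = queue.remove();
        long tag = nextTag++;
        unacked.put(tag, body);
        return tag;
    }

    public void ack(long tag)     { unacked.remove(tag); }
    public void requeue(long tag) { queue.add(unacked.remove(tag)); }

    /** Messages the broker still knows about (queued or awaiting a confirm). */
    public int pending() { return queue.size() + unacked.size(); }

    public static void main(String[] args) {
        ConfirmProcessor p = new ConfirmProcessor();
        p.publish("invoice-1");
        p.publish("invoice-2");
        long t1 = p.deliver();
        long t2 = p.deliver();
        p.ack(t1);      // invoice-1 is done
        p.requeue(t2);  // invoice-2 goes back to the queue
        System.out.println(p.pending()); // 1
    }
}
```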
@mkiedys Sorry, maybe I am doing something wrong then, or I may just not understand how it works. If we look at QueueSubscription.scala#L40:

```scala
if (channel.isOpen()) {
  channel.basicAck(delivery.deliveryTag.underlying, false)
  subscriber.onNext(delivery)
}
```

This gave me the impression that the message is acknowledged before it is delivered to the subscriber. I'll try to get a small sample working this weekend.
The message is delivered to the consumer after it has been acknowledged to the broker, one by one. An improvement can be made to acknowledge after delivery, not before.
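The difference between the two orderings can be shown with a toy model (plain Java; `survivors` and the single-message setup are illustrative, not the library's code): a crash inside the subscriber loses the message only when the ack came first.

```java
import java.util.ArrayDeque;
import java.util.Queue;
import java.util.function.Consumer;

// Toy model of the ordering discussed above: if the channel acks before
// onNext and the subscriber crashes, the broker has already forgotten the
// message; acking after delivery lets the broker redeliver it.
public class AckOrderDemo {

    /** Returns how many messages the broker still holds after the subscriber
     *  throws on the first message it sees. */
    public static int survivors(boolean ackBeforeDelivery) {
        Queue<String> broker = new ArrayDeque<>();
        broker.add("msg");
        Consumer<String> subscriber = m -> { throw new RuntimeException("crash"); };
        String m = broker.peek();
        try {
            if (ackBeforeDelivery) { broker.remove(); subscriber.accept(m); }
            else                   { subscriber.accept(m); broker.remove(); }
        } catch (RuntimeException crash) {
            // subscriber died; whether the message survives depends on ordering
        }
        return broker.size();
    }

    public static void main(String[] args) {
        System.out.println(survivors(true));  // 0: message lost
        System.out.println(survivors(false)); // 1: broker can redeliver
    }
}
```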
OK, I am confused here. Didn't you say that all messages are returned to the broker once the subscription is canceled? Is it:

1. messages are acknowledged before delivery, and therefore lost if the consumer crashes, or
2. unacknowledged messages are returned to the broker when the subscription is canceled?

If it is 2, then I definitely have an issue on my side. If it is 1, then the flow I described does not work in my use case, as messages are not re-queued.
Usage example for a `Processor`:

```scala
val connection = Connection()
val processor: Processor[Confirm, Delivery] = connection.processor(queue = "invoices")

Source(processor)
  .map(delivery ⇒ // decide whether to acknowledge or requeue
    if (Random.nextBoolean())
      Ack(delivery.deliveryTag)
    else
      Requeue(delivery.deliveryTag))
  .to(Sink(processor)) // tell the broker
```
@vrabeux wrote:

It might be that Akka Streams does additional buffering. In this case, more than one message might get lost.
@mkiedys OK. I'll really need to experiment a bit more and try to understand all of this more deeply. The flow I have in mind is something like:

```scala
Source(inputQueue).map().map(ACK).map(...).to(aCompletelyDifferentOtherQueue)
```

Anyway, thank you so much for your help and the talk.
You would have to split your stream into two: one going back to the broker with the acknowledgments, and one publishing to the exchange*.

@ktoso Is there a better solution?

*) In AMQP, messages are never published to queues directly. You always publish them via the exchange.
Yes, makes sense. Thanks.
+1 I would really like to see this feature |
I recommend reading about Akka Streams buffering: http://doc.akka.io/docs/akka-stream-and-http-experimental/1.0-RC4/scala/stream-rate.html

If you depend on the driver to send acknowledgments one-by-one, you may want to reduce the buffer size from the default 16 to 1:

```scala
.withAttributes(Attributes.inputBuffer(initial = 1, max = 1))
```

Sadly, the smallest buffer is 1 message.
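A rough model of why the buffer size matters here (an illustrative stdlib-only sketch; `lostOnCrash` is an assumed name, and real Akka Streams buffering is more involved): if the driver acks each message as it enters a downstream buffer, everything still sitting in that buffer when the process dies is acked but never processed.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough model of the buffering concern above: the driver acks each message
// as it pushes it into a downstream buffer, so anything sitting in the
// buffer at crash time is acknowledged but never processed.
public class BufferLossDemo {

    /** Fill a stage buffer of the given size, acking on entry, then "crash"
     *  before the consumer drains anything. Returns messages lost. */
    public static int lostOnCrash(int available, int bufferSize) {
        Queue<String> buffer = new ArrayDeque<>();
        int acked = 0;
        for (int i = 0; i < available && buffer.size() < bufferSize; i++) {
            buffer.add("msg-" + i); // delivered downstream...
            acked++;                // ...and acked immediately
        }
        // Crash here: every buffered-but-unprocessed message is gone.
        return acked;
    }

    public static void main(String[] args) {
        System.out.println(lostOnCrash(100, 16)); // default buffer: up to 16 lost
        System.out.println(lostOnCrash(100, 1));  // inputBuffer(1, 1): at most 1
    }
}
```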
For what it's worth, I ended up creating my own Reactive Streams implementation that is in pure Java and does not auto-ACK (and because it's in Java, it is consequently far less code than this project, as I just used the RabbitMQ driver directly).

```java
public interface DeliveryAckSubscriber extends Subscriber<DeliveryResult> {
    Delivery getDelivery();
}

public interface ReactiveAMQPOperations {
    // rpc is for direct-reply
    Publisher<Delivery> rpc(final Message m);
    Publisher<DeliveryAckSubscriber> toPublisher(String queue, int prefetchCount);
}

public class DeliveryResult {
    private final DeliveryResponse response; // for rpc
    private final Throwable error;           // if null, no error
    private final boolean requeue;
    // constructor and getters snipped for brevity.
}
```

Basically, the idea is you get a `DeliveryAckSubscriber` per message: you call `getDelivery()` to do your work, then push a `DeliveryResult` to it to acknowledge, requeue, or report an error.

If folks are interested, I was going to open source it. I also have a pure RxJava implementation that is easier to work with as well.
Thanks for the inspiration, Adam. I recommend running tests from the Reactive Streams TCK.
It will take me a little time to get it open-source ready, as it's in our proprietary repository. That being said, it is unfortunately Java, so I'm not sure if it will help, but it might.

As for the TCK, we do pass it (well, the required tests), and my secret was to shamefully steal parts of the RxJava SerializedObserver. This is because the RabbitMQ Java client can invoke consumer callbacks from several of its own threads. Other implementations will often use a queue (blocking or non-blocking) to keep the RabbitMQ threads from overlapping and/or blocking up, at the cost of a context switch.

Thus, for both of our implementations, you really have to not block, or you have to dispatch downstream on a separate thread (in RxJava this would be observeOn), or else you might block up too many RabbitMQ threads (the ExecutorService that is passed into the connection).
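The serialization idea can be sketched as follows (a minimal stdlib-only version using a plain lock; the real SerializedObserver avoids blocking with a queue-drain loop, and all names here are illustrative): concurrent callers are forced through one critical section, so `onNext` signals never overlap and plain non-atomic state stays consistent.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the serialization trick described above: callbacks may
// arrive from several "RabbitMQ" threads, so signal delivery is wrapped in a
// lock to guarantee that onNext calls never overlap.
public class SerializedSubscriber {
    private final Object lock = new Object();
    private int seen = 0;          // deliberately a plain int, not atomic
    private boolean inOnNext = false;

    /** Serialized onNext: the lock guarantees calls never overlap. */
    public void onNext(String message) {
        synchronized (lock) {
            if (inOnNext) throw new IllegalStateException("overlapping onNext");
            inOnNext = true;
            seen++;                // downstream processing would happen here
            inOnNext = false;
        }
    }

    /** Hammer one subscriber from several threads; return total seen. */
    public static int runDemo(int nThreads, int perThread) {
        SerializedSubscriber sub = new SerializedSubscriber();
        List<Thread> threads = new ArrayList<>();
        for (int t = 0; t < nThreads; t++) {
            Thread th = new Thread(() -> {
                for (int i = 0; i < perThread; i++) sub.onNext("m");
            });
            threads.add(th);
            th.start();
        }
        for (Thread th : threads) {
            try { th.join(); } catch (InterruptedException ignored) { }
        }
        return sub.seen;
    }

    public static void main(String[] args) {
        // No lost updates and no IllegalStateException: calls were serialized.
        System.out.println(runDemo(8, 1000));
    }
}
```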
@agentgt Please let us know when the time comes; a Reactive Streams Java version would be nice.
Is this enhancement available? |
Currently, messages seem to be auto-ACKed after they are pulled off the queue and submitted downstream (https://github.com/ScalaConsultants/reactive-rabbit/blob/master/src/main/scala/io/scalac/amqp/impl/QueueSubscription.scala#L39).
Have you given any thought to incorporating a way to ACK messages after they've been fully processed by the stream? If it's something you think would be worthwhile and you have some ideas on how to implement, I can take a crack at a PR.