feat(new sink): Initial rabbitmq sink implementation #1376
Signed-off-by: AlyHKafoury <aly.kafoury@gmail.com>
Signed-off-by: Ashley Jeffs <ash@jeffail.uk>
Nice! We definitely need to resolve both of those issues. @LucioFranco do you mind chiming in on the best way to do that? I feel like you have the best understanding of the underlying networking code.
This should currently be the default for all of our HTTP sinks.
fn new(config: RabbitMQSinkConfig, acker: Acker) -> crate::Result<Self> {
    let channel = Client::connect(&config.uri, config.connection_properties())
        .and_then(|client| client.create_channel())
        .wait()?;
It seems surprising that this works; I would assume it needs to do some IO?
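For context, a minimal illustration under the assumption that this is the futures 0.1 `Future::wait`: `wait()` blocks the calling thread and polls the future to completion there, which may explain why a synchronous constructor can appear to work even though IO is involved (whether the connect future can make progress without an external executor is a library-specific detail worth verifying).

```rust
// Minimal futures 0.1 sketch: `wait()` blocks the current thread until the
// future resolves. IO-backed futures can therefore "work" in synchronous
// code, at the cost of stalling the calling thread while the IO is in flight.
extern crate futures; // futures = "0.1"
use futures::Future;

fn main() {
    // Stand-in for an IO-backed future such as the connect call above.
    let fut = futures::future::ok::<u32, ()>(42);
    let value = fut.wait().expect("future resolved");
    assert_eq!(value, 42);
}
```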
Ok(Async::Ready(Some(((), seqno)))) => {
    if self.pending_acks.remove(&seqno) {
        self.acker.ack(1);
        trace!("published message to rabbitmq");
It might make sense to add the seqno to these logs; you can use a span to do that.
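A hedged sketch of what that could look like with the tracing crate (the span name and subscriber setup here are illustrative, not taken from the PR):

```rust
// Sketch: record the seqno on a span so every log event emitted inside it
// carries the sequence number automatically.
use tracing::{trace, trace_span};

fn log_publish(seqno: u64) {
    let span = trace_span!("rabbitmq_publish", seqno = seqno);
    let _enter = span.enter();
    // Subscribers that record span context (e.g. tracing_subscriber's fmt
    // layer) will attach the `seqno` field to this event.
    trace!("published message to rabbitmq");
}

fn main() {
    tracing_subscriber::fmt().with_max_level(tracing::Level::TRACE).init();
    log_publish(42);
}
```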
Intention: This is very relevant to any streaming type of sink we might add in the future, so we ought to work out a solution here that we're happy to reuse. The correct behavior on connection loss is to attempt to re-establish it indefinitely, while making sure we don't block shutdown or any other mechanism. For failed message sends, at a minimum we need to reattempt the message indefinitely (assuming the failure is temporary), and it makes sense to do so within the same mechanism as the connection recovery.
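A minimal, hypothetical sketch of that shape (not the PR's code; `connect` and `publish` are stand-ins for whatever client ends up being used): retry indefinitely with capped backoff, but check a shutdown flag on every attempt so the loop can never hold up shutdown.

```rust
// Hypothetical sketch: retry connect + publish indefinitely with capped
// backoff, consulting a shutdown flag on every iteration.
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::Duration;

struct Connection;

fn connect() -> Result<Connection, String> {
    // Stand-in: a real implementation would dial the broker here.
    Ok(Connection)
}

fn publish(_conn: &Connection, _payload: &[u8]) -> Result<(), String> {
    // Stand-in: a real implementation would send and wait for a confirm.
    Ok(())
}

fn send_with_retry(payload: &[u8], shutdown: &AtomicBool) -> Result<(), String> {
    let mut backoff = Duration::from_millis(100);
    loop {
        if shutdown.load(Ordering::Relaxed) {
            return Err("shutting down before the event was delivered".into());
        }
        match connect().and_then(|conn| publish(&conn, payload)) {
            Ok(()) => return Ok(()),
            Err(err) => {
                eprintln!("publish failed, will retry: {}", err);
                thread::sleep(backoff);
                backoff = (backoff * 2).min(Duration::from_secs(30));
            }
        }
    }
}

fn main() {
    let shutdown = AtomicBool::new(false);
    send_with_retry(b"event", &shutdown).expect("delivered");
}
```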
Ok, I took a long, deep look at the […]. That said, we should move forward with the current library, but we should change how we implement the sink. I suggest that we drop using the […]. So what I suggest is this:
This is just a high-level view of how I would go about implementing reconnect. Of course, we could continue with the current implementation, but we would repeat a lot of code that we have already implemented and tested.
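A generic illustration of that reuse idea (all names are invented here, since the concrete steps are missing from the extracted comment above): hide reconnection behind one abstraction so the sink asks for a healthy channel instead of carrying its own retry loop.

```rust
// Generic sketch with invented names: a single reusable reconnection
// abstraction that owns the retry/backoff policy, so sinks never duplicate it.
struct Channel {
    id: u32,
}

trait ChannelProvider {
    /// Return a healthy channel, re-establishing the connection if the last
    /// one was lost.
    fn channel(&mut self) -> Channel;
}

struct Reconnector {
    attempts: u32,
}

impl ChannelProvider for Reconnector {
    fn channel(&mut self) -> Channel {
        // A real implementation would dial the broker here, retrying with
        // backoff until it succeeds.
        self.attempts += 1;
        Channel { id: self.attempts }
    }
}

fn main() {
    let mut provider = Reconnector { attempts: 0 };
    let chan = provider.channel();
    println!("publishing on channel {}", chan.id);
}
```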
Nothing, this has been assigned to @LucioFranco to complete, although it is not high priority at the moment. Once @LucioFranco is done with the HTTP sink work, it is worth considering this next.
Closing this for now since there are some substantial changes that we need to make to get this merged. We'll reopen when we get more user demand.
Supersedes #1078
I've refactored the config fields, but there are two remaining problems I think we ought to discuss:
If the connection is lost, the library we're using doesn't handle background reconnects the way librdkafka does. This means we need to implement our own mechanism; otherwise users will be forced to restart the service.
When it's known at poll time that an event has failed to send, we need to ensure that it is reattempted indefinitely. This ties into #1107 (Implement end-to-end record acknowledgement); a rough sketch of the bookkeeping this implies follows below.
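A minimal, hypothetical sketch of that bookkeeping (invented names, not the PR's implementation): track every in-flight publish by its sequence number so a broker confirm can release it and a failure can queue it for another attempt.

```rust
// Sketch: keep in-flight publishes keyed by seqno; confirms remove them,
// failures move the payload onto a retry queue for reattempt.
use std::collections::{HashSet, VecDeque};

struct PendingPublishes {
    pending_acks: HashSet<u64>,
    retry_queue: VecDeque<(u64, Vec<u8>)>,
}

impl PendingPublishes {
    fn new() -> Self {
        Self { pending_acks: HashSet::new(), retry_queue: VecDeque::new() }
    }

    /// Record a publish that is waiting on a broker confirm.
    fn publish_started(&mut self, seqno: u64) {
        self.pending_acks.insert(seqno);
    }

    /// The broker confirmed delivery: the event can be acked downstream.
    fn confirmed(&mut self, seqno: u64) -> bool {
        self.pending_acks.remove(&seqno)
    }

    /// Delivery failed: keep the payload so it is reattempted later.
    fn failed(&mut self, seqno: u64, payload: Vec<u8>) {
        self.pending_acks.remove(&seqno);
        self.retry_queue.push_back((seqno, payload));
    }
}

fn main() {
    let mut state = PendingPublishes::new();
    state.publish_started(1);
    assert!(state.confirmed(1));
    state.publish_started(2);
    state.failed(2, b"event".to_vec());
    assert_eq!(state.retry_queue.len(), 1);
}
```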