
Implement client event bus to expose the internal events #316

Closed
Gsantomaggio opened this issue Sep 26, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@Gsantomaggio
Member

Is your feature request related to a problem? Please describe.

The client performs many operations automatically, and the user is not aware of what is happening.
We need to give the user more feedback.

Describe the solution you'd like

The idea is to implement an event bus to expose the internal events.
The client makes some decisions based on what we think is best; for example, in the Producer and Consumer classes the auto-reconnect is more or less automatic, and the user is not aware of what is happening, see #314.

The client also decides what to do in case of an error, for example when a CRC check fails while parsing a chunk.

We could expose events so that the user can react to them, e.g. stop the consumer after three reconnections.
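A rough sketch of what such a subscription could look like from the user's side. Everything here is hypothetical: `consumer.Events`, `ReconnectionEvent`, and its properties do not exist in the client today and only illustrate the proposal:

```csharp
// Hypothetical API sketch — none of these event names exist in the client yet.
var consumer = await Consumer.Create(new ConsumerConfig(streamSystem, "my-stream"));

consumer.Events.Reconnection += (sender, evt) =>
{
    Console.WriteLine($"Reconnection attempt {evt.Attempt} for stream {evt.Stream}");

    // Example policy from above: stop the consumer after three reconnections.
    if (evt.Attempt >= 3)
    {
        evt.CloseConsumer = true;
    }
};
```

The key design point is that the policy (how many retries, whether to stop) moves from the client library into user code.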

cc @ricardSiliuk @ricsiLT @jonnepmyra @TroelsL

Describe alternatives you've considered

No response

Additional context

No response

@Gsantomaggio Gsantomaggio added the enhancement New feature or request label Sep 26, 2023
@jonnepmyra
Contributor

Thank you for taking the time to consider our diverse needs and use cases for the library 😊. I believe it's a good idea to allow customization of the client to cater to specific use case requirements.

For instance, in many of our use cases, message loss is simply not acceptable, and a failed CRC check should immediately halt the consumer, rather than proceeding with the next chunk as it currently does. However, this approach may not be suitable for all scenarios.

In the event of a 'FailedCrcCheckEvent,' here's what we'd probably want to do:

  1. Store the failed CRC event persistently, such as by incrementing the CRC failure count in Redis (e.g., IncrementCrcFailureInRedis(streamname, reference, (offset?), DateTime.UtcNow)).

  2. Disconnect the consumer.

  3. If the count of failed CRC checks for this stream/consumer/offset is less than 5 within the last hour, attempt to reconnect a new consumer at the last stored offset (before the CRC failed chunk). This assumes that the CRC failure was a transient issue, and hopefully, the chunk will pass the CRC check this time. If retries exceed 5, we will raise an alarm for our operations team.
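The three steps above could be sketched as a single handler. This is only an illustration of the requested workflow: `FailedCrcCheckEvent`, `IncrementCrcFailureInRedis`, `RecreateConsumerAt`, and `RaiseOperationsAlarm` are all placeholders, not existing client or Redis APIs:

```csharp
// Hypothetical handler sketch — all identifiers below are placeholders.
async Task OnCrcCheckFailed(FailedCrcCheckEvent evt)
{
    // 1. Persist the failure, e.g. a counter in Redis keyed by stream/reference.
    var failuresInLastHour = await IncrementCrcFailureInRedis(
        evt.Stream, evt.Reference, evt.Offset, DateTime.UtcNow);

    // 2. Disconnect the consumer so the faulty chunk is not silently skipped.
    await evt.Consumer.Close();

    // 3. Retry from the last stored offset, assuming a transient failure;
    //    after 5 failures within the hour, alert the operations team instead.
    if (failuresInLastHour < 5)
    {
        await RecreateConsumerAt(evt.Stream, evt.Reference, lastStoredOffset: evt.Offset);
    }
    else
    {
        await RaiseOperationsAlarm(evt.Stream, evt.Reference);
    }
}
```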

Just want to make sure that the event bus waits for the event to be handled by our custom event-handler implementation before proceeding to the next chunk. It's crucial that the consumer stops when the event is raised, preventing it from processing the next chunk (which leads to message loss).
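One way to get those blocking semantics is for the bus to store handlers as `Func<TEvent, Task>` and await them before dispatching the next chunk. A minimal sketch, assuming a hypothetical `EventBus` type inside the client (not the actual implementation):

```csharp
// Hypothetical sketch of an event bus that awaits handlers before continuing.
public record FailedCrcCheckEvent(string Stream, string Reference, ulong Offset);

public class EventBus
{
    private readonly List<Func<FailedCrcCheckEvent, Task>> _crcHandlers = new();

    public void OnCrcCheckFailed(Func<FailedCrcCheckEvent, Task> handler) =>
        _crcHandlers.Add(handler);

    // Called by the consumer's chunk loop *before* parsing the next chunk:
    internal async Task PublishCrcCheckFailedAsync(FailedCrcCheckEvent evt)
    {
        foreach (var handler in _crcHandlers)
        {
            // The consumer does not proceed until every handler completes,
            // so a handler that closes the consumer prevents message loss.
            await handler(evt);
        }
    }
}
```

Fire-and-forget dispatch (e.g. a plain `event` raised without awaiting) would not satisfy this requirement, since the next chunk could be processed while the handler is still running.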

@Gsantomaggio
Member Author

Closed in favour of #336
