
Exponential Message Delivery Backoff #2042

Closed
andreib1 opened this issue Mar 27, 2021 · 6 comments

andreib1 commented Mar 27, 2021

Feature Request

Use Case:

As a micro-service developer
I would like to be able to provide a back-off policy for JetStream message delivery
So that if a message is NAKed, and a prerequisite for processing is not available, it can back off gracefully before retrying.

Proposed Change:

  • Add BackOffPolicy to ConsumerConfig, with values including ExponentialBackoffPolicy.
  • Add BackOffMaximum to ConsumerConfig, holding the maximum back-off duration.
  • Prevent message re-delivery from JetStream to the consumer until at least the current back-off duration has elapsed (a hypothetical sketch follows the stretch goal below).

A precedent for this pattern can be found in this implementation of PubSub, and an exploration here.

A useful 'stretch goal' for this feature would be to include msg.Defer(d time.Duration), which would allow an application to NAK and manually defer message re-delivery. My assumption is that, for simplicity, this would be mutually exclusive with setting BackOffPolicy.
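
A rough illustration of how the proposed API might look from the Go client; BackOffPolicy, BackOffMaximum, and msg.Defer are the hypothetical additions described above and do not exist in nats.go today:

```go
// Hypothetical sketch of this proposal; none of these fields or methods
// exist in nats.go. Names and values are illustrative only.
cfg := &nats.ConsumerConfig{
	Durable:        "worker",
	AckPolicy:      nats.AckExplicitPolicy,
	BackOffPolicy:  nats.ExponentialBackoffPolicy, // proposed: how the re-delivery delay grows
	BackOffMaximum: 5 * time.Minute,               // proposed: cap on the back-off duration
}

// Proposed stretch goal: NAK and manually defer re-delivery in one call,
// mutually exclusive with BackOffPolicy.
msg.Defer(30 * time.Second)
```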

Who Benefits From The Change(s)?

Any JetStream consumer that uses message delivery as a mechanism to trigger the next stage of a workflow, where there is an external dependency that may not yet be met.

Any message receiver that may experience a transient failure such as a downstream service temporarily not being available.

Any administrator attempting to reduce network traffic resulting from NAKed messages.

Alternative Approaches

It is possible to engineer (at least in Go) a goroutine that loops for a duration, sending msg.InProgress at regular intervals; this hold collapses if the client application terminates. A poorer option would be to implement this functionality within the Go client as a standard function.
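
A minimal sketch of that workaround, assuming the nats.go JetStream API; the subject, dependency check, and timing values below are illustrative:

```go
package main

import (
	"time"

	"github.com/nats-io/nats.go"
)

// holdMessage extends the re-delivery deadline for up to maxHold by sending
// msg.InProgress at regular intervals. If the client terminates, the ticks
// stop and the server redelivers after AckWait, i.e. the hold collapses.
func holdMessage(msg *nats.Msg, interval, maxHold time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	deadline := time.After(maxHold)
	for {
		select {
		case <-ticker.C:
			if err := msg.InProgress(); err != nil {
				return
			}
		case <-deadline:
			return
		}
	}
}

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()
	js, _ := nc.JetStream()

	js.Subscribe("ORDERS.new", func(msg *nats.Msg) {
		if !prerequisiteReady() { // hypothetical external-dependency check
			holdMessage(msg, 2*time.Second, 30*time.Second)
			msg.Nak() // then hand the message back for retry
			return
		}
		msg.Ack()
	}, nats.ManualAck(), nats.AckWait(10*time.Second))

	select {} // block so the async subscription keeps running
}

func prerequisiteReady() bool { return false }
```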

@ripienaar (Contributor)

I think this is a good idea; definitely something to look at adding in time.

@andreib1 (Author)

@ripienaar @derekcollison As there is a bit of interest and momentum behind exponential back-off / embargoed messages, is it worth revisiting this? If it is already on the roadmap, an update on how far into the future it is planned would be much appreciated. In the meantime, I created a temporary solution for a customer using Redis to hold messages that are embargoed or backing off; they are then pulled according to their key. Perhaps K/V could be used in the same way under the hood. I fully understand that these features could be quite a bit of work, and I don't want to seem pushy or ungrateful.

@derekcollison (Member)

We have plans for exponential back-off on re-deliveries.

No definitive plans for embargoes at the moment.

@andreib1 (Author)

That's brilliant news, thank you :)

@kozlovic (Member)

This was addressed in v2.7.1 (https://github.com/nats-io/nats-server/releases/tag/v2.7.1):

* Added
  - JetStream:
    - Support for a delay when Nak'ing a message (https://github.com/nats-io/nats-server/pull/2812)
    - Support for a backoff list of times in the consumer configuration (https://github.com/nats-io/nats-server/pull/2812)
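
A brief sketch of both features as exposed by the nats.go client; the stream, subject, durable name, and timing values are illustrative, and the server expects MaxDeliver to be larger than the number of BackOff entries:

```go
package main

import (
	"fmt"
	"time"

	"github.com/nats-io/nats.go"
)

func main() {
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		panic(err)
	}
	defer nc.Drain()
	js, _ := nc.JetStream()

	// Backoff list: the n-th re-delivery waits BackOff[n-1]; the last value
	// is reused once the list is exhausted.
	_, err = js.AddConsumer("ORDERS", &nats.ConsumerConfig{
		Durable:       "worker",
		AckPolicy:     nats.AckExplicitPolicy,
		FilterSubject: "ORDERS.new",
		MaxDeliver:    5,
		BackOff:       []time.Duration{time.Second, 5 * time.Second, 30 * time.Second},
	})
	if err != nil {
		panic(err)
	}

	// Per-message delayed NAK, independent of the consumer's backoff list.
	sub, _ := js.PullSubscribe("ORDERS.new", "worker")
	msgs, err := sub.Fetch(1)
	if err == nil {
		for _, msg := range msgs {
			// Ask the server not to redeliver for at least 10s.
			if err := msg.NakWithDelay(10 * time.Second); err != nil {
				fmt.Println("nak:", err)
			}
		}
	}
}
```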

@8th-block commented Feb 26, 2024

How does this work when the backoff list is set in the consumer config? nak() seems to ignore that property, and NAK-with-delay requires specifying the delay manually.
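
One client-side pattern, sketched below on the assumption that a plain nak() requests immediate re-delivery (as observed above) while the BackOff list governs AckWait-based re-deliveries, is to derive the delay from the delivery count and pass it to NakWithDelay explicitly:

```go
// Illustrative helper package, assuming the nats.go API.
package jsretry

import (
	"time"

	"github.com/nats-io/nats.go"
)

// NakWithBackoff NAKs msg with an exponential delay derived from the
// delivery count: base * 2^(NumDelivered-1), capped at max.
func NakWithBackoff(msg *nats.Msg, base, max time.Duration) error {
	meta, err := msg.Metadata()
	if err != nil {
		return err
	}
	delay := base << (meta.NumDelivered - 1) // 1x, 2x, 4x, ...
	if delay <= 0 || delay > max {           // overflow or past the cap
		delay = max
	}
	return msg.NakWithDelay(delay)
}
```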
