
Implement exponential backoff for the retry strategy #620

Merged
merged 5 commits into main from gh619-async-retry-perf on Aug 26, 2021

Conversation

brenuart
Collaborator

The initial retries start very fast (pauses of a few nanoseconds) and slow down progressively, up to the configured appendRetryTimeout.
In addition, retrying is limited to a single concurrent thread to preserve CPU in constrained environments.
These changes should give better throughput with acceptable latency when the queue is full.

Closes #619
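
A minimal sketch of the backoff loop described above (illustrative only, not the project's actual implementation; appendWithBackoff, tryAppend and the timing values are placeholders):

import java.time.Duration;
import java.util.concurrent.locks.LockSupport;

public class BackoffSketch {

    /** Stand-in for "try to publish the event into the ring buffer". */
    interface Attempt {
        boolean tryAppend();
    }

    /** Retry with pauses that start near zero and double on each failure, up to the timeout. */
    static boolean appendWithBackoff(Attempt attempt, Duration appendRetryTimeout) {
        long deadlineNanos = System.nanoTime() + appendRetryTimeout.toNanos();
        long pauseNanos = 1;                               // first retries are almost immediate
        while (!attempt.tryAppend()) {
            if (System.nanoTime() >= deadlineNanos) {
                return false;                              // gave up: buffer stayed full
            }
            LockSupport.parkNanos(pauseNanos);             // back off before the next attempt
            pauseNanos = Math.min(pauseNanos * 2, appendRetryTimeout.toNanos());
        }
        return true;
    }

    public static void main(String[] args) {
        int[] remainingFailures = {5};                     // simulate a buffer that frees up after a few attempts
        boolean ok = appendWithBackoff(() -> remainingFailures[0]-- <= 0, Duration.ofMillis(100));
        System.out.println("appended = " + ok);
    }
}

The single-retry-thread aspect mentioned in the description is not shown in this sketch.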

brenuart and others added 5 commits August 24, 2021 12:07
@brenuart brenuart merged commit 0e53df5 into main Aug 26, 2021
@brenuart brenuart deleted the gh619-async-retry-perf branch August 26, 2021 12:43
@philsttr philsttr added this to the 7.0 milestone Aug 28, 2021
@@ -259,13 +260,18 @@
  * Delay between consecutive attempts to append an event in the ring buffer when
  * full.
  */
- private Duration appendRetryFrequency = Duration.buildByMilliseconds(50);
+ private Duration appendRetryFrequency = Duration.buildByMilliseconds(5);
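
If the new 5 ms default is too aggressive for a particular deployment, the value can presumably be overridden on the appender. A hedged sketch, assuming a conventional setAppendRetryFrequency(Duration) setter and the LoggingEventAsyncDisruptorAppender class, neither of which appears in this diff:

import ch.qos.logback.core.util.Duration;
import net.logstash.logback.appender.LoggingEventAsyncDisruptorAppender;

public class RetryFrequencyConfig {
    public static void main(String[] args) {
        LoggingEventAsyncDisruptorAppender appender = new LoggingEventAsyncDisruptorAppender();
        // Assumed setter matching the private appendRetryFrequency field shown above;
        // this would restore the previous 50 ms default.
        appender.setAppendRetryFrequency(Duration.buildByMilliseconds(50));
    }
}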
Collaborator

5ms seems really fast.

What is the reasoning behind this value?

Development

Successfully merging this pull request may close these issues.

Low async throughput under heavy load when appender is configured to *not* drop events