[stats] replicator scheduler crashed counter not incrementing #810
Comments
Previously an individual failed request would be retried 10 times in a row with an exponential backoff starting at 0.25 seconds, so the intervals in seconds would be `0.25, 0.5, 1, 2, 4, 8, 16, 32, 64, 128`, for a total of about 256 seconds (a bit over 4 minutes). This made sense before the scheduling replicator, because if a replication job had crashed in the startup phase enough times it would not be retried anymore. With a scheduling replicator, it makes more sense to stop the whole task and let the scheduling replicator retry later; `retries_per_request` then becomes something used mainly for short intermittent network issues. The new retry schedule is `0.25, 0.5, 1, 2, 4`, or about 8 seconds. An additional benefit: when the job is stopped sooner, the user can find out about the problem sooner from the `_scheduler/docs` and `_scheduler/jobs` status endpoints and can rectify it. Otherwise a single request retrying for 4 minutes would show up there as a healthy, running job. Fixes apache#810
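The totals for the two schedules can be checked with a quick sketch (plain Python, not CouchDB code; the 0.25 s base and doubling factor are taken from the description above):

```python
def backoff_intervals(retries, base=0.25):
    """Exponential backoff: the wait doubles on each retry."""
    return [base * 2 ** n for n in range(retries)]

old = backoff_intervals(10)  # 0.25 .. 128 seconds
new = backoff_intervals(5)   # 0.25 .. 4 seconds

print(sum(old))  # 255.75 -- about 4 minutes before the job finally stops
print(sum(new))  # 7.75   -- about 8 seconds
```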
Took a look at this one. Saw the same behavior. There could be 3 reasons for not noticing the
@nickva Thanks, this all makes sense. I'd like to not lose track of point 2 above. Could you file a new enhancement issue for this concept so we can add it to the backlog?
The new value is 5; it used to be 10. The lower value makes more sense with the new scheduling replicator behavior. Issue #810
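For reference, the setting in question is `retries_per_request`; assuming it lives in the replicator section of the server config (the section name here is my assumption, not quoted from this thread), the new default would correspond to something like:

```ini
[replicator]
; Lower default: give up on a flaky request quickly, fail the job,
; and let the scheduling replicator reschedule the whole task later.
retries_per_request = 5
```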
I set up a _replicator document with an incorrect URL in it, and let the replication scheduler attempt to access it a few times in a row.
Repeated fetches of `/_node/couchdb@localhost/_stats` showed that `couch_replicator.jobs.crashed.value` stayed at `0` and never incremented.

Steps to Reproduce (for bugs)
Create a `_replicator` document of this form, and let it cycle through a few `nxdomain` errors (as evidenced by `couch.log`).

Context
Trying to put together an open source CouchDB monitoring solution, and failing to get useful stats out of the new scheduler.
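For the monitoring use case, the counter in question sits a few levels deep in the `_stats` JSON. A minimal extraction sketch follows; the payload below is a hand-made example of the nesting shape described in this report, not real server output:

```python
import json

# Hand-made example of the /_node/<name>/_stats response shape;
# only the path couch_replicator -> jobs -> crashed -> value matters here.
stats = json.loads("""
{
  "couch_replicator": {
    "jobs": {
      "crashed": {"value": 0, "type": "counter"}
    }
  }
}
""")

crashed = stats["couch_replicator"]["jobs"]["crashed"]["value"]
print(crashed)  # 0 -- per this report, it stays at 0 even after job crashes
```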
Your Environment
n/a