This repository has been archived by the owner on Mar 31, 2021. It is now read-only.
My understanding after going through the logs and code:

The lookback range for the scheduler is controlled by `buckets.backlog.check.limit` (set to 30 for testing). Once the bucket loader is initialised and the specified number of buckets have been loaded, subsequent attempts are served from the cache (`fun getProcessableShardsForOrBefore`). Purging is based on the calculation `buckets.keys.sorted().take(buckets.size - maxBuckets)` (`fun ifPurgeNeeded()`).

Scenario 1: if a bucket contains no events, it is purged successfully and removed from the map.

Scenario 2: if BigBen is unable to process a bucket, that bucket remains in the map while new buckets keep being added, but it will only be scheduled based on the lookback range.

Now consider a case where, for some reason, buckets keep going into an error state: what is the maximum number of buckets the map can hold, and when will a bucket be removed from the map if it always fails?
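To make the candidate selection concrete, here is a minimal, self-contained sketch of the calculation quoted above. The `Snapshot` class and the `purgeCandidates` helper are simplified stand-ins for illustration, not the actual BigBen implementation:

```kotlin
import java.time.ZonedDateTime
import java.util.concurrent.ConcurrentHashMap

// Simplified stand-in for BigBen's BucketSnapshot: only the fields
// needed to illustrate purging are modelled here.
data class Snapshot(val id: ZonedDateTime, val count: Int, val failed: Boolean)

val buckets = ConcurrentHashMap<ZonedDateTime, Snapshot>()

// Mirrors the quoted calculation from ifPurgeNeeded(): only the oldest
// (buckets.size - maxBuckets) keys are ever considered for purging.
fun purgeCandidates(maxBuckets: Int): List<ZonedDateTime> =
    if (buckets.size <= maxBuckets) emptyList()
    else buckets.keys.sorted().take(buckets.size - maxBuckets)
```

With five buckets in the map and `maxBuckets = 2`, the three oldest keys become candidates; whether a candidate is actually removed then depends on whether its shards finished processing, which is why a permanently failing bucket never leaves the map.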
sample logs where buckets size went up to 253 but cannot be purged because of failed buckets(buckets = ConcurrentHashMap<ZonedDateTime, BucketSnapshot>()):
{2019-03-13T03:21Z=BucketSnapshot(id=2019-03-13T03:21Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:30Z=BucketSnapshot(id=2019-03-13T03:30Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:45Z=BucketSnapshot(id=2019-03-13T03:45Z, count=0, processing={}, awaiting={}), 2019-03-13T01:39Z=BucketSnapshot(id=2019-03-13T01:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:24Z=BucketSnapshot(id=2019-03-13T02:24Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:24Z=BucketSnapshot(id=2019-03-13T03:24Z, count=0, processing={}, awaiting={}), 2019-03-13T03:27Z=BucketSnapshot(id=2019-03-13T03:27Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:36Z=BucketSnapshot(id=2019-03-13T03:36Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:39Z=BucketSnapshot(id=2019-03-13T03:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:48Z=BucketSnapshot(id=2019-03-13T03:48Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:15Z=BucketSnapshot(id=2019-03-13T03:15Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:39Z=BucketSnapshot(id=2019-03-13T02:39Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:18Z=BucketSnapshot(id=2019-03-13T03:18Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:33Z=BucketSnapshot(id=2019-03-13T03:33Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:42Z=BucketSnapshot(id=2019-03-13T03:42Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:41Z=BucketSnapshot(id=2019-03-13T01:41Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:40Z=BucketSnapshot(id=2019-03-13T01:40Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:19Z=BucketSnapshot(id=2019-03-13T03:19Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:28Z=BucketSnapshot(id=2019-03-13T03:28Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:37Z=BucketSnapshot(id=2019-03-13T03:37Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:46Z=BucketSnapshot(id=2019-03-13T03:46Z, count=1, processing={0}, awaiting={}), 
2019-03-13T03:49Z=BucketSnapshot(id=2019-03-13T03:49Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:38Z=BucketSnapshot(id=2019-03-13T02:38Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:39Z=BucketSnapshot(id=2019-03-12T22:39Z, count=334, processing={0}, awaiting={}), 2019-03-13T03:50Z=BucketSnapshot(id=2019-03-13T03:50Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:41Z=BucketSnapshot(id=2019-03-13T03:41Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:25Z=BucketSnapshot(id=2019-03-13T02:25Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:23Z=BucketSnapshot(id=2019-03-13T03:23Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:14Z=BucketSnapshot(id=2019-03-13T03:14Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:32Z=BucketSnapshot(id=2019-03-13T03:32Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:34Z=BucketSnapshot(id=2019-03-13T02:34Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:38Z=BucketSnapshot(id=2019-03-12T22:38Z, count=61, processing={0}, awaiting={}), 2019-03-13T03:35Z=BucketSnapshot(id=2019-03-13T03:35Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:44Z=BucketSnapshot(id=2019-03-13T01:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:17Z=BucketSnapshot(id=2019-03-13T02:17Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:44Z=BucketSnapshot(id=2019-03-13T03:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:20Z=BucketSnapshot(id=2019-03-13T03:20Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:26Z=BucketSnapshot(id=2019-03-13T03:26Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:29Z=BucketSnapshot(id=2019-03-13T03:29Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:26Z=BucketSnapshot(id=2019-03-13T02:26Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:35Z=BucketSnapshot(id=2019-03-13T02:35Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:22Z=BucketSnapshot(id=2019-03-13T03:22Z, count=1, processing={0}, awaiting={}), 
2019-03-13T03:31Z=BucketSnapshot(id=2019-03-13T03:31Z, count=0, processing={}, awaiting={}), 2019-03-13T03:40Z=BucketSnapshot(id=2019-03-13T03:40Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:44Z=BucketSnapshot(id=2019-03-12T22:44Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:47Z=BucketSnapshot(id=2019-03-13T03:47Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:46Z=BucketSnapshot(id=2019-03-12T22:46Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:38Z=BucketSnapshot(id=2019-03-13T03:38Z, count=0, processing={}, awaiting={}), 2019-03-13T01:43Z=BucketSnapshot(id=2019-03-13T01:43Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:19Z=BucketSnapshot(id=2019-03-13T02:19Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:40Z=BucketSnapshot(id=2019-03-13T02:40Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:16Z=BucketSnapshot(id=2019-03-13T03:16Z, count=1, processing={0}, awaiting={}), 2019-03-13T01:38Z=BucketSnapshot(id=2019-03-13T01:38Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:41Z=BucketSnapshot(id=2019-03-13T02:41Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:45Z=BucketSnapshot(id=2019-03-12T22:45Z, count=1, processing={0}, awaiting={}), 2019-03-13T02:54Z=BucketSnapshot(id=2019-03-13T02:54Z, count=1, processing={0}, awaiting={}), 2019-03-13T00:49Z=BucketSnapshot(id=2019-03-13T00:49Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:25Z=BucketSnapshot(id=2019-03-13T03:25Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:34Z=BucketSnapshot(id=2019-03-13T03:34Z, count=1, processing={0}, awaiting={}), 2019-03-13T03:43Z=BucketSnapshot(id=2019-03-13T03:43Z, count=1, processing={0}, awaiting={}), 2019-03-12T22:41Z=BucketSnapshot(id=2019-03-12T22:41Z, count=213, processing={0}, awaiting={})}
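The unbounded growth visible in these logs can be reproduced with a toy simulation. This is a hypothetical model, not BigBen code: it assumes one new bucket is created per minute, and that a purge pass removes only the oldest surplus buckets that are no longer failing:

```kotlin
// Toy model (not BigBen code): `true` marks a bucket that keeps failing.
// One bucket is created per simulated minute; each minute a purge pass
// considers only the oldest (size - maxBuckets) keys, mirroring
// buckets.keys.sorted().take(buckets.size - maxBuckets), and removes
// those that are not failing. Returns the final map size.
fun simulate(minutes: Int, maxBuckets: Int, failEvery: Int): Int {
    val buckets = sortedMapOf<Int, Boolean>()  // minute -> stillFailing
    for (m in 0 until minutes) {
        buckets[m] = (m % failEvery == 0)      // every failEvery-th bucket fails forever
        if (buckets.size > maxBuckets) {
            val candidates = buckets.keys.sorted().take(buckets.size - maxBuckets)
            for (k in candidates) if (buckets[k] == false) buckets.remove(k)
        }
    }
    return buckets.size
}
```

If every bucket fails (`failEvery = 1`), nothing is ever removable and the map size equals the number of minutes elapsed, which matches the behaviour in the logs: the map keeps growing until the failing buckets are resolved or the node restarts.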