feat(logging): enable log rotation and set retry on full log store sync #3699
Conversation
final db = _database;
await db.deleteItems(items);
Here and throughout
Suggested change:
- final db = _database;
- await db.deleteItems(items);
+ await _database.deleteItems(items);
- final openRequest = indexedDB!.open('test', 1);
- await openRequest.future;
+ indexedDB!.open('test', 1).result;
Can you explain this change?
QueuedItemStore.isFull() is a sync method. However, it was implemented as async because the web implementation calls checkIsIndexedDBSupported to decide between indexedDB and InMemoryQueuedItemStore. Because checkIsIndexedDBSupported was async, all the web APIs had to be async.
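The decision described here can be sketched in TypeScript (all names and structure are hypothetical, not the actual amplify-flutter code): performing the capability check synchronously, once, lets the chosen store expose a synchronous isFull().

```typescript
// A queued-item store only needs a synchronous fullness check.
interface QueuedItemStore {
  isFull(maxSizeInMB: number): boolean;
}

class InMemoryQueuedItemStore implements QueuedItemStore {
  bytesUsed = 0;
  isFull(maxSizeInMB: number): boolean {
    return this.bytesUsed >= maxSizeInMB * 1024 * 1024;
  }
}

// Stand-in for an IndexedDB-backed store; same synchronous interface.
class IndexedDbQueuedItemStore implements QueuedItemStore {
  bytesUsed = 0;
  isFull(maxSizeInMB: number): boolean {
    return this.bytesUsed >= maxSizeInMB * 1024 * 1024;
  }
}

// Synchronous capability check, performed once at construction time,
// so every store method downstream can stay synchronous.
function checkIsIndexedDBSupported(g: object): boolean {
  return 'indexedDB' in g;
}

function createStore(g: object): QueuedItemStore {
  return checkIsIndexedDBSupported(g)
    ? new IndexedDbQueuedItemStore()
    : new InMemoryQueuedItemStore();
}
```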
But why would this be guaranteed to throw in the same way as before?
Based on the docs: "If the operation is successful, the value of result is a connection to the database. If the request failed and the result is not available, an InvalidStateError exception is thrown."
logs.removeRange(0, expiredLogEventEndIndex + 1);
// set events to start from next
events.removeRange(0, expiredLogEventEndIndex + 1);
continue;
Why can't multiple of these blocks apply?
final tooNewLogEventStartIndex =
    rejectedLogEventsInfo?.tooNewLogEventStartIndex;

if (expiredLogEventEndIndex != null &&
Can you extract this logic into a function so it's not duplicated 3 times?
if (tooNewLogEventStartIndex != null &&
    tooNewLogEventStartIndex >= 0 &&
    tooNewLogEventStartIndex <= events.length - 1) {
  tooNewException = _TooNewLogEventException(
Why not throw it here? The control flow would be easier to follow
We want to finish syncing the current batch after sanitizing its events, and then throw to stop syncing subsequent batches.
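The control flow described in this reply, sketched in TypeScript (all names hypothetical, not the actual plugin code): trim the too-new tail from the current batch, finish handling that batch, then throw so no later batch is attempted.

```typescript
// Thrown after the current batch is handled, to halt later batches.
class TooNewLogEventException extends Error {}

type SendResult = { tooNewStartIndex?: number };

function syncBatches(
  batches: string[][],
  send: (batch: string[]) => SendResult,
): string[][] {
  const sent: string[][] = [];
  for (const batch of batches) {
    let events = batch;
    let tooNew: TooNewLogEventException | undefined;
    const result = send(events);
    if (result.tooNewStartIndex !== undefined) {
      // Sanitize the current batch: drop events from the too-new index on.
      events = events.slice(0, result.tooNewStartIndex);
      tooNew = new TooNewLogEventException();
    }
    sent.push(events);
    // Throw only after the current batch has been handled, so later
    // (even newer) batches are never attempted.
    if (tooNew) throw tooNew;
  }
  return sent;
}
```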
_syncing = false;
}
}
}

void _handleFullLogStoreAfterSync({
  int retryTimeInMillisecondsSinceEpoch = 0,
Can you change this logic to work with DateTime and Duration objects instead of integers? This is very opaque and hard to follow.
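A minimal sketch of the suggested refactor, in TypeScript with Date standing in for Dart's DateTime and a millisecond constant standing in for a Duration (all names hypothetical): the retry deadline becomes a first-class date value rather than a raw milliseconds-since-epoch integer.

```typescript
// Stand-in for a Dart Duration (e.g. const Duration(seconds: 10)).
const baseRetryIntervalMs = 10_000;

class RetryState {
  retryCount = 0;
  retryTime: Date | null = null;

  // Schedule the next retry: the delay grows linearly with the count.
  scheduleNextRetry(now: Date): void {
    this.retryCount += 1;
    this.retryTime = new Date(now.getTime() + baseRetryIntervalMs * this.retryCount);
  }

  // A retry is due once the deadline has passed.
  shouldRetry(now: Date): boolean {
    return this.retryTime !== null && now.getTime() >= this.retryTime.getTime();
  }

  // Clear retry state, e.g. once the log store is no longer full.
  reset(): void {
    this.retryCount = 0;
    this.retryTime = null;
  }
}
```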
logs.removeRange(tooNewLogEventStartIndex, logs.length);
// set events to end at the index
events.removeRange(tooNewLogEventStartIndex, events.length);
continue;
Why continue if all batches after this will be newer?
We want to finish syncing the current batch after sanitizing its events, and then throw to stop syncing subsequent batches.
int? pastEndIndex;
int? futureStartIndex;

if (_isValidIndex(tooOldLogEventEndIndex, length)) {
Why is this allowed to be false? I would use RangeError.checkValidIndex to assert these are valid.
final (tooOldEndIndex, tooNewStartIndex) =
    rejectedLogEventsInfo.parse(events.length);

if (_isValidIndex(tooNewStartIndex, events.length)) {
You've already checked this
RejectedLogEventsInfo.parse() validates the indices inside RejectedLogEventsInfo. This check validates the indices returned by the parse() method.
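A sketch of the kind of bounds check under discussion, in TypeScript (a hypothetical helper, not the actual _isValidIndex implementation): an index is usable only if it is present and within the event list's bounds.

```typescript
// An index from rejectedLogEventsInfo may be absent; treat it as valid
// only when it points inside the list of events being synced.
function isValidIndex(index: number | undefined, length: number): boolean {
  return index !== undefined && index >= 0 && index <= length - 1;
}
```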
await _logStore.deleteItems(logs);
break;
}
{
Remove these braces; they're unnecessary
// set events to start after the index.
events.removeRange(0, tooOldEndIndex + 1);
}
continue;
Can remove the continue now.
_logStore.isFull(_pluginConfig.localStoreMaxSizeInMB);
if (!isLogStoreFull) {
  _retryCount = 0;
  _retryTime = null;
Why would you not respect retryTime here?
It resets the retry state when the log store is not full. It retries the sync only when the log store is full; otherwise it can wait until the next scheduled sync.
return;
}
_retryCount += 1;
_retryTime = DateTime.timestamp().add(_baseRetryInterval * _retryCount);
This seems like a long time for a basic backoff
flushIntervalInSeconds = const Duration(seconds: 60). Updated _baseRetryInterval to 10 seconds, so there will be 2-3 retries within the flush interval.
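The arithmetic behind this reply can be checked with a small TypeScript sketch (a hypothetical helper, not plugin code): with a 10-second base and a delay that grows linearly with the retry count, retries land at 10 s, 30 s, and 60 s of cumulative wait, so 2-3 retries fit inside a 60-second flush interval.

```typescript
// Stand-in for the _baseRetryInterval of 10 seconds.
const baseRetryIntervalSec = 10;

// Cumulative times (in seconds) at which retries fire, for all retries
// that fall within one flush interval.
function retryOffsetsWithin(flushIntervalSec: number): number[] {
  const offsets: number[] = [];
  let elapsed = 0;
  for (let count = 1; ; count++) {
    elapsed += baseRetryIntervalSec * count; // delay grows with retry count
    if (elapsed > flushIntervalSec) break;
    offsets.push(elapsed);
  }
  return offsets;
}
```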
// after sending each batch to CloudWatch check if the batch has
// `tooNewException` and throw to stop syncing next batches.
if (tooNewException != null) {
  throw tooNewException;
I think this would be cleaner if you just returned the retry time instead of throwing
Merged commit eda0a6a into aws-amplify:feat/logging/cloudwatch
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.