
feat(logging): enable log rotation and set retry on full log store sync #3699

Conversation

@NikaHsn (Member) commented Sep 6, 2023

Description of changes:

  • make QueuedItemStore.isFull() sync
  • enable log rotation when log store is full
  • set retry on full log store sync
  • handle rejectedLogEventsInfo response and do partial sync

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@NikaHsn NikaHsn requested a review from a team as a code owner September 6, 2023 00:17
Comment on lines 47 to 48
final db = _database;
await db.deleteItems(items);
Contributor:

Here and throughout

Suggested change
final db = _database;
await db.deleteItems(items);
await _database.deleteItems(items);

final openRequest = indexedDB!.open('test', 1);
await openRequest.future;
indexedDB!.open('test', 1).result;
Contributor:

Can you explain this change?

Member Author:

QueuedItemStore.isFull() is conceptually a synchronous check; however, it was implemented as async because the web implementation calls checkIsIndexedDBSupported to choose between indexedDB and InMemoryQueuedItemStore. Because checkIsIndexedDBSupported was async, all of the web APIs had to be async as well.
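A minimal sketch of the change in shape (TypeScript rather than the package's Dart, with illustrative names, not the actual amplify-flutter API): when the IndexedDB capability check is a plain synchronous inspection of the global, the store can be picked once up front and `isFull()` can stay synchronous.

```typescript
// Sketch: a synchronous store interface backed by a synchronous capability
// check. Both backends expose the same sync isFull(), so callers never await.
interface QueuedItemStore {
  isFull(maxSizeInMB: number): boolean; // sync, as in the PR
}

class InMemoryQueuedItemStore implements QueuedItemStore {
  private bytes = 0;
  isFull(maxSizeInMB: number): boolean {
    return this.bytes >= maxSizeInMB * 1024 * 1024;
  }
  add(itemBytes: number): void {
    this.bytes += itemBytes;
  }
}

// Synchronous feature check: inspect the global rather than awaiting an
// open() round trip. The decision between the IndexedDB-backed store and
// InMemoryQueuedItemStore can then be made once, synchronously.
function checkIsIndexedDBSupported(): boolean {
  return typeof globalThis !== "undefined" && "indexedDB" in globalThis;
}
```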

Contributor:

But why would this be guaranteed to throw in the same way as before?

Member Author:

Based on the docs: "If the operation is successful, the value of result is a connection to the database. If the request failed and the result is not available, an InvalidStateError exception is thrown."
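The quoted contract can be illustrated with a small hand-rolled mock (TypeScript; this is a stand-in for, not the real, IndexedDB `IDBRequest` API): reading `result` before the request has completed throws synchronously, which is the failure mode the test relies on when it drops the `await`.

```typescript
// Mock of the IDBRequest.result contract from the spec: accessing `result`
// while the request is pending (or failed) throws an InvalidStateError.
// Plain Error is used here since DOMException may not exist outside browsers.
class MockOpenRequest<T> {
  private done = false;
  private value?: T;

  succeed(value: T): void {
    this.done = true;
    this.value = value;
  }

  get result(): T {
    if (!this.done) {
      throw new Error("InvalidStateError: request is not finished");
    }
    return this.value as T;
  }
}
```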

logs.removeRange(0, expiredLogEventEndIndex + 1);
// set events to start from next
events.removeRange(0, expiredLogEventEndIndex + 1);
continue;
Contributor:

Why can't multiple of these blocks apply?

final tooNewLogEventStartIndex =
rejectedLogEventsInfo?.tooNewLogEventStartIndex;

if (expiredLogEventEndIndex != null &&
Contributor:

Can you extract this logic into a function so it's not duplicated 3 times?

if (tooNewLogEventStartIndex != null &&
tooNewLogEventStartIndex >= 0 &&
tooNewLogEventStartIndex <= events.length - 1) {
tooNewException = _TooNewLogEventException(
Contributor:

Why not throw it here? The control flow would be easier to follow

Member Author:

We want to finish syncing the current batch after sanitizing its events, and then throw to stop syncing the subsequent batches.
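The control flow being described can be sketched like this (TypeScript, hypothetical names): the too-new tail is trimmed from the current batch, the trimmed batch is still sent, and the recorded exception is thrown only afterwards so that later (even newer) batches are not synced.

```typescript
// Sketch of "sanitize the current batch, sync it, then throw".
type Batch = { timestamps: number[] };

class TooNewLogEventException extends Error {}

function syncBatches(
  batches: Batch[],
  tooNewStartIndexOf: (b: Batch) => number | null,
  send: (b: Batch) => void,
): void {
  for (const batch of batches) {
    let tooNew: TooNewLogEventException | null = null;
    const idx = tooNewStartIndexOf(batch);
    if (idx !== null && idx >= 0 && idx <= batch.timestamps.length - 1) {
      // Trim the too-new tail, but keep syncing what remains of this batch.
      batch.timestamps.splice(idx, batch.timestamps.length - idx);
      tooNew = new TooNewLogEventException("tooNewLogEventStartIndex hit");
    }
    send(batch);
    // Only after the current batch is sent: stop syncing the next batches.
    if (tooNew !== null) throw tooNew;
  }
}
```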

_syncing = false;
}
}
}

void _handleFullLogStoreAfterSync({
int retryTimeInMillisecondsSinceEpoch = 0,
Contributor:

Can you change this logic to work with DateTime and Duration objects instead of integers? This is very opaque and hard to follow
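What the reviewer is asking for might look like this (a TypeScript sketch with `Date` objects standing in for Dart's DateTime/Duration; names are illustrative): carrying the deadline as a date value with `null` meaning "no retry pending" reads much more naturally than a milliseconds-since-epoch integer defaulting to 0.

```typescript
// Sketch: retry deadline as a Date instead of an epoch integer.
class FullLogStoreRetry {
  private retryTime: Date | null = null; // null = no retry pending

  scheduleRetry(now: Date, retryIntervalMs: number): void {
    this.retryTime = new Date(now.getTime() + retryIntervalMs);
  }

  shouldRetry(now: Date): boolean {
    return this.retryTime !== null && now.getTime() >= this.retryTime.getTime();
  }

  reset(): void {
    this.retryTime = null;
  }
}
```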

logs.removeRange(tooNewLogEventStartIndex, logs.length);
// set events to end at the index
events.removeRange(tooNewLogEventStartIndex, events.length);
continue;
Contributor:

Why continue if all batches after this will be newer?

Member Author:

We want to finish syncing the current batch after sanitizing its events, and then throw to stop syncing the subsequent batches.

final openRequest = indexedDB!.open('test', 1);
await openRequest.future;
indexedDB!.open('test', 1).result;
Contributor:

But why would this be guaranteed to throw in the same way as before?

int? pastEndIndex;
int? futureStartIndex;

if (_isValidIndex(tooOldLogEventEndIndex, length)) {
Contributor:

Why is this allowed to be false? I would use RangeError.checkValidIndex to assert these are valid.

final (tooOldEndIndex, tooNewStartIndex) =
rejectedLogEventsInfo.parse(events.length);

if (_isValidIndex(tooNewStartIndex, events.length)) {
Contributor:

You've already checked this

Member Author:

RejectedLogEventsInfo.parse() validates the indices in RejectedLogEventsInfo; this check validates the indices returned by the parse() method.
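A sketch of the two-stage validation being described (TypeScript; `isValidIndex` is reimplemented here from context, and `parseRejectedInfo` is a hypothetical stand-in for `RejectedLogEventsInfo.parse()`): parse() filters the raw service indices, and the caller still range-checks whatever parse() hands back.

```typescript
// An index is usable only if it is non-null and within [0, length - 1],
// mirroring the diff's `_isValidIndex(index, length)` check.
function isValidIndex(index: number | null, length: number): boolean {
  return index !== null && index >= 0 && index <= length - 1;
}

// Stand-in for RejectedLogEventsInfo.parse(): derive the past/future split
// points from the raw response, dropping any index that is out of range.
function parseRejectedInfo(
  tooOldLogEventEndIndex: number | null,
  tooNewLogEventStartIndex: number | null,
  length: number,
): { pastEndIndex: number | null; futureStartIndex: number | null } {
  return {
    pastEndIndex: isValidIndex(tooOldLogEventEndIndex, length)
      ? tooOldLogEventEndIndex
      : null,
    futureStartIndex: isValidIndex(tooNewLogEventStartIndex, length)
      ? tooNewLogEventStartIndex
      : null,
  };
}
```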

await _logStore.deleteItems(logs);
break;
}
{
Contributor:

Remove these braces; they're unnecessary

// set events to start after the index.
events.removeRange(0, tooOldEndIndex + 1);
}
continue;
Contributor:

Can remove continue now

_logStore.isFull(_pluginConfig.localStoreMaxSizeInMB);
if (!isLogStoreFull) {
_retryCount = 0;
_retryTime = null;
Contributor:

Why would you not respect retryTime here?

Member Author:

It is resetting the retry state when the log store is not full. We retry the sync only when the log store is full; otherwise we can simply wait until the next scheduled sync.

return;
}
_retryCount += 1;
_retryTime = DateTime.timestamp().add((_baseRetryInterval * _retryCount));
Contributor:

This seems like a long time for a basic backoff

Member Author:

flushIntervalInSeconds = const Duration(seconds: 60). Updated _baseRetryInterval to 10 seconds, so there will be 2-3 retries within the flush interval.
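Under those numbers the linear backoff from the diff (`retryTime = now + base * retryCount`) works out as follows (TypeScript sketch; the 10 s base and 60 s flush interval are taken from the comment above):

```typescript
// Linear backoff: the delay before retry n is base * n, so successive delays
// are 10 s, 20 s, 30 s, ... With a 60 s flush interval the cumulative delays
// (10 s, 30 s, 60 s) allow roughly 2-3 retries before the next flush.
const BASE_RETRY_INTERVAL_MS = 10_000;

function nextRetryDelayMs(retryCount: number): number {
  return BASE_RETRY_INTERVAL_MS * retryCount;
}

function retriesWithinFlushInterval(flushIntervalMs: number): number {
  let elapsed = 0;
  let count = 0;
  while (elapsed + nextRetryDelayMs(count + 1) <= flushIntervalMs) {
    elapsed += nextRetryDelayMs(count + 1);
    count += 1;
  }
  return count;
}
```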

// after sending each batch to CloudWatch check if the batch has
// `tooNewException` and throw to stop syncing next batches.
if (tooNewException != null) {
throw tooNewException;
Contributor:

I think this would be cleaner if you just returned the retry time instead of throwing

@NikaHsn NikaHsn merged commit eda0a6a into aws-amplify:feat/logging/cloudwatch Sep 12, 2023
21 checks passed
@NikaHsn NikaHsn deleted the feat/logging/enable-log-rotation branch October 13, 2023 21:34
khatruong2009 pushed a commit that referenced this pull request Nov 27, 2023
…nc (#3699)

* feat(logging): enable log rotation and set retry
3 participants