RUMM-2186: Write batch metadata #943
Conversation
@@ -76,9 +80,28 @@ internal open class BatchFileDataWriter<T : Any>(

```kotlin
@WorkerThread
private fun writeData(byteArray: ByteArray): Boolean {
    val file = fileOrchestrator.getWritableFile(byteArray.size) ?: return false
    if (!checkEventSize(byteArray.size)) return false
```
Does this mean that the event will be lost? I wonder if this will create problems for SessionReplay records 🤔, though I guess they are not that big.
Yes, it will be lost, but this behavior is no different from the current one. Currently, if an item is too big, we won't provide any file for writing it:
Lines 48 to 57 in e5ddff4
```kotlin
if (dataSize > config.maxItemSize) {
    internalLogger.errorWithTelemetry(
        ERROR_LARGE_DATA.format(
            Locale.US,
            dataSize,
            config.maxItemSize
        )
    )
    return null
}
```
```kotlin
// This file may go away after rework, not adding tests for now (it is a simplified copy of
// BatchFileHandler)
@Suppress("unused")
internal class PlainFileHandler(
```
Shouldn't this be `PlainSingleItemFileHandler`?
Probably yes, but it will be a subject of RUMM-2235 anyway.
```diff
 internal open class SingleItemDataWriter<T : Any>(
     internal val fileOrchestrator: FileOrchestrator,
     internal val serializer: Serializer<T>,
-    internal val handler: FileHandler,
+    internal val handler: ChunkedFileHandler,
     internal val internalLogger: Logger
```
Shouldn't this be a `SingleItemFileHandler` here?
Ideally, yes, but it will be a subject of RUMM-2235 anyway. Currently the only implementation of `ChunkedFileHandler` is `BatchFileHandler`, which writes data in the TLV format and adds meta as well; neither the meta nor TLV is really needed for the use-cases of `SingleItemDataWriter` (namely `NdkUserInfoDataWriter` and `NdkNetworkInfoDataWriter`, since we just want to store plain JSON), but these cases will work with `BatchFileHandler` as well.
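For context, the TLV (type-length-value) encoding mentioned above can be sketched roughly as below. This is a hedged illustration only: the field widths, byte order, and type codes are assumptions for the example, not the actual `BatchFileHandler` wire format.

```kotlin
import java.nio.ByteBuffer

// Hypothetical TLV layout: 2-byte type + 4-byte length header, then the payload.
// The real BatchFileHandler format may differ; this only illustrates the idea.
fun encodeTlv(type: Short, value: ByteArray): ByteArray {
    val buffer = ByteBuffer.allocate(2 + 4 + value.size)
    buffer.putShort(type)     // T: record type (e.g. event vs. meta)
    buffer.putInt(value.size) // L: payload length in bytes
    buffer.put(value)         // V: raw payload
    return buffer.array()
}

fun decodeTlv(bytes: ByteArray): Pair<Short, ByteArray> {
    val buffer = ByteBuffer.wrap(bytes)
    val type = buffer.short
    val length = buffer.int
    val value = ByteArray(length)
    buffer.get(value)
    return type to value
}
```

A plain-JSON single-item writer would skip this framing entirely and just dump the serialized bytes, which is why the extra header is harmless but unnecessary for the NDK info writers.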
```kotlin
// prevent useless operation for empty event / null orchestrator
if (event.isEmpty() || orchestrator == null) {
    listener.onDataWritten(eventId)
    return
}

val file = orchestrator.getWritableFile(event.size)
if (!checkEventSize(event.size)) {
```
Isn't this already checked in the `FileHandler`?
It was moved from `FileOrchestrator` to the `DataWriter`, but `DataWriter` is not used by the `ConsentAwareStorage` right now (and probably won't be; in fact, in the writer callback we only need `Serializer`, so logically `Storage` replaces `DataWriter`), so I'm adding this check here as well.
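The size guard being discussed could look roughly like the sketch below. The class and parameter names here are invented for illustration; only the shape of the check (compare against a max item size, log and drop when exceeded) comes from the snippets in this thread.

```kotlin
// Hedged sketch of a writer-level size guard. maxItemSize mirrors the
// config.maxItemSize from the orchestrator snippet above, but this is not
// the actual SDK class.
class SizeCheckingWriter(
    private val maxItemSize: Int,
    private val logError: (String) -> Unit
) {
    // Returns true only when the serialized event fits within the limit;
    // oversized events are logged and dropped, matching the previous
    // orchestrator-level behavior.
    fun checkEventSize(eventSize: Int): Boolean {
        if (eventSize > maxItemSize) {
            logError("Can't write data with size $eventSize (max item size is $maxItemSize)")
            return false
        }
        return true
    }
}
```

Keeping the check in the writer means every write path (batch or single-item) rejects oversized events before a file is ever requested from the orchestrator.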
What does this PR do?
This change adds support for reading/writing optional batch metadata (in a non-typed format, currently as `ByteArray`). The batch metadata file will be written next to the batch data file and associated with it by name (the metadata file will have the same name as the batch file, with a `_metadata` suffix).

This change also does the following:
- `FileOrchestrator` is modified to prevent the possibility of switching to the next batch file if the current batch size + event size exceeds the limit (we can do this because the limit set in the SDK is still less than the actual limit of the intake(s), and the margin we have is still bigger than the maximum event size). This lets us keep the batch metadata file and the batch file in sync for a particular `BatchWriter` (otherwise we could have a metadata file from another batch). The responsibility of checking the event size is now shifted to the `Writer` layer.

Review checklist (to be filled by reviewers)
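As a footnote on the `_metadata` naming convention from the PR description, the association between a batch file and its metadata file could be sketched like this. `metadataFileFor` is a hypothetical helper for illustration; the real SDK code may resolve the file differently.

```kotlin
import java.io.File

// Hypothetical helper illustrating the naming convention described in the
// PR description: the metadata file shares the batch file's name plus a
// "_metadata" suffix, in the same directory.
fun metadataFileFor(batchFile: File): File =
    File(batchFile.parentFile, "${batchFile.name}_metadata")
```

Because the link is purely name-based, no extra index is needed: given either file, the other can be derived, and the orchestrator change above ensures both always belong to the same batch.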