
RUMM-2672: Make Storage#writeCurrentBatch async #1094

Conversation

@0xnm (Contributor) commented Oct 19, 2022

What does this PR do?

This PR may look big, but essentially it does the following:

  • Storage#writeCurrentBatch is now async (ConsentAwareStorage takes an executor), meaning the I/O inside this method happens on a worker thread and the callback is invoked on the worker thread as well.
  • Extracted the anonymous implementation of EventBatchWriter into a dedicated class (and moved the tests as well), because ConsentAwareStorage and its tests were already too big. Also introduced a no-op implementation of the writer for the case where there is no batch file or consent is not granted - this is where most of the diff comes from.

This change is needed to make FeatureScope#withWriteContext truly async, so that the event write context can be implemented on top of it (see the sketch below).
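For illustration, here is a minimal sketch of what this async contract could look like. The class names mirror the files touched by this PR, but the types, signatures, and helper lambdas are assumptions for the sketch, not the actual SDK source:

import java.util.concurrent.ExecutorService

// Hypothetical minimal stand-ins for the SDK's real types.
interface EventBatchWriter {
    fun write(event: ByteArray): Boolean
}

// Used when there is no batch file or consent is not granted.
class NoOpEventBatchWriter : EventBatchWriter {
    override fun write(event: ByteArray): Boolean = false
}

// Writes events to a particular batch file (real I/O elided in this sketch).
class FileEventBatchWriter(private val batchFileName: String) : EventBatchWriter {
    override fun write(event: ByteArray): Boolean = true
}

class ConsentAwareStorage(
    private val executorService: ExecutorService,
    private val consentGranted: () -> Boolean,
    private val currentBatchFile: () -> String?
) {
    // File I/O and the callback both run on the worker thread,
    // so the caller is never blocked.
    fun writeCurrentBatch(callback: (EventBatchWriter) -> Unit) {
        executorService.submit {
            val file = currentBatchFile()
            val writer = if (file != null && consentGranted()) {
                FileEventBatchWriter(file)
            } else {
                NoOpEventBatchWriter()
            }
            callback(writer) // invoked on the worker thread
        }
    }
}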

Review checklist (to be filled by reviewers)

  • Feature or bugfix MUST have appropriate tests (unit, integration, e2e)
  • Make sure you discussed the feature or bugfix with the maintaining team in an Issue
  • Make sure each commit and the PR mention the Issue number (cf the CONTRIBUTING doc)

@0xnm 0xnm requested a review from a team as a code owner October 19, 2022 12:15
@xgouchet xgouchet added the size-large This PR is large sized label Oct 19, 2022

codecov-commenter commented Oct 19, 2022

Codecov Report

Merging #1094 (cabffad) into feature/sdkv2 (54cf7f5) will increase coverage by 0.14%.
The diff coverage is 97.14%.

@@                Coverage Diff                @@
##           feature/sdkv2    #1094      +/-   ##
=================================================
+ Coverage          82.52%   82.66%   +0.14%     
=================================================
  Files                347      349       +2     
  Lines              11196    11229      +33     
  Branches            1838     1838              
=================================================
+ Hits                9239     9282      +43     
+ Misses              1395     1388       -7     
+ Partials             562      559       -3     
Impacted Files Coverage Δ
...adog/android/core/internal/utils/ConcurrencyExt.kt 100.00% <ø> (ø)
...atadog/android/v2/core/internal/storage/Storage.kt 0.00% <ø> (ø)
...d/v2/core/internal/storage/FileEventBatchWriter.kt 95.24% <95.24%> (ø)
...in/com/datadog/android/core/internal/SdkFeature.kt 88.18% <100.00%> (+0.11%) ⬆️
...id/v2/core/internal/storage/ConsentAwareStorage.kt 98.89% <100.00%> (+1.80%) ⬆️
...d/v2/core/internal/storage/NoOpEventBatchWriter.kt 100.00% <100.00%> (ø)
...ndroid/telemetry/internal/TelemetryEventHandler.kt 90.32% <0.00%> (-1.61%) ⬇️
...android/v2/core/internal/DatadogContextProvider.kt 82.81% <0.00%> (ø)
...rc/main/java/com/datadog/opentracing/DDTracer.java 56.49% <0.00%> (+0.42%) ⬆️
.../android/rum/internal/monitor/DatadogRumMonitor.kt 92.49% <0.00%> (+0.58%) ⬆️
... and 4 more

override fun withWriteContext(callback: (DatadogContext, EventBatchWriter) -> Unit) {
// TODO RUMM-0000 thread safety. Thread switch happens in Storage right now. Open questions:
// * what if caller wants to have a sync operation, without thread switch
Collaborator

I am wondering when the case/need for that will arise. Do you have some situations in mind?

Contributor Author

Crash reporting - we need to write it as a log or RUM event in a blocking manner.

Collaborator

true, forgot about that.
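For context, a hypothetical sketch (reusing the assumed stand-in types from the earlier sketch, not code from this PR) of how a crash reporter could turn the async write into a blocking one:

import java.util.concurrent.CountDownLatch
import java.util.concurrent.TimeUnit

// Hypothetical helper: makes the async writeCurrentBatch blocking,
// e.g. for persisting a crash event before the process dies.
fun writeEventBlocking(storage: ConsentAwareStorage, event: ByteArray): Boolean {
    val latch = CountDownLatch(1)
    var written = false
    storage.writeCurrentBatch { writer ->
        written = writer.write(event)
        latch.countDown()
    }
    // Wait (bounded) for the worker thread to finish the write.
    latch.await(2, TimeUnit.SECONDS)
    return written
}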

} else {
    listener.onDataWriteFailed(eventId)

FileEventBatchWriter(
Collaborator

Are we creating a writer every time we are writing something?

@0xnm (Contributor Author) commented Oct 20, 2022

Yes, every time we request a write context (which should be short-lived), because the writer is associated with a particular batch file.

Collaborator

Damn... I am thinking that for SR I will have to write tons of events... meaning that we will be creating those writers all the time. Aren't we afraid of too many objects there? I know they are not so heavy, but I was thinking that maybe we could do it differently 🤔

Contributor Author

We can profile it. Anyway, the number of these objects will be much smaller than the number of snapshot/wireframe objects to write :)

@@ -16,13 +18,17 @@ import com.datadog.tools.annotation.NoOpImplementation
@NoOpImplementation
internal interface Storage {

// TODO RUMM-0000 It seems only write part is useful to be async. Having upload part as async
Collaborator

I do agree

@0xnm 0xnm merged commit b23b598 into feature/sdkv2 Oct 20, 2022
@0xnm 0xnm deleted the nogorodnikov/rumm-2672/make-storage-writecurrentbatch-async branch October 20, 2022 08:44
@xgouchet xgouchet added this to the 1.16.0 milestone Dec 13, 2023