Optimize SDK initialization when requests are executed before any activity starts #1204
Changes from all commits: a02e52f, 2cb7961, 6aa761f, c87689b, 6015d47, 088de84
@@ -61,11 +61,12 @@ internal class OfferingsManager(
        onError: ((PurchasesError) -> Unit)? = null,
        onSuccess: ((Offerings) -> Unit)? = null,
    ) {
-       offeringsCache.setOfferingsCacheTimestampToNow()

Review comment: I don't recall exactly why this was here before 🤔 But I remember there was a reason. Brain dump: before your changes, if there was another request to fetch offerings, that request would consider the cache to be recent and would return the outdated offerings cache instead, but that's incorrect, right? I think that's what you optimized. Also, if there was an issue getting the offerings, we would clear that newly set timestamp, and the following request would retry; with your changes, the behavior is the same. Anyways, I think the optimization makes sense, I am just trying to remember why this was done that way lol.

Reply: Mostly yes. However, the scenario I was trying to fix was the following:

That's correct, and that remains that way for now, though it probably wouldn't be needed now. I've just made it so marking the offerings cache timestamp happens only after a success. The main consequence of this change is that multiple offerings requests may happen before we receive a successful response; however, we rely on the request callback cache in the…

Review comment: We had a recent conversation about this on iOS as well. Using the "set timestamp to now" workaround is effective, but definitely not intuitive and confusing for maintainers. I think the ultimate solution is to have two values, "last updated" and "is currently updating", and use the combination of those two as a more expressive way of determining state. We're currently encoding both of those into a single boolean value. The cache key came later, though, so maybe it removes the need for that workaround entirely? Does it de-duplicate requests with the same key? If so, it effectively removes the need for the "set timestamp to now" workaround, and even for having an "is currently updating" value, right?

Reply: Yes, the callback cache mechanism deduplicates requests while the request "is currently updating". However, we still need to store the last time the cache was updated to know whether it's stale, so we still need to store the timestamp on a success.

Yes, iOS did a similar change not too long ago, and it relies on the request cache as well. I think that makes the most sense for handling request deduplication while a request is in progress.

        backend.getOfferings(
            appUserID,
            appInBackground,
-           { createAndCacheOfferings(it, onError, onSuccess) },
+           {
+               createAndCacheOfferings(it, onError, onSuccess)
+           },
            { backendError, isServerError ->
                if (isServerError) {
                    val cachedOfferingsResponse = offeringsCache.cachedOfferingsResponse
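
The request callback cache mentioned in the thread above is what deduplicates concurrent fetches. As a rough illustration only (not the SDK's actual types; `CallbackCache`, `add`, `drain`, and the key string are invented for this sketch), the idea is to key waiting callbacks by request and let only the first caller start the network call:

```kotlin
import java.util.concurrent.CountDownLatch
import java.util.concurrent.Executors

// Hypothetical, simplified callback cache; not the SDK's real implementation.
class CallbackCache<K, R> {
    private val callbacks = mutableMapOf<K, MutableList<(R) -> Unit>>()

    /** Registers a callback; returns true when the caller should start the actual request. */
    @Synchronized
    fun add(key: K, callback: (R) -> Unit): Boolean {
        val waiting = callbacks.getOrPut(key) { mutableListOf() }
        waiting.add(callback)
        return waiting.size == 1 // only the first caller triggers a network request
    }

    /** Removes and returns every callback waiting on this key. */
    @Synchronized
    fun drain(key: K): List<(R) -> Unit> = callbacks.remove(key).orEmpty()
}

fun main() {
    val cache = CallbackCache<String, String>()
    val executor = Executors.newSingleThreadExecutor()
    val done = CountDownLatch(2)

    fun getOfferings(onSuccess: (String) -> Unit) {
        if (cache.add("GET /offerings", onSuccess)) {
            executor.execute {
                val response = "offerings-json" // stand-in for the real backend response
                cache.drain("GET /offerings").forEach { it(response) }
            }
        }
    }

    // Two concurrent calls, one simulated network request, both callers notified.
    getOfferings { println("caller 1 got $it"); done.countDown() }
    getOfferings { println("caller 2 got $it"); done.countDown() }
    done.await()
    executor.shutdown()
}
```

With this shape, the "is currently updating" state is implicit in whether a key has waiting callbacks, while staleness still needs a separately stored "last updated" timestamp, which is why the timestamp is now written only on success.
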
@@ -94,6 +95,7 @@ internal class OfferingsManager(
            },
            onSuccess = { offerings ->
                offeringsCache.cacheOfferings(offerings, offeringsJSON)
+               offeringsCache.setOfferingsCacheTimestampToNow()

Author note: This is another optimization I had to do... If we left it where it was before, we wouldn't use the new mechanisms that optimize requests by reusing the in-flight request that is supposed to finish earlier.

Review comment: I wonder if these two lines should just be part of cacheOfferings in offeringsCache, guaranteeing atomicity there.

Review comment: Feels like it'd be conceptually cleaner and slightly easier to make it synchronized?

Reply: Certainly, these two could be joined (this is the only place they are both called). I will do so in a different PR.

                dispatch {
                    onSuccess?.invoke(offerings)
                }
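
Regarding the suggestion above to fold the timestamp update into cacheOfferings and guarantee atomicity there, a minimal sketch of that shape could look like the following; the class, fields, and `isCacheStale` helper are placeholders rather than the SDK's real `OfferingsCache` API:

```kotlin
import java.util.Date

// Sketch only: cache the offerings and stamp "last updated" in one synchronized call.
class OfferingsCacheSketch<T> {
    private var cachedOfferings: T? = null
    private var cachedOfferingsResponse: String? = null
    private var lastUpdatedAt: Date? = null

    @Synchronized
    fun cacheOfferings(offerings: T, offeringsJSON: String) {
        cachedOfferings = offerings
        cachedOfferingsResponse = offeringsJSON
        lastUpdatedAt = Date() // the timestamp moves atomically with the cached value
    }

    @Synchronized
    fun isCacheStale(maxAgeMillis: Long): Boolean {
        val updatedAt = lastUpdatedAt ?: return true
        return Date().time - updatedAt.time > maxAgeMillis
    }
}
```

Whether this ends up living inside the cache class or stays as two calls in OfferingsManager is the follow-up PR mentioned in the thread.
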
@@ -3,6 +3,7 @@ package com.revenuecat.purchases.subscriberattributes
 import com.revenuecat.purchases.PurchasesError
 import com.revenuecat.purchases.PurchasesErrorCode
 import com.revenuecat.purchases.common.BackendHelper
+import com.revenuecat.purchases.common.Delay
 import com.revenuecat.purchases.common.SubscriberAttributeError
 import com.revenuecat.purchases.common.networking.Endpoint
 import com.revenuecat.purchases.common.networking.RCHTTPStatusCodes
@@ -25,6 +26,7 @@ internal class SubscriberAttributesPoster(
            Endpoint.PostAttributes(appUserID),
            mapOf("attributes" to attributes),
            postFieldsToSign = null,
+           delay = Delay.DEFAULT,

Author note: This is another change that I thought was fine, but let me know if you have any concerns... Basically, this will add jittering to the subscriber attributes requests, which didn't exist before. I don't think this request is critical, so it made sense to me to delay it in favor of other requests.

            { error ->
                onErrorHandler(error, false, emptyList())
            },
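
For context on the `delay = Delay.DEFAULT` change, "jittering" here just means spreading non-critical requests over a small random window instead of firing them immediately. A rough sketch of the idea follows; the enum values, delay range, and dispatcher are illustrative assumptions, not the SDK's real `Delay` or `Dispatcher` implementation:

```kotlin
import java.util.concurrent.Executors
import java.util.concurrent.TimeUnit
import kotlin.random.Random

// Illustrative only: a delay policy with a random ("jittered") upper bound.
enum class DelaySketch(val minMillis: Long, val maxMillis: Long) {
    NONE(0, 0),
    DEFAULT(0, 5_000); // spread non-critical work over a few seconds

    fun randomDelayMillis(): Long =
        if (maxMillis == 0L) 0L else Random.nextLong(minMillis, maxMillis)
}

class JitteredDispatcherSketch {
    private val executor = Executors.newSingleThreadScheduledExecutor()

    fun enqueue(delay: DelaySketch, command: () -> Unit) {
        // Schedule the work after a random delay within the policy's range.
        val task = Runnable { command() }
        executor.schedule(task, delay.randomDelayMillis(), TimeUnit.MILLISECONDS)
    }

    fun shutdown() = executor.shutdown()
}

fun main() {
    val dispatcher = JitteredDispatcherSketch()
    // Subscriber-attribute posts are not latency critical, so they take the jittered delay,
    // leaving immediate capacity for requests the app is actually waiting on.
    dispatcher.enqueue(DelaySketch.DEFAULT) { println("posting subscriber attributes") }
    dispatcher.enqueue(DelaySketch.NONE) { println("fetching offerings right away") }
    Thread.sleep(6_000)
    dispatcher.shutdown()
}
```
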
@@ -34,6 +34,7 @@ import io.mockk.every
 import io.mockk.mockk
 import io.mockk.mockkObject
 import io.mockk.slot
+import io.mockk.spyk
 import io.mockk.unmockkObject
 import io.mockk.verify
 import org.assertj.core.api.Assertions.assertThat
@@ -87,7 +88,7 @@ class BackendTest {
        every { diagnosticsURL } returns mockDiagnosticsBaseURL
        every { customEntitlementComputation } returns false
    }
-   private val dispatcher = SyncDispatcher()
+   private val dispatcher = spyk(SyncDispatcher())
    private val backendHelper = BackendHelper(API_KEY, dispatcher, mockAppConfig, mockClient)
    private var backend: Backend = Backend(
        mockAppConfig,
@@ -96,15 +97,15 @@ class BackendTest {
        mockClient,
        backendHelper
    )
-   private val asyncDispatcher = Dispatcher(
+   private val asyncDispatcher = spyk(Dispatcher(
        ThreadPoolExecutor(
            1,
            2,
            0,
            TimeUnit.MILLISECONDS,
            LinkedBlockingQueue()
        )
-   )
+   ))
    private var asyncBackendHelper: BackendHelper = BackendHelper(API_KEY, asyncDispatcher, mockAppConfig, mockClient)
    private var asyncBackend: Backend = Backend(
        mockAppConfig,
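
The `spyk(...)` wrappers matter because these tests need the real dispatchers to keep executing work while still allowing verification of how requests were enqueued. A small standalone illustration of that MockK pattern, where `SimpleDispatcher` is a stand-in rather than the SDK's dispatcher:

```kotlin
import io.mockk.spyk
import io.mockk.verify

// Stand-in class: runs work synchronously, similar in spirit to a test dispatcher.
class SimpleDispatcher {
    fun enqueue(command: () -> Unit) = command()
}

fun main() {
    val dispatcher = spyk(SimpleDispatcher())

    dispatcher.enqueue { println("work still runs for real") } // the spy keeps real behavior

    // ...and the interaction can still be verified afterwards.
    verify(exactly = 1) { dispatcher.enqueue(any()) }
}
```
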
@@ -335,6 +336,60 @@ class BackendTest {
        },
    }

+   @Test
+   fun `given getCustomerInfo call on foreground, then one in background, only one request without delay is triggered`() {

Review comment: We have to do this because it'd be tricky to cancel the existing request for the background one, right?

Reply: Well, in this case I think it makes sense to reuse the existing callback cache, since we would get the result earlier, without the jittering. On the opposite side (first request in background, then one in foreground), ideally we would cancel the background request, but cancelling existing requests would be very tricky, yeah. However, since I think this case shouldn't be that common, I think it's better to just leave the request even if it's duplicated.

+       mockResponse(
+           Endpoint.GetCustomerInfo(appUserID),
+           null,
+           200,
+           null,
+           null,
+           true
+       )
+       val lock = CountDownLatch(2)
+       asyncBackend.getCustomerInfo(appUserID, appInBackground = false, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveCustomerInfoErrorHandler)
+       asyncBackend.getCustomerInfo(appUserID, appInBackground = true, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveCustomerInfoErrorHandler)
+       lock.await(defaultTimeout, TimeUnit.MILLISECONDS)
+       assertThat(lock.count).isEqualTo(0)
+       verify(exactly = 1) {
+           asyncDispatcher.enqueue(any(), Delay.NONE)
+       }
+   }
+
+   @Test
+   fun `given getCustomerInfo call on background, then one in foreground, both are executed`() {
+       mockResponse(
+           Endpoint.GetCustomerInfo(appUserID),
+           null,
+           200,
+           null,
+           null,
+           true
+       )
+       val lock = CountDownLatch(2)
+       asyncBackend.getCustomerInfo(appUserID, appInBackground = true, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveCustomerInfoErrorHandler)
+       asyncBackend.getCustomerInfo(appUserID, appInBackground = false, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveCustomerInfoErrorHandler)
+       lock.await(defaultTimeout, TimeUnit.MILLISECONDS)
+       assertThat(lock.count).isEqualTo(0)
+       verify(exactly = 2) {
+           mockClient.performRequest(
+               mockBaseURL,
+               Endpoint.GetCustomerInfo(appUserID),
+               body = null,
+               postFieldsToSign = null,
+               any()
+           )
+       }
+   }
+
    @Test
    fun `customer info call is enqueued with delay if on background`() {
        dispatcher.calledDelay = null
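
The two tests above pin down the asymmetry discussed in the thread: a background call can piggyback on an in-flight foreground request, but a foreground call does not wait behind an already-queued, jittered background request. A toy model of just that decision (made-up names, and without real result delivery) might look like this:

```kotlin
// Assumption-labeled sketch of the decision only; not the SDK's actual Backend logic.
class InFlightTracker<K> {
    private data class State(var callbackCount: Int = 0, var startedWithoutDelay: Boolean = false)

    private val inFlight = mutableMapOf<K, State>()

    /** Returns true when the caller should start a new network request for this key. */
    @Synchronized
    fun shouldStartRequest(key: K, appInBackground: Boolean): Boolean {
        val state = inFlight.getOrPut(key) { State() }
        state.callbackCount++
        val isFirstCall = state.callbackCount == 1
        // A foreground caller does not wait behind a jittered background request,
        // because cancelling and re-enqueueing the queued request would be tricky.
        val mustBypassDelayedRequest = !appInBackground && !state.startedWithoutDelay
        if (!appInBackground) state.startedWithoutDelay = true
        return isFirstCall || mustBypassDelayedRequest
    }
}

fun main() {
    val tracker = InFlightTracker<String>()

    // Foreground first, then background: only the first call hits the network.
    println(tracker.shouldStartRequest("customer_info", appInBackground = false)) // true
    println(tracker.shouldStartRequest("customer_info", appInBackground = true))  // false

    // Background first, then foreground: both calls hit the network.
    println(tracker.shouldStartRequest("offerings", appInBackground = true))  // true
    println(tracker.shouldStartRequest("offerings", appInBackground = false)) // true
}
```
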
@@ -1322,6 +1377,60 @@ class BackendTest {
        },
    }

+   @Test
+   fun `given getOfferings call on foreground, then one in background, only one request without delay is triggered`() {
+       mockResponse(
+           Endpoint.GetOfferings(appUserID),
+           null,
+           200,
+           null,
+           noOfferingsResponse,
+           true
+       )
+       val lock = CountDownLatch(2)
+       asyncBackend.getOfferings(appUserID, appInBackground = false, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveOfferingsErrorHandler)
+       asyncBackend.getOfferings(appUserID, appInBackground = true, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveOfferingsErrorHandler)
+       lock.await(defaultTimeout, TimeUnit.MILLISECONDS)
+       assertThat(lock.count).isEqualTo(0)
+       verify(exactly = 1) {
+           asyncDispatcher.enqueue(any(), Delay.NONE)
+       }
+   }
+
+   @Test
+   fun `given getOfferings call on background, then one in foreground, both are executed`() {
+       mockResponse(
+           Endpoint.GetOfferings(appUserID),
+           null,
+           200,
+           null,
+           noOfferingsResponse,
+           true
+       )
+       val lock = CountDownLatch(2)
+       asyncBackend.getOfferings(appUserID, appInBackground = true, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveOfferingsErrorHandler)
+       asyncBackend.getOfferings(appUserID, appInBackground = false, onSuccess = {
+           lock.countDown()
+       }, onError = onReceiveOfferingsErrorHandler)
+       lock.await(defaultTimeout, TimeUnit.MILLISECONDS)
+       assertThat(lock.count).isEqualTo(0)
+       verify(exactly = 2) {
+           mockClient.performRequest(
+               mockBaseURL,
+               Endpoint.GetOfferings(appUserID),
+               body = null,
+               postFieldsToSign = null,
+               any()
+           )
+       }
+   }
+
    @Test
    fun `offerings call is enqueued with delay if on background`() {
        mockResponse(Endpoint.GetOfferings(appUserID), null, 200, null, noOfferingsResponse)

Review comment: 👏