Batching set requests #68
Comments
Hello there! Is there a specific reason that only batch `set` requests are covered, and not batch `get`? This is exacerbated in cases where I need to read up to a few hundred uniform keys that were previously stored and collect the values in an array. With the current API I don't see a nice and clean way to do this, short of swapping Carlos for some other implementation that supports this. Can you please clarify whether this is at all possible, or will be possible in the future, and if so, what would be the best way to implement it? Thanks.
Hi @explicitcall, thanks for reaching out! I have to admit I didn't think about your use case when listing the requirements for this issue. Can you elaborate a bit more on your use case, just so I have it better pictured when I design the API? Otherwise, you could consider storing the array under a single key instead of using multiple "uniform keys" as you described. For example, if you could pass an array of keys to this API, you should expect to get the success callback only when all of them can be fetched, and the failure callback if even just one of them cannot. This may or may not be the intended behavior, but one has to be specified to avoid an undefined outcome. Thanks
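For illustration, those all-or-nothing semantics could be sketched on top of the per-key `get` used elsewhere in this thread; the `batchGetAll` name, the completion-handler shape, and the use of `NSLock`/`dispatch_group` are assumptions, not an existing Carlos API:

```swift
import Foundation
import Carlos

extension BasicCache {
  // Hypothetical all-or-nothing batch get: the completion receives all
  // values (in key order) only if every key succeeds, or nil if any
  // single key fails.
  func batchGetAll(keys: [KeyType], completion: [OutputType]? -> Void) {
    var results = [OutputType?](count: keys.count, repeatedValue: nil)
    var failed = false
    let lock = NSLock() // callbacks may fire on different threads
    let group = dispatch_group_create()

    for (index, key) in keys.enumerate() {
      dispatch_group_enter(group)
      get(key)
        .onSuccess { value in
          lock.lock(); results[index] = value; lock.unlock()
          dispatch_group_leave(group)
        }
        .onFailure { _ in
          lock.lock(); failed = true; lock.unlock()
          dispatch_group_leave(group)
        }
    }

    dispatch_group_notify(group, dispatch_get_main_queue()) {
      completion(failed ? nil : results.map { $0! })
    }
  }
}
```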
FYI, I created #119 for follow-ups and better specification
@explicitcall I pushed a branch with a first implementation. If you're satisfied with the solution, I'll proceed with writing tests and proper documentation. Thanks!
Thank you very much for the quick reply @vittoriom! My use case is fetching and uploading a lot of records (hundreds, potentially thousands) from a server API, where most of them have references to each other. To establish valid references, I need to store the ids of those records, and since this process can span multiple API calls and the app has to be resilient, most of the ids are stored in the cache gradually, as soon as they come in. This way the app can resume the process at any point if it crashed or was killed by the user. I think the ideal Carlos API would be a batched `get` that accepts an array of keys and delivers all the values in a single callback.
In the meantime, the simple solution that I found is:

```swift
extension BasicCache {
  // Blocks the calling thread until the async get completes;
  // returns nil if the fetch fails.
  func getSync(k: KeyType) -> OutputType? {
    var result: OutputType? = nil
    let semaphore = dispatch_semaphore_create(0)
    get(k)
      .onSuccess {
        result = $0
        dispatch_semaphore_signal(semaphore)
      }
      .onFailure { _ in
        dispatch_semaphore_signal(semaphore)
      }
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    return result
  }
}
```

This isn't ideal when it runs on the main thread, but running the whole series of `get`s on a background queue avoids blocking it.
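For example, the whole series can be pushed onto a background queue so the semaphore never blocks the main thread (`cache`, `ids`, and `handleLoaded` are hypothetical stand-ins):

```swift
let background = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(background) {
  // flatMap drops the nil results of failed fetches.
  let values = ids.flatMap { cache.getSync($0) }
  dispatch_async(dispatch_get_main_queue()) {
    handleLoaded(values) // back on the main thread with the collected values
  }
}
```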
An exception-aware alternative is:

```swift
extension BasicCache {
  // Like getSync, but throws the failure error instead of
  // swallowing it into a nil result.
  func getSyncEx(k: KeyType) throws -> OutputType {
    var result: OutputType? = nil
    var error: ErrorType? = nil
    let semaphore = dispatch_semaphore_create(0)
    get(k)
      .onSuccess {
        result = $0
        dispatch_semaphore_signal(semaphore)
      }
      .onFailure {
        error = $0
        dispatch_semaphore_signal(semaphore)
      }
    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    if let e = error {
      throw e
    }
    return result!
  }
}
```

The name is different because, as it turns out, with two overloads of the same function name the Swift compiler isn't smart enough to pick the correct implementation without explicit type annotations at the call site.
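A call site would then look like this (with hypothetical `cache` and `key` values):

```swift
do {
  let value = try cache.getSyncEx(key)
  print("fetched \(value)")
} catch {
  print("fetch failed: \(error)")
}
```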
Hi @explicitcall, I actually like the idea of a batched `get`. My only concern is: do you really need a sync API, or would it be fine for you if the API stayed async but only called the completion handler once all the batched keys are done?
@vittoriom an async API is totally fine for me and I think it should stay that way. It's easy to convert async to sync (with dispatch_semaphore), while the reverse is not true. Thanks a lot for your help!
Regarding your implementations, they look fine, although I'm not sure about the idea of including sync APIs in Carlos. Also, as a personal recommendation, I would extend `CacheLevel` rather than `BasicCache`, so the additions apply to every cache level.
All right, then I'll continue implementing it as an async batched `get`.
We aren't yet sure what we're going to do with Carlos and where we see it in the future, so to avoid spreading misinformation about its future development I'm closing this issue. It also hasn't been updated in four years, so I would consider it stale.
In preparation for #65, #66 and other web CacheLevels to include in Carlos, a function `batch`, together with a protocol extension, should be added that takes an integer N from 0 to +inf and builds a wrapper that batches all `set` requests before sending them to the underlying cache, and passes through `clear` and `onMemoryWarning` calls. `get` requests will go through a soft internal cache and, in case of failure, will be dispatched to the underlying cache. Calling `onMemoryWarning` will also force-flush the soft cache. The soft cache will be implemented as a memory cache.

Biggest question so far: how to properly handle the app closing on multiple targets (iOS, Mac OS X, watchOS, tvOS?)
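A minimal sketch of the described wrapper, using a simplified, hypothetical `SimpleCache` protocol in place of Carlos's real generic, async `CacheLevel`:

```swift
import Foundation

// Simplified stand-in for Carlos's cache interface, just to keep the
// sketch self-contained; the real CacheLevel is generic and async.
protocol SimpleCache {
  func get(key: String) -> NSData?
  func set(value: NSData, key: String)
  func clear()
  func onMemoryWarning()
}

// Wrapper that batches up to N set requests before flushing them to
// the underlying cache; get consults the soft (in-memory) cache first
// and falls back to the underlying cache on a miss.
class BatchingCache: SimpleCache {
  private let internalCache: SimpleCache
  private let batchSize: Int
  private var pendingSets: [(key: String, value: NSData)] = []
  private var softCache: [String: NSData] = [:]

  init(internalCache: SimpleCache, batchSize: Int) {
    self.internalCache = internalCache
    self.batchSize = batchSize
  }

  func get(key: String) -> NSData? {
    return softCache[key] ?? internalCache.get(key)
  }

  func set(value: NSData, key: String) {
    softCache[key] = value
    pendingSets.append((key: key, value: value))
    if pendingSets.count >= batchSize {
      flushPendingSets()
    }
  }

  // clear passes through after dropping everything held locally.
  func clear() {
    pendingSets.removeAll()
    softCache.removeAll()
    internalCache.clear()
  }

  // A memory warning force-flushes the pending sets, empties the soft
  // cache, and passes the warning through, as the description requires.
  func onMemoryWarning() {
    flushPendingSets()
    softCache.removeAll()
    internalCache.onMemoryWarning()
  }

  private func flushPendingSets() {
    for (key, value) in pendingSets {
      internalCache.set(value, key: key)
    }
    pendingSets.removeAll()
  }
}
```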