Batching set requests #68

Closed · 16 tasks
vittoriom opened this issue Sep 22, 2015 · 12 comments

@vittoriom
Contributor

In preparation for #65, #66, and other web CacheLevels to include in Carlos, a batch function (together with a protocol extension) should be added that takes a non-negative integer N and builds a wrapper that batches all set requests before sending them to the underlying cache, while passing clear and onMemoryWarning calls through. get requests will go through a soft internal cache and, in case of failure, will be dispatched to the underlying cache. Calling onMemoryWarning will also force-flush the soft cache. A sketch of what such a wrapper could look like follows the task list below.
The soft cache will be implemented as a memory cache. Biggest open question so far: how to properly handle app termination across the supported targets (iOS, Mac OS X, watchOS, tvOS?).

  • Investigate whether it's possible to react to app termination on iOS/tvOS/Mac OS X/watchOS 2
  • Investigate whether it's possible to have a soft cache of method calls to the decorated cache
  • Investigate whether it's possible to have a soft cache for the values to set on the decorated cache
  • Basic API
  • Implement get: check the soft cache (?) or forward the call to the decorated cache
  • Implement set: enqueue the call (and save the value in the soft cache?). When the queue reaches the configured number of elements, flush it (preserving the order of calls).
  • Consider whether it's possible to group calls by key and only call the latest one for the same key when flushing the queue
  • Implement clear: forward the call to the decorated cache, clear the soft cache
  • Implement onMemoryWarning: forward the call to the decorated cache, flush the soft cache
  • Write tests
  • Write code documentation
  • Add sample code
  • Update README.md
  • Update Wiki
  • Update CHANGELOG.md
  • Update Github milestone
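
To make the intended behavior more concrete, below is a minimal sketch of such a wrapper (as referenced above). It is written against a simplified stand-in protocol, not the actual Carlos CacheLevel, and the names (SimpleCacheLevel, BatchedCache, flushThreshold) and the (value?, error?) get callback are assumptions for illustration only:

// Illustrative sketch only: a simplified stand-in for the CacheLevel protocol
// and a hypothetical BatchedCache wrapper. Names, the (value?, error?) get
// callback, and the flushThreshold parameter are assumptions, not the actual
// Carlos API.
protocol SimpleCacheLevel {
  typealias KeyType
  typealias OutputType

  func get(key: KeyType, completion: (OutputType?, ErrorType?) -> Void)
  func set(value: OutputType, forKey key: KeyType)
  func clear()
  func onMemoryWarning()
}

class BatchedCache<C: SimpleCacheLevel where C.KeyType: Hashable> {
  private let decorated: C
  private let flushThreshold: Int
  // Soft cache: latest pending value per key, plus insertion order for flushing.
  private var pending: [C.KeyType: C.OutputType] = [:]
  private var order: [C.KeyType] = []

  init(decorated: C, flushThreshold: Int) {
    self.decorated = decorated
    self.flushThreshold = flushThreshold
  }

  func get(key: C.KeyType, completion: (C.OutputType?, ErrorType?) -> Void) {
    // Check the soft cache first, then fall back to the decorated cache.
    if let value = pending[key] {
      completion(value, nil)
    } else {
      decorated.get(key, completion: completion)
    }
  }

  func set(value: C.OutputType, forKey key: C.KeyType) {
    // Group by key: only the latest value per key survives until the flush.
    if pending[key] == nil {
      order.append(key)
    }
    pending[key] = value
    if order.count >= flushThreshold {
      flush()
    }
  }

  func flush() {
    for key in order {
      if let value = pending[key] {
        decorated.set(value, forKey: key)
      }
    }
    pending.removeAll()
    order.removeAll()
  }

  func clear() {
    // Pass through, dropping anything still pending.
    pending.removeAll()
    order.removeAll()
    decorated.clear()
  }

  func onMemoryWarning() {
    // Force-flush the soft cache, then pass the call through.
    flush()
    decorated.onMemoryWarning()
  }
}

Grouping the pending writes by key means only the latest value per key reaches the decorated cache when the queue flushes, which is the "group calls by key" item above.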
@vittoriom vittoriom modified the milestones: Future, 0.4.1 Sep 22, 2015
@vittoriom vittoriom modified the milestones: Future, 0.5 Oct 10, 2015
@MaxDesiatov

Hello there!

Is there a specific reason that only batch set is mentioned? Currently the biggest drawback of using Carlos for me is that all get calls are asynchronous and there is no way to wait synchronously for a get call to finish.

This is exacerbated in cases where I need to read up to a few hundred uniform keys that were previously stored and collect the values in an array. With the current API I don't see a clean way to do this other than swapping Carlos for some other implementation that supports it.

Can you please clarify whether this is possible now or will be possible in the future, and if so, what the best way to implement it would be?

Thanks.

@vittoriom
Contributor Author

Hi @explicitcall, thanks for reaching out!

I have to admit I didn't think about your use case when listing the requirements for this issue.
Your use case makes sense, and I'm thinking about a way to add a batch-get API to Carlos in the next release.

Can you elaborate a bit more on your use case, so I have a clearer picture of it when designing the API? Otherwise, one option would be to store the array under a single key instead of using multiple "uniform keys" as you described.

For example, if you could pass an array of keys to this API, you would expect the success callback only when all of them can be fetched, and the failure callback as soon as even one of them cannot. This may or may not be the intended behavior, but it has to be specified to avoid an undefined outcome.

Thanks

@vittoriom
Contributor Author

FYI, I created #119 for follow-ups and better specification

@vittoriom
Contributor Author

@explicitcall I pushed a branch called 119_BatchGet with a first draft of a possible implementation.
You can now call batchGet on any CacheLevel, passing the list of keys you want to fetch; onSuccess will only be called when all of them succeed, while onFailure is called as soon as one of them fails.

If you're satisfied with the solution, I'll proceed with writing tests and proper documentation in the README.md as well.

Thanks!
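
For reference, a hypothetical usage example; the exact signature on the 119_BatchGet branch isn't shown here, so the array-of-keys parameter and the future-like return value with onSuccess/onFailure are assumptions, and cache stands for any CacheLevel instance:

// Hypothetical usage of batchGet: the signature and return type are assumed.
let keys = ["record-1", "record-2", "record-3"]

cache.batchGet(keys)
  .onSuccess { values in
    // Called only when every key was fetched successfully.
    print("Fetched \(values.count) values")
  }
  .onFailure { error in
    // Called as soon as any single key fails.
    print("Batch get failed: \(error)")
  }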

@MaxDesiatov

Thank you very much for a quick reply @vittoriom!

My use case is fetching and uploading a lot of records (hundreds, potentially thousands) from a server API, where most of them have references to each other. To establish valid references, I need to store the ids of those records; since this process can span multiple API calls and the app has to be resilient, most of the ids are stored in the cache gradually, as soon as they come in. This way the app can resume the process at any point if it crashes or is killed by the user.

I think the ideal Carlos API would have batchGetAll and batchGetAny methods, where the former succeeds only when all records are fetched successfully and the latter succeeds even when some of the fetches fail.
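
Just to make the difference in semantics concrete, here is a rough sketch of how a batchGetAny-style helper could behave, built on individual asynchronous gets. The (value?, error?) callback shape, the dispatch group, and all names are illustrative assumptions, not Carlos API:

import Foundation

// Collect whichever values succeed, ignoring failed keys.
// `get` is any asynchronous lookup reporting completion via (value?, error?).
func batchGetAny(keys: [String],
                 get: (String, (String?, ErrorType?) -> Void) -> Void,
                 completion: ([String: String]) -> Void) {
  let group = dispatch_group_create()
  let lock = NSLock()
  var results: [String: String] = [:]

  for key in keys {
    dispatch_group_enter(group)
    get(key) { value, _ in
      if let value = value {
        // Callbacks may arrive on arbitrary queues, hence the lock.
        lock.lock()
        results[key] = value
        lock.unlock()
      }
      dispatch_group_leave(group)
    }
  }

  // Deliver the partial (or complete) result once every key has reported back.
  dispatch_group_notify(group, dispatch_get_main_queue()) {
    completion(results)
  }
}

A batchGetAll variant would have the same shape, except that the first error would short-circuit and fail the whole batch.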

@MaxDesiatov

In the meantime the simple solution that I found is:

extension BasicCache {
  // Blocks the calling thread until the asynchronous get finishes.
  func getSync(k: KeyType) -> OutputType? {
    var result: OutputType? = nil
    let semaphore = dispatch_semaphore_create(0)
    get(k)
      .onSuccess {
        result = $0
        dispatch_semaphore_signal(semaphore)
      }
      .onFailure { _ in
        // Leave result nil on failure, but still unblock the caller.
        dispatch_semaphore_signal(semaphore)
      }

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    return result
  }
}

This isn't ideal when it runs on the main thread, but running the whole series of get calls on a background queue does the job.
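
Something along these lines, for instance; keys, cache, and handleFetchedValues are placeholder names:

// Run the blocking getSync calls off the main thread so the semaphore never
// blocks the UI; collect whatever could be fetched and hop back to main.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)) {
  let values = keys.flatMap { cache.getSync($0) }
  dispatch_async(dispatch_get_main_queue()) {
    handleFetchedValues(values)
  }
}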

@MaxDesiatov

An exception-aware alternative is

  // Like getSync, but surfaces the failure as a thrown error instead of returning nil.
  func getSyncEx(k: KeyType) throws -> OutputType {
    var result: OutputType? = nil
    var error: ErrorType? = nil
    let semaphore = dispatch_semaphore_create(0)
    get(k)
      .onSuccess {
        result = $0
        dispatch_semaphore_signal(semaphore)
      }
      .onFailure {
        error = $0
        dispatch_semaphore_signal(semaphore)
      }

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)

    if let e = error {
      throw e
    }

    return result!
  }

The name is different because, as it turns out, when overloading with the same function name the Swift compiler isn't able to pick the correct implementation without explicit type annotations at the call site.

@vittoriom
Contributor Author

Hi @explicitcall, I actually like the idea of batchGetAll and batchGetAny! My current implementation on the branch I mentioned before is basically batchGetAll; I'll also include a batchGetAny before merging into master.

My only concern is: do you really need a sync API, or would it be fine for you if the API stayed async but only called the completion handler once all the batched keys are done?

@MaxDesiatov

@vittoriom an async API is totally fine for me, and I think it should stay that way. It's easy to convert async to sync (with dispatch_semaphore), while the reverse is not true.

Thanks a lot for your help!

@vittoriom
Contributor Author

Regarding your implementations, they look fine, although I'm not sure about including sync APIs in Carlos. Also, as a personal recommendation, I would extend CacheLevel instead of BasicCache and use onCompletion instead of onSuccess and onFailure, since that way you would also catch the case where the request is canceled. In the onCompletion handler you get the possible result (optional) and the possible error (optional).
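
For completeness, here is roughly what the earlier snippet could look like with those suggestions applied. The (value?, error?) shape of the onCompletion handler is assumed from the description above, so the real signature may differ slightly:

extension CacheLevel {
  // Blocks until the async get completes; returns nil on failure or cancellation.
  func getSync(key: KeyType) -> OutputType? {
    var result: OutputType? = nil
    let semaphore = dispatch_semaphore_create(0)

    get(key).onCompletion { value, _ in
      result = value
      dispatch_semaphore_signal(semaphore)
    }

    dispatch_semaphore_wait(semaphore, DISPATCH_TIME_FOREVER)
    return result
  }
}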

@vittoriom
Contributor Author

All right, then I'll continue implementing batchGetAll and batchGetAny and will include them in the next Carlos release. Thank you for your input!

@ivanlisovyi
Contributor

We aren't yet sure what we're going to do with Carlos and where we see it going in the future, so to avoid spreading misinformation about its future development I'm closing this issue. It also hasn't been updated in four years, so I would consider it stale.
