
All The Get Alls #298

Merged 23 commits into master from get-all on Nov 4, 2017

In testing preload_all with Flipper::Cloud, I found that the memoizing and caching adapters weren't working correctly with get_all (which preload_all uses). Several things were subtly wrong, causing extra net/http calls. I added several specs that verify the number of network calls and tested externally with a few apps I control. This also adds get_all to all the adapters flipper supports out of the box. This ensures that, when possible, preload_all and get_all do the fewest network calls.
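To make the counting concrete, here is an illustrative sketch (not Flipper's real classes; `CountingAdapter` and `Memoizing` are invented names): a memoizing wrapper that answers repeated reads from a single get_all call, which is the behavior the new specs assert by counting backend calls.

```ruby
# A fake backend adapter that counts how many times get_all hits it.
class CountingAdapter
  attr_reader :calls

  def initialize(features)
    @features = features
    @calls = 0
  end

  def get_all
    @calls += 1
    @features.dup
  end
end

# A memoizing wrapper: the first get_all hits the backend,
# later reads (including per-feature gets) reuse the memoized hash.
class Memoizing
  def initialize(adapter)
    @adapter = adapter
  end

  def get_all
    @memo ||= @adapter.get_all
  end

  def get(key)
    get_all[key]
  end
end

backend = CountingAdapter.new("search" => { boolean: true })
adapter = Memoizing.new(backend)
adapter.get_all
adapter.get("search")
adapter.get("missing")
backend.calls # => 1
```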

There is some funny business with caching and get_all. The trick with get_all and caching is that you don't know the feature keys up front, so you can't do a get_multi for all features. This left two obvious options:

  1. wrap the entire get_all call with its own cache fetch block.
  2. add an extra network call to avoid calling get_all more than necessary when using a caching adapter.

Option 1 meant get_all would be 1 network call which was dope. The downside was it would not share the primed caches from get and get_multi. Additionally, it would mean that all features would need to fit in a single cache value (usually 1MB with memcached). Probably fine, but I didn't want to make assumptions.
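Option 1 would look roughly like the sketch below (assumed names and a toy cache, not Flipper's real adapter code): the whole get_all result lives under a single cache key, so a warm call is one cache read, but get and get_multi can't reuse it and the one value has to hold every feature.

```ruby
# Toy cache that counts reads; fetch returns the cached value or
# computes and stores it from the block (a stand-in for the backend call).
class TinyCache
  attr_reader :reads

  def initialize
    @store = {}
    @reads = 0
  end

  def fetch(key)
    @reads += 1
    @store.key?(key) ? @store[key] : (@store[key] = yield)
  end
end

cache = TinyCache.new

# Option 1: wrap the entire get_all result in one cache fetch.
get_all = lambda do
  cache.fetch("flipper/get_all") { { "search" => { boolean: true } } }
end

get_all.call # cold: one cache read, backend block runs
get_all.call # warm: one cache read, backend block skipped
cache.reads # => 2
```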

Option 2 was to do an unless_exist write of a get_all marker key. If the key is present, then we are moderately sure that both the set of feature keys and each feature are cached, so the caching adapters do at least 2 network calls (get the set of feature keys, then get_multi all the individual feature cache values). The upside is that this shares the per-feature cache values with get and get_multi. Another upside is that, because the cache is per feature, having a lot of features won't overflow a single value like option 1. The downside is a minimum of 3 network calls. I'm assuming that cache calls are super fast, so 3 small ones that share values between many adapter methods are better than potentially 1 big one that doesn't share.

If it doesn't, we lose efficiency because it will
do features + get_multi instead of a single get_all call.
This was failing before because Memoizable, and any adapter that didn't
define get_all, would do features + get_multi by default. This fixes that
by adding get_all to the AS CacheStore adapter and adding a spec that
simulates requests against the memoizer middleware when preloading all.

Likely still need to add get_all to the other caching adapters.
A bit more code but doesn't store them twice.
This prevents getting all features and then checking a feature that is
not in the set of known features from kicking off additional adapter
calls and thus additional network calls.
Ensures that they actually cache correctly and only allow the correct
number of operations through to the wrapped adapter.
had trailing quote
If not memoizing, I don't want to assume anything, so just return a
normal hash.
@jnunemaker jnunemaker self-assigned this Nov 4, 2017
Collaborator Author

Looks like Rails 3.2 failures... I'll look at them, get this merged, and release 0.11.

Collaborator Author

It seems unless_exist wasn't added until Rails 4, which is why the Rails 3 tests are failing.

xref rails/rails@b5005b6
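For anyone unfamiliar with the option: `unless_exist` on ActiveSupport::Cache writes only when the key is absent (memcached's "add" semantics). A plain-Ruby sketch of those semantics against a Hash, with an invented `write_unless_exist` helper:

```ruby
# Writes value under key only if the key is absent; returns whether it wrote.
# (Illustrates unless_exist semantics; not ActiveSupport code.)
def write_unless_exist(store, key, value)
  return false if store.key?(key)
  store[key] = value
  true
end

store = {}
write_unless_exist(store, "flipper/get_all", true)  # => true (written)
write_unless_exist(store, "flipper/get_all", false) # => false (left alone)
store["flipper/get_all"] # => true
```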

@jnunemaker jnunemaker merged commit fc11310 into master Nov 4, 2017
@jnunemaker jnunemaker deleted the get-all branch November 4, 2017 14:54