Preload enabled features for an Actor - ActiveRecord #190

Closed
mscoutermarsh opened this Issue Nov 8, 2016 · 7 comments

mscoutermarsh commented Nov 8, 2016

Hi,

I'd love a way to preload enabled features for an actor. For Product Hunt, we check several features in a single request for current_user. With ActiveRecord, this adds up to quite a few queries. I'd love to get it down to one.

From browsing source I don't believe this is currently available.

@jnunemaker If you're 👍 on this idea and have any thoughts on the best way to implement it, I'd be happy to work on it.

AlexWheeler (Collaborator) commented Nov 10, 2016

Have you checked out any of the caching options?
https://github.com/jnunemaker/flipper/blob/master/docs/Optimization.md

This would be cool and I could see it being really useful, but IMO it would have to be custom-built for your app, or built as a separate gem that depends on Flipper so others could use it. Flipper doesn't add anything to your actors (i.e. your User model) and only expects them to respond to flipper_id. Flipper adapters all need to satisfy the same API, and preloading is an ActiveRecord-specific feature.

You might be able to get it working with scopes, since they can be preloaded; you'd have to chain a few where clauses as you go down the gate hierarchy, checking whether the actor is enabled. It would be hard to preload group gate features, though, since Flipper needs to call a Ruby block to check whether the actor is enabled.

I haven't tried this, but something like the following might let you get all the features the actor is enabled for through actor gates. You'd then make the query a scope so you can preload it. It might give you an idea of how to approach this.

class Feature < ::ActiveRecord::Base
  has_many :gates, foreign_key: "feature_key", primary_key: "key"
  self.table_name = "flipper_features"
end

# Private: Do not use outside of this adapter.
class Gate < ::ActiveRecord::Base
  belongs_to :feature, foreign_key: "feature_key", primary_key: "key"
  self.table_name = "flipper_gates"
end

Flipper::Adapters::ActiveRecord::Feature.joins(:gates).where("flipper_gates.key = ? AND flipper_gates.value = ?", "actors", self.flipper_id)
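One hypothetical way to use the query above (untested, and `PreloadedFeatures` is an illustrative name, not part of Flipper): run it once per request and answer later checks from memory. This only covers actor gates, per the group-gate caveat above.

```ruby
require "set"

# Hypothetical helper (not part of Flipper): hold the feature keys the
# actor is enabled for and answer checks without further queries.
class PreloadedFeatures
  def initialize(enabled_keys)
    @enabled = enabled_keys.map(&:to_s).to_set
  end

  def enabled?(feature_name)
    @enabled.include?(feature_name.to_s)
  end
end

# In a before filter you might do something like (untested):
#
#   keys = Flipper::Adapters::ActiveRecord::Feature
#            .joins(:gates)
#            .where("flipper_gates.key = ? AND flipper_gates.value = ?",
#                   "actors", current_user.flipper_id)
#            .pluck(:key)
#   preloaded = PreloadedFeatures.new(keys)
#
# ...and later: preloaded.enabled?(:some_feature)
```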

jnunemaker (Owner) commented Nov 10, 2016

@mscoutermarsh I definitely have some ideas on this. I'll try to write them up later today.

jnunemaker (Owner) commented Nov 12, 2016

I think this could be neat/doable, but if you have Redis/memcached, that usually makes Flipper snappy enough that preloading doesn't matter. At GitHub, we have a cache adapter, like the included Dalli adapter, that we wrap our MySQL adapter with. The majority of features are then loaded straight from memcached (granted, one at a time instead of efficiently).

That said, I'm all about bulk/batch so I like this idea. It has been in the back of my head for quite some time. If one knows the features used on a page, why not make it so the data layer can load them more efficiently, even when cached.

My original thought was to add an adapter method, perhaps get_multi, that supports getting many features at once. Adapters for datastores that don't support batching could just do N gets. The memoizing adapter could then memoize get_multi and everything would just work. In a before filter, you could call flipper.adapter.get_multi(...) with the features you want to preload. Perhaps we'd even add preload to the DSL, which would just wrap the adapter.get_multi call?
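A rough, runnable sketch of that shape (names like MemoryAdapter and get_multi are illustrative here, not Flipper's actual API): the base behavior is N single gets, and a batching-capable adapter would override it.

```ruby
# Stand-in adapter backed by a Hash; a real adapter would hit a datastore.
class MemoryAdapter
  def initialize(data = {})
    @data = data
  end

  def get(feature_key)
    @data[feature_key]
  end

  # Fallback get_multi: just N individual gets. A memcached/Redis-backed
  # adapter would override this with one multi-get/pipelined call, and a
  # memoizing wrapper could cache the whole result for the request.
  def get_multi(feature_keys)
    feature_keys.each_with_object({}) do |key, result|
      result[key] = get(key)
    end
  end
end

adapter = MemoryAdapter.new("search" => { "boolean" => "true" }, "stats" => nil)
adapter.get_multi(["search", "stats"])
# => {"search"=>{"boolean"=>"true"}, "stats"=>nil}
```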

I think this might be a lot more clear/easy once I finish up the v2 adapters, but I would be open to a PR now for sure. I can do any heavy lifting required to backport changes into the v2 branch.

@mscoutermarsh any of that sound like what you were thinking or do you have any other/better ideas?

mscoutermarsh commented Nov 13, 2016

@AlexWheeler @jnunemaker Thanks! Very helpful. 😄

I'm going to take a look at caching first. I think we'll see a big improvement there.

I like the approach of using get_multi. We currently use that approach in many places in our app and it's worked well. I think it will work well here too. I'll dig into that more after adding caching to our app.

jnunemaker (Owner) commented Nov 14, 2016

Cool. Sounds good. Let us know if we can help with the caching. Should be straightforward, but I have not used the dalli adapter yet, so who knows. :) I'm going to close this, but if you still feel it is needed after adding caching, let me know.

@jnunemaker jnunemaker closed this Nov 14, 2016

gshutler (Contributor) commented Nov 17, 2016

We're just looking at this ourselves. With a few flags, the overhead during a request was negligible at about 1 ms apiece, but we're going through a period where an individual request may need to check 20 or more, and that adds enough to the response time that it's been worth looking at.

We're using the Redis adapter, so what we're looking at is calling features and then doing a set of pipelined calls to get the state of them all, resulting in two roundtrips rather than N. I think that would fit within the idea of get_multi.
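A hedged sketch of that two-roundtrip shape using the redis gem directly. The key layout here (a "flipper_features" set plus one Redis hash per feature) is an assumption about the Redis adapter's storage, so verify it against your adapter version before relying on it.

```ruby
require "redis"

redis = Redis.new

# Roundtrip 1: the set of known feature keys.
feature_keys = redis.smembers("flipper_features")

# Roundtrip 2: every feature's gate values, fetched in one pipeline.
gate_docs = redis.pipelined do |pipeline|
  feature_keys.each { |key| pipeline.hgetall(key) }
end

# Pipelined results come back in call order, so keys and docs line up.
states = feature_keys.zip(gate_docs).to_h
```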

Would you like a PR with what we end up with, or are you working on it already?

jnunemaker (Owner) commented Nov 17, 2016

@gshutler I would definitely be open to looking at a PR. I always find PRs more helpful than general discussion, since seeing a solution makes things concrete. I'm not currently working on preloading because at GitHub we've found the memcached route fast enough for us, even with many checks per page.
