A Rails plugin for maintainable and high-efficiency caching.
Copyright 2007, 2008 Cloudburst, LLC. Licensed under the AFL 3. See the included LICENSE file. Portions copyright 2006 Chris Wanstrath and used with permission.
The public certificate for the gem is here.
Interlock is an intelligent fragment cache for Rails.
It works by making your view fragments and associated controller blocks march along together. If a fragment is fresh, the controller behavior won't run. This eliminates duplicate effort from your request cycle. Your controller blocks run so infrequently that you can use regular ActiveRecord finders and not worry about object caching at all.
Invalidations are automatically tracked based on the model lifecycle, and you can scope any block to an arbitrary level. Interlock also caches content_for calls, unlike regular Rails, and can optionally cache simple finders.
Interlock uses a tiered caching layer so that multiple lookups of a key only hit memcached once per request.
First, compile and install memcached itself. Get a memcached server running.
You also need either memcache-client or memcached:
sudo gem install memcache-client
Then, install the plugin:
script/plugin install git://github.com/fauna/interlock.git
Lastly, configure your Rails app for memcached by creating a config/memcached.yml file. The format is compatible with Cache_fu:
  defaults:
    namespace: myapp
    sessions: false
    client: memcache-client
  development:
    servers:
      - 127.0.0.1:11211 # Default host and port
  production:
    servers:
      - 10.12.128.1:11211
      - 10.12.128.2:11211
Now you're ready to go.
Note that if you have the memcached gem installed (instead of memcache-client), you can set client: memcached for better performance.
Interlock provides two similar caching methods: behavior_cache for controllers and view_cache for views. They both accept an optional list or hash of model dependencies, and an optional :tag keypair. view_cache also accepts a :ttl keypair.
The simplest usage doesn't require any parameters. In the controller:
  class ItemsController < ActionController::Base
    def slow_action
      behavior_cache do
        @items = Item.find(:all, :conditions => "be slow")
      end
    end
  end
Now, in the view, wrap the largest section of ERB you can find that uses data from @items in a view_cache block. No other part of the view can refer to @items, because @items won't get set unless the cache is stale.
  <% @title = "My Sweet Items" %>
  <% view_cache do %>
    <% @items.each do |item| %>
      <h1><%= item.name %></h1>
    <% end %>
  <% end %>
You have to do them both.
This automatically registers a caching dependency on Item for slow_action. The controller block won't run if the slow_action view fragment is fresh, and the view fragment will only get invalidated when an Item is changed.
You can use multiple instance variables in one block, of course. Just make sure the behavior_cache provides whatever the view_cache uses.
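For example, a sketch with two instance variables set in one block (the User model and dashboard action here are illustrative, not part of the plugin):

```ruby
class ItemsController < ActionController::Base
  # Controller: set every instance variable the cached fragment needs.
  def dashboard
    behavior_cache do
      @items = Item.find(:all)
      @users = User.find(:all) # hypothetical second model
    end
  end
end
```

The matching view_cache block can then read both @items and @users; neither will be set when the fragment is fresh, so nothing outside the block may touch them.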
See ActionController::Base and ActionView::Helpers::CacheHelper for more details.
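As a sketch of the optional parameters, here is a block scoped to an explicit Item dependency with a custom tag (the :detail tag is illustrative; check the API docs above for the exact dependency forms accepted):

```ruby
class ItemsController < ActionController::Base
  # Controller: depend explicitly on Item, tagged so the same action
  # can maintain more than one cached fragment.
  def show
    behavior_cache Item, :tag => :detail do
      @item = Item.find(params[:id])
    end
  end
end
```

The matching view_cache call must pass the same dependencies and tag. view_cache additionally accepts :ttl (for example, :ttl => 10.minutes) to expire the fragment on a timer regardless of invalidations.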
Interlock 1.3 adds the ability to cache simple finder lookups. Add this line in config/memcached.yml:
Now, whenever you call find, find_by_id, or find_all_by_id with a single id or an array of ids, the cache will be used. The cache key for each record invalidates when the record is saved or destroyed. Memcached's multiget mode is used for maximum performance.
If you pass any parameters other than ids, or use dynamic finders, the cache will not be used. This means that :include works as expected and does not require complicated invalidation.
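Under those rules, cache behavior looks roughly like this (a sketch; the ids and the :tags association are illustrative):

```ruby
Item.find(1)                    # cached per-record
Item.find_by_id(1)              # cached
Item.find([1, 2, 3])            # one memcached multiget
Item.find(1, :include => :tags) # extra parameters: goes to the database
Item.find_by_name("widget")     # dynamic finder: goes to the database
```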
See Interlock::Finders for more.
You will not see any actual cache reuse in development mode unless you set config.action_controller.perform_caching = true in config/environments/development.rb.
If you have custom render calls in the controller, they must be outside the behavior_cache blocks. No exceptions. For example:
  def profile
    behavior_cache do
      @items = Item.find(:all, :conditions => "be slow")
    end
    render :action => 'home'
  end
You can write custom invalidation rules if you really want to, but try hard to avoid it; it has a significant cost in long-term maintainability.
Also, Interlock obeys the ENV['RAILS_ASSET_ID'] setting, so if you need to blanket-invalidate all your caches, just change RAILS_ASSET_ID (for example, you could have it increment on every deploy).
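For example, a minimal sketch for config/environment.rb, assuming a Capistrano-style REVISION file is written at deploy time (any mechanism that changes the value on each deploy works just as well):

```ruby
# Use the deployed revision as a blanket cache-invalidation token.
revision_file = File.expand_path('../REVISION', __FILE__)
if File.exist?(revision_file)
  ENV['RAILS_ASSET_ID'] = File.read(revision_file).strip
end
```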
The support forum is here.
Patches and contributions are very welcome. Please note that contributors are required to assign copyright for their additions to Cloudburst, LLC.