I noticed that I get server failures when my EntityStore is unable or unwilling to save a response. Two examples: if memcache goes down, any writes through Dalli will fail and any reads will return nil. Or when I'm using a simple NilStore for testing in Rails, my writes will always do nothing and reads will always return nil (which can be useful for exercising an app's cache code without firing up an instance of memcached).
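For context, the kind of "null" store I mean can be sketched in a few lines. This is a minimal illustration modeled loosely on the entity-store interface (the class and method names here are my own stand-ins, not rack-cache's actual code): writes are discarded and reads always miss, so the app's caching paths run without a memcached instance.

```ruby
require 'digest/sha1'

# Minimal null entity store sketch: pretends to write, never reads back.
class NilStore
  # Pretend to store the body; return a digest and size like a real store would.
  def write(body, _ttl = nil)
    digest = Digest::SHA1.new
    size = 0
    body.each do |part|
      digest << part
      size += part.bytesize
    end
    [digest.hexdigest, size]
  end

  # Reads always miss, so cached entries can never be served.
  def read(_key); nil; end
  def open(_key); nil; end
  def exist?(_key); false; end
  def purge(_key); nil; end
end
```

With this store, every `write` appears to succeed, but the body written can never be read back — exactly the situation that trips the bug below.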
In both cases, Rack::Cache gets a valid response from Rails but then replaces the body with nil. This happens because MetaStore#store writes the response's body to the entity store (which fails), immediately reads it back from the store (getting nil), and then replaces the response's body with that nil.
Is there any reason for the metastore to replace the response body? Deleting the entity_store.open line from MetaStore#store would fix the two cases I'm having trouble with. That is, the server would still return a response, even if that response can't be cached. This would also eliminate an unnecessary read and get rid of the minute possibility that even a working store could throw away the entry between the write and subsequent read. But maybe I'm overlooking something? Does reading the newly stored body have some beneficial side-effect?
Here's the relevant section in 1.0.3 (and it looks similar in 1.1):
# write the response body to the entity store if this is the
# original response.
if request.env['rack-cache.use_native_ttl'] && response.fresh?
  digest, size = entity_store.write(response.body, response.ttl)
else
  digest, size = entity_store.write(response.body)
end
response.headers['X-Content-Digest'] = digest
response.headers['Content-Length'] = size.to_s unless response.headers['Transfer-Encoding']
response.body = entity_store.open(digest)
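To make the suggestion above concrete, here is a small self-contained sketch (the store class and method names are illustrative stand-ins, not rack-cache's actual API) contrasting the current write-then-re-read behavior with the proposed write-and-keep behavior when the backend is down:

```ruby
# Stand-in for an entity store whose backend (e.g. memcached) is down:
# writes are silently lost and every read returns nil.
class DownStore
  def write(_body)
    ["deadbeef", 0] # digest and size, as a real store would return
  end

  def open(_digest)
    nil # the entry was never actually stored
  end
end

# Current behavior: write the body, then immediately re-read it and use
# the result as the response body -- nil when the store is down.
def store_and_reread(store, body)
  digest, _size = store.write(body)
  store.open(digest)
end

# Proposed behavior: write the body but keep the one we already have,
# so the client still gets a valid Rack body even if caching failed.
def store_and_keep(store, body)
  store.write(body)
  body
end
```

With a working store both versions behave the same; with a broken one, only the second still returns a body that responds to `each`.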
I'm experiencing the same issue when using rack-cache in a Rails project. With a Rack-based server like Unicorn, a Rack::Lint::LintError is thrown when caching is enabled and storing the entity fails, for example because memcached is down. Rails and the asset pipeline serve the asset correctly, but rack-cache sets the body to nil.
config.action_controller.perform_caching = true
config.cache_store = :mem_cache_store
The backtrace is not very informative, because the error is only detected by Rack's/Unicorn's Lint check:
"GET /application.js HTTP/1.1" 200 1207528 0.0649
app error: Response body must respond to each (Rack::Lint::LintError)
.rvm/gems/ruby-1.9.3-p125/gems/unicorn-4.2.1/bin/unicorn:121:in `<top (required)>'
Solving the problem in Rails is as easy as disabling caching or making sure memcached is running. Given how cryptic the error message and backtrace are, I would suggest rack-cache at least give some indication that something is going wrong. The response coming out of the rack-cache middleware does not conform to the Rack response conventions.
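One possible shape for such an indication (purely illustrative — the helper and the stub store below are made-up names, not rack-cache code): guard the re-read and log a warning instead of silently handing back a nil body.

```ruby
require 'logger'

# Only replace the response body when the entity store actually returns
# one; otherwise log a warning and keep the original body.
def replace_body_if_cached(entity_store, original_body, digest, logger)
  stored = entity_store.open(digest)
  if stored.nil?
    logger.warn("rack-cache: no body for #{digest}; serving uncached response")
    original_body
  else
    stored
  end
end

# Stub store that always misses, as when memcached is unreachable.
class MissingStore
  def open(_digest); nil; end
end
```

The client still gets a valid response either way, and the operator gets a log line pointing at the failing store instead of a LintError deep in the server.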
This will also occur if the response to be cached exceeds memcached's 1 MB object size limit. Clearly there are other problems in that case, but it's worth mentioning nonetheless. The above patch should fix this issue as well.
I experienced the same problem: our memcached was down (but we weren't aware of it) and rack-cache set response.body = nil for every request it tried to cache... Unicorn failed, but we weren't warned because the error wasn't rescued by our error-tracking system (Airbrake).
response.body = nil
ERROR -- : app error: undefined method `each' for nil:NilClass (NoMethodError)
ERROR -- : [..]/unicorn-4.6.2/lib/unicorn/http_response.rb:60:in `http_response_write'
ERROR -- : [..]/unicorn-4.6.2/lib/unicorn/http_server.rb:563:in `process_client'
ERROR -- : [..]/unicorn-4.6.2/lib/unicorn/http_server.rb:633:in `worker_loop'
ERROR -- : [..]/unicorn-4.6.2/lib/unicorn/http_server.rb:500:in `spawn_missing_workers'
ERROR -- : [..]/unicorn-4.6.2/lib/unicorn/http_server.rb:142:in `start'
ERROR -- : [..]/unicorn-4.6.2/bin/unicorn_rails:209:in `<top (required)>'
That was quite a mess to debug... It would be great if this fix could be released in the gem.
This should be closed with pull request #66
Can we get a new release of the gem?
Are there any plans to release a new version of the gem including this fix?