Images (notably css background images) are entirely incorrect, showing a different image. This hasn't surfaced in Webrick or Unicorn. I've seen it a few times in development, but it happens consistently in staging. Taking a look in the inspector and in logs the urls appear to be correct, but there is no mistaking that the image itself is mixed up—the logo being replaced with a background tile, for example.
Verified in Chrome, Safari, and Firefox (hard reload in each of them). The interchanged images aren't the same across browsers, but the effect is present in all of them.
Here is the basic stack; if there is any additional info I can provide, let me know.
Ruby 1.9.3-p125, Rails 3.2.2, Puma with default of 0-16 threads.
Do you have a reproduction you could show me? Perhaps a Rails app?
The app I was testing this out with is our production app, and I can't share it exactly. I can share the gemspec and environment files though.
Only the development config is included, but this would happen in all environments.
Any chance you could build a sample rails app that shows the problem? It's quite difficult to derive the situation from just your config files.
@sorentwo Do you still see the issue if you turn off the asset pipeline by setting config.assets.enabled = false in config/application, running RAILS_ENV=development bundle exec rake assets:precompile, and then running the server?
Truthfully I returned to using Unicorn after running into the issue, so it took me some time to get a free moment to investigate this.
@ezkl Those precise steps don't quite work out, but not because of anything with Puma. With the asset pipeline disabled no precompilation happens at all, and with it enabled no stylesheets are precompiled, making the exercise futile.
@evanphx I'll try to do this soon. Because the image mixup is random across servers and we're loading 20+ images on the affected pages, I fear it would be difficult to replicate.
I've seen the same issue myself. I found Rack Cache seemed to be misbehaving, although I didn't get to diagnose the specific cause.
@sorentwo Are you using Rack Cache? I do know Rails 3 injects it into the middleware stack by default.
Yeah, the app is using Rack::Cache. In staging and production it is configured with Memcached/Dalli, nothing is configured in development.
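For reference, a Rack::Cache-over-Dalli setup like the one described would typically be wired up through config.action_dispatch.rack_cache in a Rails 3.2 environment file. This is an illustrative sketch only: the application name, memcached address, and store URLs are assumptions, not taken from the actual app's config.

```ruby
# config/environments/production.rb (or staging.rb) — illustrative sketch.
# "MyApp" and the memcached address are placeholders.
MyApp::Application.configure do
  config.action_dispatch.rack_cache = {
    :metastore    => "memcached://localhost:11211/meta",
    :entitystore  => "memcached://localhost:11211/body",
    :allow_reload => false
  }
end
```

In development, where nothing is configured, Rack::Cache falls back to its default heap-based stores.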
I can repro with Rack::Cache. It is quite simple: add the rack-cache gem, set config.consider_all_requests_local = true, config.serve_static_assets = true, and config.static_cache_control = "public, max-age=2592000" in development.rb in a project where you have CSS, images, etc. Then just refresh the pages a few times.
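The repro settings above, assembled into one block for convenience (MyApp is a placeholder application name):

```ruby
# config/environments/development.rb — repro settings from the steps above.
MyApp::Application.configure do
  config.consider_all_requests_local = true
  config.serve_static_assets        = true
  config.static_cache_control      = "public, max-age=2592000"
end
```

With static assets served publicly and cached for 30 days (2592000 seconds), Rack::Cache handles every asset request, which makes the mixup show up quickly.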
It seems to occur because puma sets env['rack.run_once'] to true, and Rack::Cache decides whether it needs a thread-safe request based on this, e.g. /lib/rack/cache/context.rb:48
Either puma should not set env['rack.run_once'] to true or Rack::Cache should test if env['rack.multithread'] is not true as well, i.e.
if env['rack.run_once'] && !env['rack.multithread']
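A minimal sketch of how the combined check behaves. Note that reuse_context? is a hypothetical helper name for illustration, not rack-cache's actual method; in rack-cache the decision lives in Rack::Cache::Context#call, which either processes the request on the shared context or clones itself per request for thread safety.

```ruby
# Hypothetical helper illustrating the proposed guard: the shared (non-cloned,
# not thread-safe) path is taken only when the server promises a single
# request (rack.run_once) AND is not running multithreaded.
def reuse_context?(env)
  env['rack.run_once'] && !env['rack.multithread']
end

# Puma pre-fix advertised run_once=true while also being multithreaded,
# so the old check (run_once alone) wrongly picked the shared path.
reuse_context?('rack.run_once' => true, 'rack.multithread' => true)   # safe clone path
reuse_context?('rack.run_once' => true, 'rack.multithread' => false)  # shared path ok
```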
I'll propose this fix to rack::cache project and let this thread know if they accept my pull request.
I just merged @jtblin's change in rtomayko/rack-cache#71 so that rack.multithread is checked, but puma setting rack.run_once still seems pretty wrong here. What's the reasoning behind setting it true in this case?
Puma shouldn't be setting rack.run_once; that's my mistake. I'm finishing up testing of 1.7.0 today and will hopefully have it out shortly. It will include removing the rack.run_once setting, and I'll also review all the rack options that are set.
Thanks @jtblin, @rtomayko, and @evanphx. I never got back to working through this and it haunted me a bit. Awesome to see that it got resolved.
Thanks a ton @rtomayko and @evanphx!