
Rack::Lock should not unlock until #close is called on body #87

Closed
jfirebaugh opened this Issue · 10 comments

5 participants

@jfirebaugh

The body might generate (stream) data dynamically in #each, and that should happen with the protection of the lock.

This comes up in Rails in the context of this ticket.
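
The issue can be illustrated with a minimal sketch (hypothetical class and names, not from the Rack source): a body that generates its chunks lazily inside `#each` runs *after* the middleware's `#call` has returned, so any shared state it touches is unprotected if the lock was already released.

```ruby
# Hypothetical streaming body: data is produced lazily inside #each, so
# whatever mutex protected the app's #call must still be held while this runs.
class StreamingBody
  def initialize(app_state)
    @app_state = app_state
  end

  def each
    # This executes after Rack::Lock#call has returned; if the middleware
    # already released its mutex, @app_state is no longer protected.
    3.times { |i| yield "chunk #{i} (counter=#{@app_state[:counter] += 1})\n" }
  end
end

body = StreamingBody.new(counter: 0)
chunks = []
body.each { |c| chunks << c }
```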

@josh

+1. Can you put together a patch for this?

@tenderlove
Owner

Use a proxy object that unlocks the mutex when #close is called on the body. Closed by 3bf8655.
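
A minimal sketch of the proxy idea (not the exact commit): the middleware hands back a wrapper whose `#close` releases the mutex after closing the underlying body.

```ruby
# Sketch of a body proxy: holds the mutex on behalf of the middleware and
# releases it only when the caller invokes #close, as the Rack SPEC requires.
class BodyProxy
  def initialize(body, mutex)
    @body, @mutex = body, mutex
  end

  def each(&block)
    @body.each(&block)
  end

  def close
    @body.close if @body.respond_to?(:close)
  ensure
    @mutex.unlock
  end
end

mutex = Mutex.new
mutex.lock                               # taken by the middleware's #call
proxy = BodyProxy.new(["hello"], mutex)
out = []
proxy.each { |c| out << c }              # streaming happens under the lock
proxy.close                              # lock released here, not earlier
```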

@josh

The var was originally called lock so you could swap in a Monitor instead. Monitor doesn't have lock and unlock, AFAIK. But it's not a huge deal; there are no tests for it anyway. Whatever :P
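
A quick sketch of the API difference josh is pointing at: `Mutex` exposes `#lock`/`#unlock`, while `Monitor` (from the `monitor` stdlib) uses `#enter`/`#exit` and is reentrant, so the same thread can enter twice without raising.

```ruby
require 'monitor'

mon = Monitor.new
result = []
mon.enter
mon.enter            # reentrant: no ThreadError here, unlike Mutex#lock
result << :reentered
mon.exit
mon.exit

mutex = Mutex.new
mutex.lock           # Mutex's API: #lock / #unlock
mutex.unlock
```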

@jfirebaugh

I was wondering if there was an alternative to Proxy; I think I like the Module + Extend solution better. A Proxy object really should delegate via method_missing, since otherwise it can miss parts of the interface contract. Your patch is missing #to_path delegation, for example. And we don't want to have to patch Rack::Lock every time an optional new body method is added to the spec.
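
A sketch of the delegation concern (hypothetical classes and path, not the actual patch): forwarding only a fixed method list silently drops optional body methods such as #to_path, whereas a method_missing-based proxy passes everything through.

```ruby
# Proxy that forwards any unknown method to the wrapped body, so optional
# parts of the body contract (e.g. #to_path) survive the wrapping.
class DelegatingBodyProxy
  def initialize(body)
    @body = body
  end

  def respond_to_missing?(name, include_private = false)
    @body.respond_to?(name, include_private)
  end

  def method_missing(name, *args, &block)
    @body.send(name, *args, &block)
  end
end

# Hypothetical file-backed body exposing the optional #to_path method.
FileBody = Struct.new(:path) do
  def each; yield "data"; end
  def to_path; path; end
end

proxy = DelegatingBodyProxy.new(FileBody.new("/tmp/cached"))
```

With this shape, a server that checks `body.respond_to?(:to_path)` to enable sendfile still sees the method through the proxy.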

@jfirebaugh

The Proxy implementation was committed I see. Thoughts on the #to_path/future-proofing issue?

@gucki

I'm not 100% sure, but starting from this commit my error log from time to time fills with the following errors, so I think there's a race condition somewhere:


  E, [2010-12-19T14:56:42.762564 #25529] ERROR -- : Read error: #<ThreadError: deadlock; recursive locking>
  E, [2010-12-19T14:56:42.762691 #25529] ERROR -- : /home/r334/.bundle/ruby/1.9.1/bundler/gems/rack-3bf865524e23/lib/rack/lock.rb:25:in `lock'
  /home/r334/.bundle/ruby/1.9.1/bundler/gems/rack-3bf865524e23/lib/rack/lock.rb:25:in `call'
  /home/r334/.bundle/ruby/1.9.1/bundler/gems/rails-6d80f3a1ba52/railties/lib/rails/engine.rb:414:in `call'
  /home/r334/.bundle/ruby/1.9.1/bundler/gems/rails-6d80f3a1ba52/railties/lib/rails/railtie/configurable.rb:30:in `method_missing'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:641:in `process_client'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:714:in `block in worker_loop'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:712:in `each'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:712:in `worker_loop'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:603:in `block (2 levels) in spawn_missing_workers'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:600:in `fork'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:600:in `block in spawn_missing_workers'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:596:in `each'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:596:in `spawn_missing_workers'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:610:in `maintain_worker_count'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:270:in `start'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/lib/unicorn.rb:29:in `run'
  /home/r334/.bundle/ruby/1.9.1/gems/unicorn-1.1.5/bin/unicorn:124:in `'
  /home/r334/.bundle/ruby/1.9.1/bin/unicorn:19:in `load'
  /home/r334/.bundle/ruby/1.9.1/bin/unicorn:19:in `'
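
The "deadlock; recursive locking" in that trace is what MRI raises when a thread tries to lock a Mutex it already holds, which is consistent with the lock never having been released for a previous request served by the same thread:

```ruby
# Reproduce the ThreadError from the log: locking a Mutex twice from the
# same thread raises instead of blocking.
mutex = Mutex.new
mutex.lock
error = begin
  mutex.lock   # second lock by the same thread
  nil
rescue ThreadError => e
  e
end
```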
@jfirebaugh

The likely cause is that Rack::Lock doesn't unlock when the app throws an exception. I will have a patch for this later today.
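
A minimal sketch of that fix (hypothetical middleware, not the actual patch): release the mutex on the exception path before re-raising, so a failed request never leaves the lock held.

```ruby
# Sketch: unlock when the app raises; on success the lock would stay held
# until #close is called on the body (the body-proxy wrapping from the
# committed fix is omitted here for brevity).
class SafeLock
  def initialize(app, mutex = Mutex.new)
    @app, @mutex = app, mutex
  end

  def call(env)
    @mutex.lock
    begin
      status, headers, body = @app.call(env)
    rescue Exception
      @mutex.unlock   # app raised before any body proxy could take ownership
      raise
    end
    [status, headers, body]
  end
end

failing_app = ->(env) { raise "boom" }
lock = Mutex.new
mw = SafeLock.new(failing_app, lock)
raised = begin
  mw.call({})
  false
rescue RuntimeError
  true
end
```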

@somebee

I'm experiencing a lot of problems after this patch was applied, with Rails processes crashing in Passenger/Apache, giving this error message:

[ pid=30130 thr=3312360 file=utils.rb:176 time=2010-12-25 14:21:26.045 ]: *** Exception ThreadError in application (deadlock; recursive locking) (process 30130, thread #<Thread:0x000000006515d0>):
from /mypath/shared/gems/ruby/1.9.1/bundler/gems/rack-1281bdc5c90d/lib/rack/lock.rb:33:in `lock'
from /mypath/shared/gems/ruby/1.9.1/bundler/gems/rack-1281bdc5c90d/lib/rack/lock.rb:33:in `call'

This issue was closed.