Move promotion logic inside workers #38
Conversation
Fix: #32

This removes a lot of busy work from the server, and allows promoting as soon as the condition is reached. The small downside is that we need a lock between processes to ensure two workers won't try to promote at the same time.

This also means that the mold selector is removed. It wasn't that interesting anyway, because the Unicorn/Pitchfork architecture introduces a bias in request distribution: worker #0 will almost always get requests first, so it will almost always be the best candidate for promotion.
class ReforkingTest < Pitchfork::IntegrationTest
  if Pitchfork::HttpServer::REFORKING_AVAILABLE
    def test_reforking
🎉 Much easier to read
def initialize(name)
  @name = name
  @file = Tempfile.create([name, '.lock'])
For my own knowledge:
Original problem: Using a POSIX mutex was problematic because if the process holding the mutex died, it wouldn't unlock, and would deadlock the promotion flow thereafter.
Solution: A file lock solves this problem because the lock against the file is automatically released by the OS when the process terminates?
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Yeah, I explored several approaches to IPC mutexes.
First I tried a pthread_mutex_t
inside a shared memory page. And as you point out it wouldn't clean up on exit, which in a context where we might SIGKILL processes ourselves is not OK.
Then I tried SysV semaphores. They do have automatic cleanup, and we already use them via semian
, but the API is very wonky, especially if you are trying to implement a mutex.
In the end I figured flock
was the least bad. It's already available in Ruby core, no need for more C. The only downsides are:
- We need to write a file, which I would have liked to avoid.
- We need to carefully re-open that file after fork.
But overall it seems to work well.