Pumactl restart not working #563
Puma 2.9.0, ruby 2.0.0p195

environment 'production'
daemonize true
app_path = '/home/deploy/toro/current'
directory app_path
stdout_redirect "#{app_path}/log/puma.stdout.log", "#{app_path}/log/puma.stderr.log"
workers 8
threads 1,2
bind "unix://#{app_path}/tmp/sockets/puma.sock"
state_path "#{app_path}/tmp/pids/puma.state"
pidfile "#{app_path}/tmp/pids/puma.pid"
activate_control_app "unix://#{app_path}/tmp/sockets/pumactl.sock", { no_token: true }
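With a config like this, pumactl is usually pointed at either the state file or the control socket when sending commands. A minimal sketch, assuming the paths from the config above and a standard bundler setup:

bundle exec pumactl -S /home/deploy/toro/current/tmp/pids/puma.state restart
# or talk to the control socket directly (no token is required, since no_token is set above):
bundle exec pumactl -C unix:///home/deploy/toro/current/tmp/sockets/pumactl.sock restart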
I am not using any capistrano at the moment; the way I restart puma is to run this bash command from the application directory. It works every time, although if you did a bundle update or made any Gemfile changes, the puma instance may not restart successfully (rare).

#!/bin/bash
kill -s SIGUSR2 $(cat tmp/pids/puma.pid)

I see you are doing workers 8, which I think means you are running a cluster server, which I have no experience in. You will have to wait for someone from Puma to reply.
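For reference, Puma responds to two different restart signals; since the config above runs in cluster mode (workers 8), a phased restart via SIGUSR1 is the usual alternative to the hot restart shown above. A sketch assuming the same pidfile path:

# hot restart: the whole server is restarted at once
kill -s SIGUSR2 $(cat tmp/pids/puma.pid)
# phased restart: cluster mode only, workers are cycled one at a time
kill -s SIGUSR1 $(cat tmp/pids/puma.pid)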
same problem with my config:
I'm having a similar issue.
It looks like pumactl isn't checking for a response from the server.

Success:

Failure:

My current hacky workaround is to try to restart, check if the PID is running, and if not, try a cold start. If that fails, then I can handle the error (neither Process nor a system call returns any puma errors).
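A rough sketch of that workaround as a shell script, assuming the pidfile path from the first config; the exact pumactl invocation and the sleep interval are placeholders:

#!/bin/bash
PIDFILE=tmp/pids/puma.pid
bundle exec pumactl -P "$PIDFILE" restart
sleep 5
# pumactl can report success even if the server died, so verify the pid ourselves
if ! kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
  echo "restart failed, attempting cold start" >&2
  bundle exec pumactl -P "$PIDFILE" start
fi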
This is the command I used, and it works all the time:

kill -s USR2 $(cat tmp/pids/puma.pid)

It used to take down the server but would not start it back up, but now it works for me: puma gracefully restarts the rails server by killing old threads one at a time and then starting new ones. I tested it on CentOS 7 (64-bit) and Debian 7.7 (64-bit) and both work.
This is not working at all under RHEL 6 64-bit. When I start puma (non-daemonized) in one terminal and run pumactl against it, why would it spit out "available commands" like that?
I'm sorry about that, I should just remove the start functionality from pumactl.
And another vector for bugs. In fact, I should just reimplement it in terms of exec'ing normal puma.
@urkle what commands were you running? You didn't paste them in.
I was running pumactl restart. It would be better to reimplement it in terms of exec'ing puma, as right now (from others on this bug) running pumactl restart will start puma if it's not running (though presumably via the wrong command).
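Conceptually, reimplementing it "in terms of exec'ing puma" would mean pumactl's start/restart hands off to the plain puma binary instead of duplicating its startup logic. A hypothetical sketch of the idea, not the actual pumactl code:

# roughly what a pumactl-driven start would boil down to:
exec bundle exec puma -C config/puma.rb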
Any progress in getting this properly fixed?
Regardless of that, the issue still remains: pumactl start sets the wrong process name, so restarting does not function.
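One way to see that symptom is to compare the process title of a server launched with puma against one launched with pumactl start (pidfile path hypothetical):

ps -o pid,command -p "$(cat tmp/pids/puma.pid)"
# a server started with `puma` normally shows a title like: puma 2.x.x (unix://...) [app]
# one started with `pumactl start` carried a different name, which is what broke restart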
I encountered the same problem with the config below:

threads 8, 32
workers 4
bind 'tcp://0.0.0.0:3000'
daemonize

ruby 2.2.3p173
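Note that this config does not activate a control app, so pumactl can only act on the server via its pid or pidfile (by sending signals). A sketch, with a hypothetical pidfile path since the config above does not set one:

bundle exec pumactl -P tmp/pids/puma.pid restart
# or with the pid directly:
bundle exec pumactl -p $(cat tmp/pids/puma.pid) restart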
Pumactl: set correct process name. Fixes #563
For future Googlers like me, now it's:
(credit to @SaimonL)
It's 2021 and I still can't get pumactl restart to work. I guess I'll just kill the process as @localhostdotdev suggests.
A bunch of stuff has changed in puma since this issue was first opened. @omenking if you're experiencing a similar problem in Puma 5, please feel free to open a new issue.
#409
I don't think this problem ever really went away for me, and I've spent today trying to fix it, with no luck.
It is present for me in 2.7.1, 2.9.0, and master branch.
Again, the fact that restart says "Command restart sent success" but the actual Puma process is down is pretty much horrible. It might be true that the command was sent successfully, but I don't care about that, I care if the #@?! Puma process has in fact restarted. The definition of success when you ask something to restart is if it comes back up. This is something Apache and Nginx understand, and they do report a failure if attempts to restart them fail.
Puma really has to block on this, and report failure when it happens.
For now I overrode the built in capistrano restart task to simply call stop and start. Not really a great solution.
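Outside capistrano, that same stop-then-start workaround looks roughly like this in shell, assuming the state file and pidfile from the first config (the paths and the daemonize flag are assumptions):

PID=$(cat tmp/pids/puma.pid)
bundle exec pumactl -S tmp/pids/puma.state stop
# wait for the old process to actually exit before starting a new one
while kill -0 "$PID" 2>/dev/null; do sleep 1; done
bundle exec puma -C config/puma.rb -d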