Puma not restarting (or at least not loading new current) when notified with SIGUSR2 #416
Comments
I am having the same problem across different servers. Sometimes it doesn't restart; other times it seems that the process is stopped but never started again. This is on 2.7.1. Edit: Actually, this may be because I deployed between the update from 2.6 -> 2.7.1. I'll deploy a bunch more today, so I'll see if the problem continues with 2.7.1.
I've experienced something like this.
It reports "Command restart sent success", and then it just kills the puma workers and the master process ends.
Same here; I sometimes have to stop and then start the puma server. I've noticed the PID actually changes when running restart or phased-restart, but it still applies the old code.
What version of ruby are you using?
@evanphx The development side is 2.0.0-p353 and production is 2.0.0-p195, and the puma version is 2.7.1. Does that help? Thanks.
ruby 1.9.3p0 (2011-10-30 revision 33570) [x86_64-linux]
I'm seeing the same thing. Running "pumactl -S /my_app/path/shared/pids/puma.state restart" (or phased-restart) keeps the master process PID, and the two clustered workers restart with new PIDs, but the master and clustered workers continue to work out of the previous release path. I'm running puma 2.7.1 on ruby 1.9.3p448 on Ubuntu. In the ps and lsof output, the current working directory shown for the process is correct, but gems are running out of the old path.
Any ideas?
Update: We ran strace on the puma pid and watched it during a pumactl restart. We noticed that everything ran out of the correct path until the standard Rails boot.rb file (/my_app/path/[new_release]/config/boot.rb) fired and loaded the Gemfile from the previous release's path. Looking at boot.rb, it was clear that if the BUNDLE_GEMFILE environment variable already exists, it is used rather than re-evaluated and set. That variable would already have been set in the puma session by /my_app/path/[old_release]/bin/puma (the binstub created by bundler), and rather than using "/my_app/path/current/Gemfile" it would have used the ".realpath". In short, the final value of BUNDLE_GEMFILE would be '/my_app/path/[old_release]/Gemfile' rather than the desired '/my_app/path/current/Gemfile', and so all gems would load from that path. Our solution is to set the BUNDLE_GEMFILE environment variable in our puma upstart script to "/my_app/path/current/Gemfile".
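The ENV guard described above can be sketched in a few lines. The paths here are hypothetical; the `||=` mirrors the guarded assignment a typical Rails boot.rb uses:

```ruby
# Hypothetical paths; simulates a value left behind by an old release's binstub.
ENV['BUNDLE_GEMFILE'] = '/my_app/path/old_release/Gemfile'

# A typical Rails boot.rb guards the assignment with ||=, so an existing
# value wins and the new release's Gemfile path is never applied:
ENV['BUNDLE_GEMFILE'] ||= '/my_app/path/current/Gemfile'

puts ENV['BUNDLE_GEMFILE'] # => /my_app/path/old_release/Gemfile
```

Because the restarted master inherits the old environment, the guard silently keeps the stale Gemfile path across every restart.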
I can confirm @paustin01's findings: my Puma process's /proc environ file holds the previous revision's BUNDLE_GEMFILE, not the new release directory that Puma is running from (as mentioned previously).
Can anyone provide a simple script or program that can be used to reproduce the behavior?
Is there a reason this got closed? This is still very much an issue, and it's trivial to reproduce: all you need is any project running puma from within a symlinked directory (e.g., deployed by capistrano), install new gems in a new revision, and reload puma with SIGUSR2. Rails' boot.rb re-uses the BUNDLE_GEMFILE env var and thus doesn't load the new gems.
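The symlink angle is easy to see in isolation. A small sketch (temp paths invented for illustration) of how `Pathname#realpath` pins a path to the concrete release directory instead of the `current` symlink:

```ruby
require 'fileutils'
require 'pathname'
require 'tmpdir'

# All paths here are invented for illustration.
resolved = Dir.mktmpdir do |dir|
  release = File.join(dir, 'releases', '20140101000000')
  FileUtils.mkdir_p(release)
  FileUtils.touch(File.join(release, 'Gemfile'))

  current = File.join(dir, 'current') # the capistrano-style symlink
  File.symlink(release, current)

  # realpath resolves the symlink away, so the value is pinned to the
  # old release directory even after `current` is re-pointed:
  Pathname.new(File.join(current, 'Gemfile')).realpath.to_s
end

puts resolved
```

Once a value like this lands in BUNDLE_GEMFILE, re-pointing the `current` symlink on the next deploy has no effect on the running process.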
This is definitely still an issue. It's the only reason our apps are still using Unicorn right now.
@sorentwo I worked around this issue by adding
@jcoleman: thanks, that is really helpful to know about. I wasn't aware of the specific point of failure here (the gems/Gemfile); it just seemed like we had random restart failures.
@sorentwo Are you using the prune_bundler option? It's documented here: https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
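For reference, a prune_bundler setup along the lines of the linked DEPLOYMENT.md looks roughly like this. This is a minimal sketch with placeholder paths, not a drop-in config:

```ruby
# config/puma.rb -- paths are placeholders for illustration
directory '/my_app/path/current'  # chdir through the `current` symlink on restart
prune_bundler                     # re-exec the master outside the old Bundler context
workers 2
threads 1, 16
```

The point of `prune_bundler` is that the restarted master no longer carries the old release's Bundler environment (including BUNDLE_GEMFILE), so the new release's gems can be picked up.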
@evanphx: Nope, I wasn't. In fact, I'd never seen that wiki page. The last time I was setting things up was in November 2013, a little before the page was available. That's an excellent tip though. I'll try an experiment next week and let you know how things go.
@evanphx I didn't realize that the
We're seeing this issue also using Puma 2.9.0 deployed via Chef (it uses the same
Phased restart still doesn't work even with @jcoleman's solution. Running puma 2.9.1. But with hot restart, it seemed to work without setting the env variable.
My solution ended up not fixing it for me either. The
hot restart stops loading the new context in puma 2.9.2 |
Tell me more: what do you mean by new context? What does your config look like? Could you provide some output from puma?
What I mean by context is the line "Gemfile in context: /path/to/Gemfile" in puma.log
I'm seeing this same issue as well (phased restart doesn't load new code) Running Puma 2.11.2 on Ruby 2.2.2 We're deploying with Capistrano. Here's my puma.rb:
We're issuing this command to start the phased restart:
When we issue the phased-restart, the worker processes do restart (new PIDs), but they don't get the new code. Where should I look for more information to figure out what's happening?
After coming back to this on a fresh day, I'm really not sure what I was experiencing. At one point in time my restarts definitely weren't working properly (as in, the new code definitely wasn't being loaded). This could have easily been a misconfiguration on my part. After I noticed the problem I started testing by looking at the output from
When I was deploying new code I could see the workers restart, but the content loaded (
Perhaps I should open a new issue about updating the Process tag on the workers to match the new directory. At the moment I'm not even sure where it's pulling the current release from, since I'm running this out of the
To my surprise, I got this to work, using puma v2.15.3.
# in /config/puma.rb
app = "my_app"
root = "/home/deployer/apps/#{app}"
workers 4
threads 1, 2
rackup DefaultRackup
environment ENV['RAILS_ENV']
daemonize true
directory "#{root}/current"
pidfile "#{root}/shared/tmp/pids/puma.pid"
stdout_redirect "#{root}/shared/log/#{Time.now.to_i}_production.log", "#{root}/shared/log/#{Time.now.to_i}_production_errors.log"
bind "unix:/tmp/puma.socket"
prune_bundler

It is crucial to stop and start puma with these settings so that the master process picks up the :directory and, possibly, :prune_bundler directives. Then you can
NB: it may take a few moments for workers to pick up the new code, and also
This issue seems to recur in 3.9.0 and 3.9.1. 3.8.2 works totally fine. |
In our case [ruby 2.3.3p222]
Puma reloads ok on SIGUSR2 and doesn't reload on SIGUSR1. It feels like preload_app gets automatically turned on somehow. |
This may be related. Just observed something interesting using puma's phased restarts: old code was running against new code in the same process. I think it has to do with how long our app code takes to load (about 3-4 seconds) and then boot (about 6-7 seconds).
In our config/environment.rb:
require 'benchmark'
require 'logger'
app_stats = {pid: Process.pid}
time = Benchmark.realtime {
# Load the Rails application.
require File.expand_path('../application', __FILE__)
}
app_stats[:load_time] = "#{format('%f', time)} seconds"
time = Benchmark.realtime {
# Initialize the Rails application.
Rails.application.initialize!
}
app_stats[:boot_time] = "#{format('%f', time)} seconds"
require File.expand_path('../../lib/app_formatter', __FILE__)
logger = Logger.new(STDOUT)
logger.formatter = AppFormatter.new
logger.info({app_info: app_stats}.to_json)

In our config/puma.rb, we have, in part:

workers 2
threads 32, 32
# AWS ELB "Idle timeout" is set to 30s. In order to prevent ELB 500s from unexpected connection closes, ensure that the backend persistent connection timeout is greater than this idle timeout. https://docs.aws.amazon.com/ElasticLoadBalancing/latest/DeveloperGuide/config-idle-timeout.html
worker_shutdown_timeout 32
prune_bundler

If I flatten out the timeline:
https://github.com/puma/puma/blob/master/DEPLOYMENT.md#restarting
I experienced a similar issue; here's a script to reproduce it: https://gist.github.com/x-yuri/52024f512bed39a2a0bd4c6e82d04c9f P.S. Well, not that simple a script, but what did you expect?...
Yeah. I'm closing this one and letting people create separate ones. This is so old it probably has a 1m long beard...
That was the trick for me too: We were using capistrano and so had symlinked deploys. Needed to specify |
When my cap deploy runs, it sends a SIGUSR2 to Puma; I've also tried manually with kill and with pumactl.
pumactl says the server will restart (the message is 'Command restart sent success'), but puma doesn't run the new code.
To get the new code served, puma needs a hard restart (i.e. I need to send it a SIGINT and then start it again).
I can't confirm whether on SIGUSR2 puma doesn't restart or simply doesn't change the working directory to the new deploy. What is clear is that it keeps running the old code.
Setup is ubuntu 12.04, rails 4 and Ruby 1.9.3. Puma is 2.6.0.
Thanks.
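For anyone exploring this, the restart signal itself is easy to exercise outside Puma. A minimal Ruby sketch of trapping SIGUSR2, the signal that `kill -USR2 <pid>` or a cap restart task delivers to the Puma master (POSIX-only; this is plain signal handling, not Puma's restart logic):

```ruby
# Install a handler for SIGUSR2, the same way Puma's master traps it
# to trigger a hot restart.
got_usr2 = false
Signal.trap('USR2') { got_usr2 = true }

Process.kill('USR2', Process.pid) # send the signal to ourselves
sleep 0.2                         # give the handler a moment to run

puts got_usr2
```

If this prints true but a real Puma master still serves old code, the signal is being delivered fine and the problem lies in what the restart re-execs (working directory, BUNDLE_GEMFILE, etc.), as discussed above.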