requests failing sometimes #274
Comments
This is not an issue with Puma; Heroku free instances automatically go to sleep after some period of inactivity...
Yes, but I'm not on Heroku. However, I'm having similar problems with Unicorn too (+ my workers are even dying on Unicorn), so maybe this is a problem with my app. Any idea what it could be?
Are you experiencing the same issue with an 8:8 configuration? Btw: sorry for my previous response, I misunderstood your previous question.
Let me get back to you at the end of the week, I'll do some testing. Thanks!
Ok, I am back on Puma. I still have this problem, though. I think the 'slowness' coincides with this error in my puma error log:
It seems that when this happens, the client has to wait a long time before getting a response. I did try the 8:8 thread config (and a cluster of 8 workers as well). I am on MRI 1.9.3-p429 with nginx, running on a 32-bit Ubuntu VPS with 2GB RAM (1.4GB used). I would appreciate any speedy ideas, as I'm running this in production, so it's kind of awkward for me :) The above is the only problem I can see in my logs. I'm using the puma master branch from about a week ago.
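For reference, the 8:8 thread / 8-worker setup described above can be expressed in a Puma config file. This is a minimal sketch, not the commenter's actual config; the socket path is an illustrative assumption.

```ruby
# config/puma.rb -- minimal sketch of the setup described above:
# 8 clustered workers, each with 8 min / 8 max threads.
# The bind path is illustrative; point nginx's upstream at it.
workers 8
threads 8, 8
bind "unix:///tmp/puma.sock"
```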
@PetrKaleta Some more info: it seems the EPIPE error in the puma error log coincides somewhat with errors from the nginx log.
However, that's not always the case... Sometimes nginx has this error instead:
I'm not sure if these are related, but that's all I could find. It seems like in some cases my worker can't start in time to serve the request (only my assumption!). I don't understand why it would have to start at all, since this is not happening right after a puma restart; it's happening on random requests when puma is already "hot". Is there any way I could find out if my workers are dying, and why?
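One way to answer the question above about dying workers (a sketch, assuming Puma's cluster-mode lifecycle hooks; the log path is illustrative) is to log each worker boot from the config file. A worker PID that reboots without a deploy or restart indicates a crashed worker:

```ruby
# config/puma.rb -- log worker boots to spot unexpected worker deaths.
# on_worker_boot is Puma's standard cluster-mode hook; the log path
# below is an assumption for illustration.
on_worker_boot do
  File.open("log/puma_workers.log", "a") do |f|
    f.puts "#{Time.now} worker #{Process.pid} booted"
  end
end
```

If new boot lines appear mid-traffic, the master is respawning workers that died, and the timestamps can be correlated with the EPIPE errors.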
Also, I should update the title of the issue; the bigger problem for me is that sometimes I get an error page from nginx (I presume): the page loads for some time and then I get an error, probably a 504 from nginx.
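A 504 from nginx typically means the upstream (here, Puma) did not respond within nginx's proxy timeout. A hedged sketch of the relevant directives (the socket path and timeout values are illustrative, not taken from this thread):

```nginx
# Upstream pointing at Puma's unix socket (path is an assumption).
upstream puma {
    server unix:/tmp/puma.sock;
}

server {
    location / {
        proxy_pass http://puma;
        proxy_connect_timeout 5s;   # time to establish the connection
        proxy_read_timeout    60s;  # time to wait for Puma's response
    }
}
```

If requests queue behind slow or dying workers longer than `proxy_read_timeout`, nginx gives up and serves the 504.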
I have the same problem too. It occurred under high traffic.
I had the same problem on Heroku. Reverting to puma 2.0.1 fixed this for me, for now.
@tkoenig What version were you on?
Master at the time. Maybe it could have been a low-memory issue?
@mrbrdo Does this still happen with 2.2.2?
Not sure, but I am not getting any reports about issues anymore... I did upgrade the RAM on the server, though. I'm still on master, I think.
It happened to me with puma 2.1.1. I'm going to try the latest version soon and report back if I still face this issue.
Still happening with Puma 2.4.0… any idea?
For me it was too little RAM on a VPS, I think. Do you have enough RAM?
Hello, I haven't tried it again, but the server has enough RAM (it's a dedicated server).
It still has 3GB free - that must be enough, right?
@evanphx Any more information that you need to fix this? At 2.5.1 now…
Edit: for those asking about memory: 16GB total, 8GB used, 7GB free.
I am seeing the same issue as @kenips on MRI 2.0.0-p247 and puma 2.5.1, whether I restart via
Edit: The same issue occurs with either 1 or 2 workers, and any thread count.
I suggest opening a new issue, since it's not the same problem; I was getting these errors seemingly at random, not when performing a hot/cold restart. So it's not the same issue.
This issue happens on JRuby 1.7.4 as well.
If anyone is still seeing this, please open a new issue with a backtrace and reproduction information.
Is there a specific issue for this yet, or a specific issue naming convention that should be defined? The logs that I have are posted on issue #360. I'll see if I can get a deeper trace.
This is all I have; it repeats indefinitely for each client.
|
Well, the
I just double-checked to see whether running multiple instances of puma might be an issue. However, locust.io still DoSes a single instance of puma after hitting it at around 300-350 RPS. The errors are the same, too.
Pull / Fix #369 |
Hey,
I'm experiencing very slow requests, especially, it seems, if I don't access the server for a while. It feels similar to how Heroku works on the free tier (as if it has to start up the whole Rails app first). After the first request goes through, it works pretty fast (with occasional hiccups, though).
Any idea? I'm using MRI with 3 workers and a thread config of 1:8.
EDIT: I updated the description of the problem below... I am getting errors; it's as if sometimes puma can't boot in time to serve the request.