Memory Leak after update from 3.0.9 to 3.1.2 #401
Comments
Hi @nickooolas Thanks for reporting this, I will look into it. Could you share your
@nickooolas also, could you tell me more about your workers: are they busy all the time, do they do any heavy processing? I have an app running on Heroku (low SQS activity) and I haven't seen any spike so far.
@phstc thanks for getting back so quickly, here are the details:

My workers are actually quite often not busy, just sitting there polling queues (generally empty receives) and idling probably 80% of the time, but I still keep the delay quite low (5) so they check regularly, because I want them to be responsive when there is work to be done. The nature of the work is quite straightforward: a bit of XML parsing, a couple of database writes, then a hand-off to a service class for some PDF generation, and finally some Slack notifications. What else could I run to debug the cause of this with the shift to 3.1.x?
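For reference, a minimal sketch of what a worker along those lines might look like; the class, queue name, and the commented-out service calls are placeholders, not the actual app code:

```ruby
require 'shoryuken'
require 'nokogiri'

# Hypothetical worker resembling the workload described above.
class OrderDocumentWorker
  include Shoryuken::Worker

  # auto_delete removes the message from SQS once perform returns without raising
  shoryuken_options queue: 'orders', auto_delete: true

  def perform(_sqs_msg, body)
    doc    = Nokogiri::XML(body)      # a bit of XML parsing
    number = doc.at('number')&.text   # a couple of database writes would go here (e.g. Order.create!)
    # ...hand off to a service class for PDF generation, then Slack notifications
    Shoryuken.logger.info("processed order #{number}")
  end
end
```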
hm @nickooolas one of the biggest changes between 3.1.x and 3.0.x is the executor (the concurrent-ruby thread pool) used to process messages. Could you try this monkey patch, which pins it to a bounded FixedThreadPool?

```ruby
module Shoryuken
  class Launcher
    private

    def executor
      Concurrent::FixedThreadPool.new(5, max_queue: 5) # given that 5 is your concurrency
    end
  end
end
```
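The idea behind a patch like this (my reading, not something spelled out in the thread) is to bound the executor: a `FixedThreadPool` with `max_queue` caps both the number of threads and the number of queued tasks, so it cannot grow without limit if work is posted faster than it completes. To try it, the snippet just needs to be loaded before the launcher starts, e.g. from the file passed to `-r` or from a Rails initializer.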
@nickooolas actually this
Hey @phstc - I ran the monkey patch. Let me know if there are any other things you'd like me to try out. Otherwise, what else could I do to isolate an issue with my environment? Cheers!
Hi, I'm having the same problem after the update to 3.1.2. Cheers,
@nickooolas I'm trying to reproduce that locally, let's see what I can find.

```sh
bundle exec ./bin/shoryuken -r ./tmp/test.rb -q test -L ./tmp/shoryuken.log -P ./tmp/shoryuken.pid -d
top -pid $(cat tmp/shoryuken.pid)
```
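For context, the file passed to `-r` just needs to define a worker for the queue being polled. The actual `./tmp/test.rb` isn't shown in the thread, so this is only an assumption of what it could contain:

```ruby
# ./tmp/test.rb — hypothetical contents; loaded by the shoryuken CLI, which
# requires the shoryuken gem itself before this file is evaluated.
class TestWorker
  include Shoryuken::Worker

  shoryuken_options queue: 'test', auto_delete: true

  def perform(_sqs_msg, body)
    Shoryuken.logger.info("received: #{body}")
  end
end
```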
@nickooolas I couldn't reproduce it locally. I'm wondering if it may be conflicting with another gem or even with the workers themselves. If I create a repo with my test project, would you be able to test it? I just want to make sure that you can still reproduce it with the same project (same gems).

@nickooolas @ydkn which version of the aws-sdk are you running?

```
bundle list
aws-sdk-core (2.10.9)
```
Hi @phstc - no worries, happy to help out there; let me know once you've set up the test repo. In terms of aws-sdk, I got the following:
Hi @nickooolas Awesome! Could you try this? https://github.com/phstc/test-memory-leak-401
Hi @phstc I tested that out today - it looks like the memory leak is still occurring with that project as well. In 6 minutes: Let me know what else you'd like me to try to debug this. Cheers
@nickooolas hm, I couldn't reproduce it locally on OS X. I kept it running for 1 h and the memory went from 22 MB to 28 MB. What OS/MEM/CPU are you testing on?
@ydkn would you mind sharing yours too?
We had this happen on a project as well: 64-bit Amazon Linux 2017.03, 8 GB RAM, 2 CPUs. I've already rolled back to our previous 2.x version, but I'm happy to provide any other information you might need.
Hi @phstc First of all, I want to make it clear that I'm currently not sure whether the results below are really pointing in the right direction, so please keep that in mind. My test setup was my existing app polling 3 empty queues every 30 seconds; the long interval is there to avoid dumping the memory while a queue is being polled. Using the following as the base for the debugging process: The files needed for running the same debugging are located here: So far the best result I was able to come up with is this:
The lines indicating a leak in application.rb can be ignored; those are generated by the debugging code itself. I'm not familiar with the inner workings of concurrent-ruby, so I can't really make anything of it right now. Maybe someone else can interpret it. Cheers,
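For anyone who wants to run a similar check, here is a minimal heap-dumping sketch using Ruby's built-in `objspace`. This is only an assumption about the general approach; the actual debugging files referenced above aren't shown in the thread:

```ruby
# dump_heap.rb — hypothetical helper, not the files referenced above
require 'objspace'

# Record allocation sites so the dumps can be traced back to source lines.
ObjectSpace.trace_object_allocations_start

# Let the workers poll for a while, then dump the heap periodically;
# successive dumps can be diffed (e.g. with the heapy gem) to spot retained objects.
Thread.new do
  loop do
    sleep 300
    File.open("heap-#{Time.now.to_i}.dump", 'w') do |f|
      ObjectSpace.dump_all(output: f)
    end
  end
end
```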
I had the same issue too. I worked around it by pinning to v3.0.9. It happens both on my development macOS machine and on EC2. Shoryuken is started with the -R option. The issue happens even when I have no jobs defined at all.
Thanks a lot to all of you, the guidance in the comments helped a ton. I managed to reproduce it by creating an Amazon Linux instance 🎉 and the good news: it's fixed in 3.1.5, could you give it a try? Basically, the dispatch loop was implemented through
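To give a rough picture of the kind of pattern this comment is pointing at, here is a generic sketch only; it is not the actual Shoryuken implementation or the actual fix. A dispatch loop that keeps re-posting itself onto a thread pool allocates a new task (plus whatever it captures) on every iteration, and on some platforms that overhead can accumulate, whereas a plain loop in a dedicated thread allocates nothing extra per iteration:

```ruby
# Generic illustration only — not the Shoryuken code.
require 'concurrent'

def poll_and_process
  sleep 1 # stand-in for receiving and processing SQS messages
end

# Pattern A: the loop re-posts itself onto the pool after every iteration,
# creating a fresh task object each time.
def dispatch_via_repost(pool)
  pool.post do
    poll_and_process
    dispatch_via_repost(pool)
  end
end

# Pattern B: a single long-lived thread with a plain loop.
def dispatch_via_loop
  Thread.new do
    loop { poll_and_process }
  end
end

pool = Concurrent::FixedThreadPool.new(1)
dispatch_via_repost(pool) # or: dispatch_via_loop
```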
@phstc I've been running v3.1.5 since you pushed it, and the memory growth is virtually non-existent. I've deployed it to our staging environment, and it will be used heavily over the next few days. I'll let you know if we face any issues. Thanks a lot for the great work.
@phstc sure, I'll update you if we face any issues.
Hey @phstc, thanks again for your work in building and maintaining Shoryuken, I've been getting great use out of it!

After an unintended `bundle update` the other day, which bumped `shoryuken` from `3.0.9` to `3.1.2`, I noticed my workers failing on Heroku, and a quick look at the stats indicated a significant memory leak (10 MB / minute).

After ruling out any other gem updates as the cause of the increase, I narrowed it down to the update to `3.1.2` alone. I did this by reverting to an existing working branch with only the `shoryuken` version changed in the `Gemfile.lock` from `3.0.9` to `3.1.2`, which caused the worker to leak memory quite quickly.

I don't know much about memory debugging, so I can't provide any more info there, but I would be happy to run any recommended debugging tools to get to the bottom of why this has started occurring in `3.1.2`.

Cheers,
Nick