Memory Leak with Ruby 2.1? #1421

Closed
mperham opened this Issue Jan 9, 2014 · 10 comments

Owner

mperham commented Jan 9, 2014

Several people have reported that Sidekiq processes with Ruby 2.1 appear to have a memory leak.

If anyone can provide more detail that narrows down the cause (ideally using the new ObjectSpace APIs provided in 2.1), you'd be a hero in my eyes.
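
For reference, the kind of 2.1 ObjectSpace usage being asked for looks roughly like this (a minimal sketch; the dump path and where you run it are just examples):

```ruby
# Minimal sketch of Ruby 2.1's allocation tracing; the dump path is an example.
require 'objspace'

ObjectSpace.trace_object_allocations_start

# ... let the suspect workload run for a while ...

GC.start
File.open('/tmp/heap.json', 'w') do |f|
  # Each line is a JSON object describing one live heap slot, including the
  # file/line that allocated it (because tracing was enabled above).
  ObjectSpace.dump_all(output: f)
end
```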


skwp commented Jan 9, 2014

Thanks Mike. I intend to come back and investigate this at a later date; unfortunately, right now I have some other priorities. I can tell you that the Sidekiq process appeared to be growing by leaps and bounds. We could easily eat a gig of RAM within 2-3 hours of running in production.

Incidentally, I was looking for good resources on using the ObjectSpace APIs, to see if anyone had built good profiling tools that might help. Is there a good place to tap into Sidekiq, maybe before a worker is executed and after? Basically, adding some hooks before and after each job that can dump total memory usage and/or ObjectSpace info. I think that would be helpful. I'm also concerned about doing something like that in production, where the overhead of ObjectSpace dumps can impact performance. Perhaps we could run one extra worker box that works in slow mode, dumping info.
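
One way to get those before/after hooks is Sidekiq's server middleware. A rough sketch (the class name and log format are hypothetical, and RSS is read from /proc, so Linux only):

```ruby
# Hypothetical sketch of per-job memory logging via Sidekiq server middleware.
# Reads RSS from /proc/self/status, so it assumes Linux.
class MemoryLoggingMiddleware
  def call(worker, job, queue)
    before = rss_kb
    yield
  ensure
    Sidekiq.logger.info "#{worker.class} on #{queue}: RSS #{before}kB -> #{rss_kb}kB"
  end

  private

  def rss_kb
    File.read('/proc/self/status')[/VmRSS:\s+(\d+)/, 1].to_i
  end
end

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add MemoryLoggingMiddleware
  end
end
```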


Owner

mperham commented Jan 9, 2014

How timely. ❤️


skwp commented Jan 9, 2014

Amazing. I love the internet.


Contributor

krasnoukhov commented Jan 9, 2014

Hey @mperham, thanks for sharing my post! I hope it will be helpful for someone.

As far as I learned from some time spent with the profiler, there is no actual leak in Sidekiq itself. I'm not exactly sure, but there was nothing suspicious in the reports generated from the heap dump.

I was also experimenting with long-running workers on Ruby 2.1. My production setup includes a periodic job that quiets the Sidekiq workers and rolls new ones. I temporarily disabled it on one of the machines and let the workers run for ~3 days. Each process grew to a peak of ~1GB and then stayed there, so I concluded that there is no actual leak; it's just bloat caused by some memory-intensive jobs.

Anyway, memory usage has definitely increased compared to 2.0; in my case I was forced to decrease the number of Sidekiq and Puma processes in order to calm down the OOM killer.
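
For anyone curious, the periodic "quiet and roll" approach described above can be as simple as signalling the Sidekiq process. A rough sketch, assuming the PID file path and grace period shown here, and a process supervisor that starts a fresh worker after the old one exits (USR1/TERM were the quiet/shutdown signals for Sidekiq of this era):

```ruby
# Hypothetical sketch of a periodic "quiet then roll" job. Assumes Sidekiq
# writes its PID to this file and a supervisor (upstart/monit/etc.) restarts
# the process after it exits.
pid = File.read('/var/run/sidekiq.pid').to_i

Process.kill('USR1', pid)  # quiet: stop picking up new jobs, finish in-flight ones
sleep 60                   # grace period for running jobs (tune to your job lengths)
Process.kill('TERM', pid)  # shut down; the supervisor boots a fresh, smaller process
```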


Owner

mperham commented Jan 9, 2014

Thanks for sharing @krasnoukhov, much appreciated! 🤘


nathany commented Jan 30, 2014

We've also noticed an increase in memory consumption with Ruby 2.1 (not specific to Sidekiq):
https://discussion.heroku.com/t/tuning-rgengc-2-1-on-heroku/359/6


Owner

mperham commented Feb 8, 2014

According to @wycats, this is due to a change in 2.1's memory configuration. The 2.1 GC's old generation defaults to 128MB, which means 2.1 processes take a lot more memory initially. This can lead to memory errors on Heroku and other memory-constrained platforms.

The discussion above shows how to tune the GC to reduce the memory footprint.
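
The knobs in question are RUBY_GC_* environment variables set before the process boots (for example RUBY_GC_HEAP_GROWTH_MAX_SLOTS or RUBY_GC_OLDMALLOC_LIMIT_MAX). A small sketch for comparing runs with different settings; the stats printed here are just examples, and the key names are as Ruby 2.1 spells them:

```ruby
# Quick way to see what the GC is doing so you can compare runs with different
# RUBY_GC_* environment variables. Key names below are Ruby 2.1's GC.stat
# spelling; later versions renamed some of them.
stats = GC.stat
puts "minor GCs:   #{stats[:minor_gc_count]}"
puts "major GCs:   #{stats[:major_gc_count]}"
puts "live slots:  #{stats[:heap_live_slot]}"
puts "old objects: #{stats[:old_object]}"
```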


maxdemarzi commented Mar 28, 2014

Where What Count
GEM:neography-1.4.1/lib/neography/config.rb Hash 2
GEM:neography-1.4.1/lib/neography/connection.rb String 41050
GEM:neography-1.4.1/lib/neography/connection.rb Array 5666
GEM:neography-1.4.1/lib/neography/connection.rb Time 2830
GEM:neography-1.4.1/lib/neography/connection.rb Hash 1430
GEM:neography-1.4.1/lib/neography/connection.rb HTTPClient 2
GEM:neography-1.4.1/lib/neography/connection.rb Logger 2
GEM:neography-1.4.1/lib/neography/node.rb String 94484
GEM:neography-1.4.1/lib/neography/node.rb Array 440
GEM:neography-1.4.1/lib/neography/node.rb Neography::Node 440
GEM:neography-1.4.1/lib/neography/property_container.rb String 4840
GEM:neography-1.4.1/lib/neography/property_container.rb Array 4800
GEM:neography-1.4.1/lib/neography/property_container.rb Hash 440
GEM:neography-1.4.1/lib/neography/property.rb Proc 8720
GEM:neography-1.4.1/lib/neography/property.rb String 4805
GEM:neography-1.4.1/lib/neography/property.rb RubyVM::Env 2180
GEM:neography-1.4.1/lib/neography/property.rb Array 438
GEM:neography-1.4.1/lib/neography/rest.rb Neography::Connection 2
GEM:neography-1.4.1/lib/neography/rest.rb Hash 2
GEM:neography-1.4.1/lib/neography/rest.rb String 2
GEM:neography-1.4.1/lib/neography/rest/cypher.rb Hash 4
GEM:neography-1.4.1/lib/neography/rest/cypher.rb String 1
GEM:neography-1.4.1/lib/neography/rest/extensions.rb Hash 2
GEM:neography-1.4.1/lib/neography/rest/helpers.rb String 7063
GEM:neography-1.4.1/lib/neography/rest/helpers.rb Regexp 1
GEM:neography-1.4.1/lib/neography/rest/helpers.rb Hash 1
GEM:neography-1.4.1/lib/neography/rest/node_indexes.rb String 2824
GEM:neography-1.4.1/lib/neography/rest/node_indexes.rb Hash 1412
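
A per-file breakdown like the one above can be produced from Ruby 2.1's allocation tracing. A rough sketch (not necessarily how this particular report was generated; where you start tracing and how many rows you print are up to you):

```ruby
# Rough sketch of a "Where / What / Count" report built from allocation tracing.
# Start tracing before the suspect workload runs.
require 'objspace'

ObjectSpace.trace_object_allocations_start
# ... run the workload under suspicion ...
GC.start

counts = Hash.new(0)
ObjectSpace.each_object(Object) do |obj|
  file = ObjectSpace.allocation_sourcefile(obj)
  counts[[file, obj.class]] += 1 if file
end

counts.sort_by { |_, n| -n }.first(30).each do |(file, klass), n|
  puts format('%-60s %-20s %d', file, klass, n)
end
```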

tilo commented Feb 4, 2017
