ThreadError: can't create Thread: Resource temporarily unavailable #4367
Please post your
## here is my Gemfile
## and here is the result of `bundle env`

`(app:2.1)plncikande@iix-ssd [~/app]# bundle env`
Bundler settings
Gemfile
Gemfile.lock
## and here is the error message when I type `bundle install`
Haven't been able to reproduce this on Linux or a Mac machine yet. Could you try updating RubyGems to the latest stable release and seeing if that affects anything?
@RochesterinNYC and @hendranata I have been doing some reading in other issues that reference the
I can confirm/reproduce this error. I upgraded to
Edit: Tried it with Rails
@omninonsense Yes, my problem actually occurred when using a CloudLinux server, and after creating a support ticket with them, they solved the problem quite easily. In this case my server is running under cPanel/WHM: go to the LVE Manager, increase the physical memory for each user, and also change the other settings like PMEM, NPROC & EP.
@hendranata Ah, thanks for pointing that out. If I remember correctly, the user in question didn't have these limits; however, I will double-check. I suppose this isn't really a Bundler issue in itself, though.
nproc limits are also very common on EL6/7, which default to a 1k soft limit for non-root users. This can easily be changed in /etc/security/limits.d/90-nproc.conf, or by removing the file. Our CI environment quickly hit this limit with Bundler 1.12.0 when running a few jobs in parallel. Edit: correction, EL6 has a 1k limit and EL7 has a 4k limit by default. EL7's config file is 20-nproc.conf.
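As an aside, the limit described above can also be inspected from inside Ruby itself; this is a small illustrative snippet, not part of Bundler:

```ruby
# Print the per-user process/thread limit (RLIMIT_NPROC) that the
# current Ruby process runs under. On Linux this is the same limit
# that limits.conf / limits.d configure, and the one Thread.new
# runs into when it raises ThreadError.
soft, hard = Process.getrlimit(:NPROC)
puts "nproc soft limit: #{soft}"
puts "nproc hard limit: #{hard}"
```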
Bundler 1.12.1 - I can confirm that this bug is real and still present. In 1.11.2 the bug is not present.
Re-log into the shell and then run `bundle install`.
Bundler really should not hit a limit as high as 70 nproc. Removing such limits leaves the system open to fork-bomb attacks. Please make Bundler use no more than 4 processes. `--jobs 1` does not make the bug go away. Even if it did, that switch is not enough; there would also need to be a way to take this parameter from an ENV variable, to allow a sysadmin to define a default value.
The fix for this is probably to make https://github.com/bundler/bundler/blob/506f7d42a31f5fde26cba320e4a534ae51715c47/lib/bundler/worker.rb#L28 handle `ThreadError`.
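A minimal sketch of what handling it could look like (hypothetical code, not the actual Bundler patch): rescue `ThreadError` while spawning and continue with however many workers the OS allowed.

```ruby
# Hypothetical sketch: try to spawn `requested` worker threads that
# consume jobs (lambdas) from `queue`. If Thread.new raises ThreadError
# ("Resource temporarily unavailable"), stop spawning and make do with
# the threads already created instead of crashing.
def spawn_workers(requested, queue)
  threads = []
  requested.times do
    begin
      threads << Thread.new do
        while (job = queue.pop)
          job.call
        end
      end
    rescue ThreadError
      break # hit the nproc limit; proceed with fewer workers
    end
  end
  threads
end
```

Each worker exits when it pops a `nil` sentinel, so a caller can shut the pool down by pushing one `nil` per thread and joining them.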
If you're running cPanel on a CloudLinux server, disable shell fork bomb protection in cPanel.
I can confirm this issue in another hosted environment. nproc limits seem to be prominent on lower-end virtual root servers. My limit (I'd need to upgrade my package to raise it) is at 320 processes, and the system idles at around 80 processes. During an installation of OpenProject, this seems to be a really high number. Is it possible that this problem is rooted somewhere else, maybe the use of rbenv, older Ruby versions, etc.?
I'll throw in another confirmation. It doesn't happen on my Mac, but when I tried to deploy (using Capistrano) to Webfaction shared hosting I got the aforementioned error. Thanks for your work on this!
Ruby 2.3.1, installed using rbenv, if that helps.
I can also confirm that downgrading to 1.11.2 "fixes" the issue. Low and even draconian limits on resources are very common in VMs and shared hosting. Of all the things I run in my VM, Bundler should not be the thing that pushes the limits :)
A PR improving this would be greatly appreciated :)
Aside from a PR fixing this, a Docker image or container run configuration, Vagrant box, or another way to reproduce this issue with minimal dependencies (no reliance on external cloud services, etc.) would be great!
https://github.com/will-in-wi/bundler-bug-reproduction This Vagrant configuration uses the real-world limits.conf from my WebFaction host. Instructions to reproduce are in the README.
On my Bluehost hosting, I had this problem. Doing
Confirmed this is broken on my openSUSE Tumbleweed desktop OS. Not a VM. `gem uninstall bundler && gem install bundler -v '< 1.12'` fixed it for me.
I have been able to successfully reproduce this in Vagrant thanks to @will-in-wi. Looking into https://github.com/bundler/bundler/blob/7ae072865e3fc23d9844322dde6ad0f6906e3f2c/lib/bundler/worker.rb, it's obvious that we're not closing the threads (though I have not figured out if we can/should). As a consequence, limits.conf prevents Ruby from creating any more threads. Maybe we can make this more resource-friendly? Something I would like to look into. Here is an example
and my limits.conf file
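One way to watch this happen (an illustrative snippet, not Bundler code): worker threads blocked on an empty queue stay alive indefinitely and show up in `Thread.list`, and each live thread counts against the user's nproc limit. Waking and joining them releases the resources.

```ruby
# Worker threads parked on Queue#pop never die on their own; each one
# stays in Thread.list and counts toward RLIMIT_NPROC. Pushing a falsy
# sentinel wakes each worker so it can exit, and join reclaims it.
queue = Queue.new
before = Thread.list.size
workers = Array.new(5) { Thread.new { while queue.pop; end } }
puts "extra live threads: #{Thread.list.size - before}"
workers.size.times { queue << false } # wake each worker so it exits
workers.each(&:join)
puts "extra live threads: #{Thread.list.size - before}"
```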
Heh, @colby-swandale I was doing the same thing at the same time. #4607 is a PR which should fix the issue. Could you test? Thanks all!
@will-in-wi awesome work! This is looking much better 😃
@joshfleming you saved my day :D
Clean up worker threads once done with them Right now, the thread pools created by CompactIndex are not cleaned up once they are done. I assume that over time, they would be garbage collected, but in the meantime there could be 200+ threads running. Many shared hosts have fork bomb protection set up which kills Bundler. This patch will clean up the threads as soon as they are done, keeping the total number of active threads at any one time to a minimum. Fixes #4367
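The idea in the patch can be sketched roughly like this (a toy pool written for illustration under assumed semantics; it is not the actual diff):

```ruby
# Toy worker pool illustrating the cleanup pattern: when the caller is
# done, one nil sentinel per worker is enqueued and every thread is
# joined, so a finished pool doesn't keep idle threads alive against
# the user's nproc limit.
class TinyPool
  def initialize(size)
    @queue = Queue.new
    @threads = Array.new(size) do
      Thread.new do
        while (job = @queue.pop)
          job.call
        end
      end
    end
  end

  def enq(&job)
    @queue << job
  end

  def stop
    @threads.size.times { @queue << nil } # poison pill per worker
    @threads.each(&:join)                 # reclaim the threads now
  end
end
```

A caller would do something like `pool = TinyPool.new(4)`, enqueue jobs with `pool.enq { ... }`, and call `pool.stop` as soon as the work is done, rather than leaving idle threads parked on the queue.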
The fix was just released in Bundler 1.12.5, so you can use it by running
On RVM with Ruby 2.2.4 and Bundler 1.13.1 the problem is still alive. Debian Jessie.
@stalker37 please open a new issue
This is my environment, and I'm facing the same problem mentioned above on "Debian GNU/Linux 8 (jessie)", Bundler 1.13.7
@azamouchi: Could you post
Amount of memory: I'm surprised to see 0 kB of free memory! Do you think that this could be the problem?
And the /etc/security/limits.conf content
Yeah, that'd be my first guess. Something on your system is taking up all of your memory and leaving none for Bundler to run. There is only 2G of total memory, which is somewhat small for a lot of use cases. You can use `top` or `ps` to find out what is using all of the memory.
@will-in-wi OK, I'll check that. Thank you for your help 👍
@will-in-wi Using top I can see that I have zero free RAM, but there is 2 GB of swap memory that can be used in case of need. Do you mean that even this 2 GB is not enough to install the bundle? I know that swap memory is slower, but at least it can handle the install process, no?
This gets deeper into memory management in Linux than I am knowledgeable about. TL;DR: I don't know. But I'll give my guess so you have a hopefully better place to start from.

My basic understanding is that swap is used such that when memory gets low, memory pages are sent to swap storage based on which pages are least recently used. In that case, I would expect that for the purposes of Bundler you have the full 4G of memory available, albeit getting super slow. However, if the issue is related to this ticket, then it would be increased memory usage due to forking. When a process forks, the heap is shared, but the stack is duplicated. I'm not sure whether, between Ruby and Linux, this happens using a copy-on-write mechanism (which would minimise memory usage) or a full copy (which would potentially increase memory usage). I'm really fuzzy here, so please verify what I'm saying.

It is possible that Bundler is using more than 2G of memory during a run. You might want to run it and watch top while it runs. Another piece of this is that I'm not sure memory is the only way
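For what it's worth, on a Linux host the numbers being discussed here can be read straight out of `/proc/meminfo` (a Linux-only sketch; the file reports values in kB):

```ruby
# Print total/free memory and swap from /proc/meminfo in megabytes.
meminfo = File.read("/proc/meminfo")
%w[MemTotal MemFree SwapTotal SwapFree].each do |key|
  if meminfo =~ /^#{key}:\s+(\d+) kB/
    puts format("%-10s %6d MB", key, Regexp.last_match(1).to_i / 1024)
  end
end
```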
I am facing a strange error.
You can see my screenshot below.
When I ran `bundle`,
it showed an error:
ThreadError: can't create Thread: Resource temporarily unavailable
http://s16.postimg.org/kyoo484gl/errorrrr.png
In this case, I suspect that the problem is actually coming from memory, but I have checked: the free memory is around 2 GB and swap 3 GB.
But I don't know why the error is still coming up.
Can anybody help me?
Thanks