Fix memory leak in cmd parser #404
Conversation
lib/iobroker.c
Outdated
/* add sane max limit, if ulimit is set to unlimited or a very high value we
 * don't want to waste memory for nothing */
if (iobs->max_fds > 100000)
	iobs->max_fds = 100000;
Could we make this configurable just in case someone wants a different value?
I thought about it as well. It is usually limited by ulimit -n. So if you want a smaller value, set a sane ulimit.
Not sure if a config option is required, maybe a compile time constant?
Yeah, I guess it is mainly for larger values that one would need to change it.
There shouldn't be much need to increase it above this limit, I suppose, but a configuration option somewhere (compile time/runtime) seems nicer than a "magic" constant.
Usually ulimit -n has sane values somewhere between 1000 and 10000. Some container managers like docker set higher values, so naemon might end up with a ulimit of millions, which just wastes memory. If a single worker hits the open file limit, simply start more workers.
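For illustration only, a minimal sketch of how a broker could derive its fd limit from the soft ulimit -n via getrlimit() and clamp it; the function name and fallback value are assumptions, not the actual naemon code:

#include <sys/resource.h>

/* sketch: derive the usable fd count from the soft RLIMIT_NOFILE and clamp
 * it, so an unlimited or container-inflated ulimit does not inflate the
 * per-fd tables */
static int sketch_usable_max_fds(void)
{
	struct rlimit rl;
	int max_fds = 1024; /* conservative fallback if getrlimit() fails */

	if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
		max_fds = (int)rl.rlim_cur;

	if (max_fds > 100000)
		max_fds = 100000;

	return max_fds;
}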
force-pushed from f41db3d to a80eccf
I removed the magic... :-)
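For illustration, one way the magic number could be turned into an overridable compile-time constant; the macro name is an assumption and not necessarily what commit a80eccf actually does:

/* default cap on tracked fds; overridable at build time,
 * e.g. CFLAGS="-DIOBROKER_MAX_USABLE_FDS=500000" */
#ifndef IOBROKER_MAX_USABLE_FDS
#define IOBROKER_MAX_USABLE_FDS 100000
#endif

	/* same clamp as before, without the inline magic number */
	if (iobs->max_fds > IOBROKER_MAX_USABLE_FDS)
		iobs->max_fds = IOBROKER_MAX_USABLE_FDS;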
Great, thanks!
Fixing two issues here. If you'd like separate PRs, just tell me :-)
The first one fixes a memory leak in the cmd parser which makes the worker processes grow, which then leads to more CPU usage during all the forking (along with the wasted memory).
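For illustration only, the typical shape of such a leak, a per-command buffer that is not freed on every return path; the function and variable names below are hypothetical and not taken from the actual naemon cmd parser:

#include <stdlib.h>
#include <string.h>

/* hypothetical parser entry point, showing the leak pattern and its fix */
static int parse_command_sketch(const char *raw)
{
	char *buf = strdup(raw); /* per-command copy, grows the worker if leaked */
	if (buf == NULL)
		return -1;

	char *semi = strchr(buf, ';');
	if (semi == NULL) {
		free(buf); /* without this, every malformed command leaks the copy */
		return -1;
	}

	/* ... split and dispatch the command here ... */

	free(buf);
	return 0;
}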
The second came to my attention while searching for the leak: those two allocations were by far the biggest ones on my test system, and both simply depend on the maximum number of open files.
I added a high enough max value to hopefully be future proof without needlessly wasting memory.
I don't think any naemon worker will ever sanely use more than 100k open file handles. I considered adding a log entry or another warning, but at that point there is no access to the log and stderr/stdout is already closed.
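As a back-of-the-envelope illustration of why the cap matters, a sketch with two allocations sized by the fd limit; the entry struct and byte counts are assumptions made for the arithmetic, not measured naemon values:

#include <stdio.h>
#include <stdlib.h>

/* hypothetical per-fd bookkeeping entry, about 32 bytes on a 64-bit system */
struct fd_entry_sketch {
	int fd;
	int flags;
	void *handler;
	void *arg;
	long events;
};

int main(void)
{
	size_t max_fds = 1000000; /* e.g. a container-inflated ulimit -n */

	/* two tables whose size depends only on the fd limit */
	struct fd_entry_sketch *table = calloc(max_fds, sizeof(*table));
	int *index = calloc(max_fds, sizeof(*index));

	/* roughly 32 MB + 4 MB per worker before a single fd is in use,
	 * versus about 3.6 MB with the 100000 cap */
	printf("tables: %zu bytes\n",
	       max_fds * (sizeof(*table) + sizeof(*index)));

	free(table);
	free(index);
	return 0;
}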
With those patches, memory consumption seems stable.