redsocks enters an infinite loop when RLIMIT_NOFILE is reached and a new client connection is attempted #19
Comments
Nice catch! Thanks, I'll try to fix that.
By the way, counting file descriptors is a bad solution for several reasons:
There are actually two issues here: the busy loop burning CPU, and the log flooding it causes. I think that exponential backoff after a failed accept(), capped roughly the way the kernel caps TCP retries (tcp_syn_retries), would be enough. Yes, I'm aware of the "Tuning TCP parameters for the 21st century" trend, and I'm aware that tcp_syn_retries has nothing in common with the client side; I just want to delegate responsibility for the defaults to someone else. :) Comments?

P.S. I wonder, did you find the bug in a real-world situation, or was it just a test case?
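For illustration, a capped exponential backoff on a failed accept() might look roughly like this (just a sketch with made-up names and values, not redsocks code):

```c
#include <stdint.h>

/* Sketch: double the delay after every consecutive accept() failure,
 * cap it at a "safe" maximum, and reset it once accept() succeeds.
 * All identifiers and values here are illustrative. */

#define ACCEPT_BACKOFF_MIN_MS 100   /* first delay after a failure */
#define ACCEPT_BACKOFF_MAX_MS 5000  /* hard cap to stay responsive */

static uint32_t accept_backoff_ms;  /* 0 means "no backoff pending" */

static uint32_t next_accept_backoff(int accept_failed)
{
    if (!accept_failed) {
        accept_backoff_ms = 0;      /* success resets the backoff   */
        return 0;
    }
    if (accept_backoff_ms == 0)
        accept_backoff_ms = ACCEPT_BACKOFF_MIN_MS;
    else if (accept_backoff_ms < ACCEPT_BACKOFF_MAX_MS)
        accept_backoff_ms *= 2;
    if (accept_backoff_ms > ACCEPT_BACKOFF_MAX_MS)
        accept_backoff_ms = ACCEPT_BACKOFF_MAX_MS;
    return accept_backoff_ms;       /* caller sleeps or arms a timer */
}
```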
Libevent actually does use file descriptors internally, which cannot be accounted for in a reliable way; you're right. The rest of the inherited fd's should be closed as part of the daemonization process. Other than that, yes, it's probably not the best way to do things, although it might be simpler than exponential backoff. IMHO you don't need to cap things by the exact RFC specification, just make sure that you're within a "safe" region (e.g. 5-10 seconds).

I think the best approach would be to throttle log messages differently than accept(): send out a "max open files reached" warning once and then again not before, say, 60 seconds have elapsed. Essentially, we care about the fact that the limit has been reached, i.e. that we must raise the given limit, not that it was triggered by a particular client connection. See for example the "MaxClients reached" warning that Apache emits.

To sum things up: any non-zero backoff in accept() will solve the busy-loop issue; personally I'd use something capped at 1-2 seconds in order to guarantee some responsiveness. But IMHO log messages should be throttled independently.

P.S.: I actually encountered this in a real-world situation, with a rather unorthodox use case (using redsocks to tunnel hundreds of Nagios NRPE checks over SOCKS for NAT traversal) and ended up with 12 GB worth of logs within 6 hours.
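A minimal sketch of that kind of once-per-interval warning (illustrative only, not actual redsocks logging code):

```c
#include <stdio.h>
#include <time.h>

/* Emit the "max open files reached" warning at most once per interval,
 * no matter how many accept() calls fail in between. */

#define EMFILE_WARN_INTERVAL 60  /* seconds between identical warnings */

static void warn_max_open_files(void)
{
    static time_t last_warned;
    time_t now = time(NULL);

    if (last_warned == 0 || now - last_warned >= EMFILE_WARN_INTERVAL) {
        fprintf(stderr,
                "accept: too many open files, consider raising RLIMIT_NOFILE "
                "(suppressing this warning for %d seconds)\n",
                EMFILE_WARN_INTERVAL);
        last_warned = now;
    }
}
```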
Regarding log throttling, are you writing logs to syslog? What is your logging configuration?
Yes, I'm using rsyslog. It supports per-process throttling, although not the version in Debian Squeeze: versions of rsyslog before 5.7.1 only support log queues, which means that they actually delay dequeueing from /dev/log (or wherever), effectively blocking every process that tries to write something to syslog while a log storm is going on. So leaving log throttling up to the syslog implementation may cause side effects.
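For reference, the per-process throttling in rsyslog 5.7.1 and later is configured through the imuxsock input module, along these lines (directive names and values should be double-checked against the rsyslog documentation for the installed version):

```
# /etc/rsyslog.conf (legacy-style directives)
# Allow each logging process at most 200 messages per 5-second interval;
# excess messages are dropped by imuxsock instead of being queued.
$ModLoad imuxsock
$SystemLogRateLimitInterval 5
$SystemLogRateLimitBurst 200
```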
Hm... Can you copy-paste some lines of the log?
It goes on like this, forever ;-)
Duplicate message reduction is a feature that must be explicitly enabled in rsyslog (at least under Debian). Even if it were enabled, it would still cause rsyslog to process the messages, although it wouldn't write them to disk.
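For completeness, enabling that duplicate-message reduction in rsyslog is a single directive (off by default in the Debian packaging discussed here):

```
# Collapse consecutive identical messages into "last message repeated N times".
# rsyslog still receives and processes every message; it only avoids writing
# the duplicates to disk.
$RepeatedMsgReduction on
```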
I've added backoff to the master branch, can you test it? My logs look like this:
I have two questions:
Hi Leonid, I just tested it and it seems to work fine. Two remarks:
Thanks for the quick response! P.S.: Will you be releasing a 0.4 version soon, or should I backport the bugfix and upload 0.3 to Debian?
Hi Apollon, there is one issue with (2): oscillation between can-accept/can't-accept states under a connection flood. Also, redsocks needs two file descriptors to handle a TCP-proxied connection, so accepting a connection and then closing it is not really graceful degradation. I'll think a bit more about (2) and will release 0.4 as soon as this bug is closed.
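One way to back off without accepting and immediately closing connections, under libevent's 1.x API, might be to drop the read event on the listening socket when accept() fails with EMFILE/ENFILE and re-add it from a one-shot timer. A rough sketch only, with made-up identifiers, not the actual redsocks implementation:

```c
#include <errno.h>
#include <sys/time.h>
#include <event.h>

/* Illustrative only: pause the accept loop when we run out of file
 * descriptors and resume it after a backoff, instead of accept()ing
 * and closing connections we cannot serve anyway. */

static struct event listener_ev;  /* read event on the listening socket */
static struct event backoff_ev;   /* one-shot timer that re-enables it  */

static void resume_accepting(int fd, short what, void *arg)
{
    (void)fd; (void)what; (void)arg;
    event_add(&listener_ev, NULL);           /* start accepting again */
}

static void on_accept_failure(int accept_errno, long backoff_ms)
{
    if (accept_errno != EMFILE && accept_errno != ENFILE)
        return;                              /* other errors: just retry */

    struct timeval tv = { backoff_ms / 1000, (backoff_ms % 1000) * 1000 };
    event_del(&listener_ev);                 /* stop polling the listener */
    evtimer_set(&backoff_ev, resume_accepting, NULL);
    evtimer_add(&backoff_ev, &tv);           /* ...until the timer fires  */
}
```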
Ok, I've finally made up my mind.
I'll also set
Hi Leonid, your suggestion provides enough dampening and I think it's a good compromise between simplicity and responsiveness. As for min_accept_backoff, I'd go for 100ms, I prefer round numbers ;-) Thanks again for the quick response!
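Presumably the new knob would live in the base section of redsocks.conf, roughly like this (the option name below is just the one used in this thread; the released config may spell it differently, e.g. with an explicit millisecond suffix, so check redsocks.conf.example):

```
base {
    log_debug = off;
    log_info = on;
    log = "syslog:daemon";
    daemon = on;
    redirector = iptables;

    // Assumed spelling, per this discussion: milliseconds to wait
    // before retrying accept() after an EMFILE/ENFILE failure.
    min_accept_backoff = 100;
}
```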
Hi Apollon. Excuse me for the late reply (after the quick response, huh). I've committed the fix; I've tested it a bit and it seems to work, so I tagged it as release-0.4. I would be happy if you gave it a try in a really loaded environment - my tests were rather synthetic :) Leonid.
Hi,
In its main loop, redsocks accept()s new connections and simply retries if accept() returns an error. However, this leads to an infinite loop in the following corner case: once the process hits its RLIMIT_NOFILE limit, every accept() of a new client connection fails with EMFILE, and redsocks retries immediately.
This loop causes 100% CPU utilization and floods the logfile with "redsocks[1187]: accept: Too many open files" until a client connection is closed.
There are a number of ways to solve this (e.g. check whether accept() returned EMFILE and throttle), although IMHO the best one would be to perform a getrlimit(RLIMIT_NOFILE) on startup and keep track of how many fd's are currently in use before accept()ing. setrlimit() support would also be nice, by the way ;-)
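Something along these lines, as a rough and untested sketch of the fd-tracking idea (all names are made up):

```c
#include <errno.h>
#include <sys/resource.h>
#include <sys/socket.h>

/* Read RLIMIT_NOFILE once at startup, count the descriptors we open and
 * close, and skip accept() when we are too close to the limit. */

static rlim_t fd_limit;    /* soft RLIMIT_NOFILE, read at startup  */
static rlim_t fds_in_use;  /* descriptors we believe we are holding */

#define FD_HEADROOM 8      /* slack for libevent internals, logging, etc. */

static int fd_accounting_init(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1)
        return -1;
    fd_limit = rl.rlim_cur;
    return 0;
}

static int try_accept(int listen_fd)
{
    if (fds_in_use + FD_HEADROOM >= fd_limit) {
        errno = EMFILE;    /* pretend we hit the limit and back off */
        return -1;
    }
    int client = accept(listen_fd, NULL, NULL);
    if (client >= 0)
        fds_in_use++;      /* remember to decrement on close()      */
    return client;
}
```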
Thanks,
Apollon