High Load
Precaution 1: 3proxy was not initially developed for high load and is positioned as a SOHO product; the main reason is the "one connection - one thread" model 3proxy uses. 3proxy is known to work with over 200,000 connections under proper configuration, but use it in a production environment under high load at your own risk and do not expect too much.
Precaution 2: This documentation is incomplete and is not sufficient on its own. High load may require very specific system tuning including, but not limited to, specific or customized kernels, builds, settings, sysctls, options, etc. None of this is covered by this documentation.
The number of simultaneous connections per service is limited by the 'maxconn' option. The default maxconn value since 3proxy 0.8 is 500. You may want to set 'maxconn' to a higher value. Under this configuration:

```
maxconn 1000
proxy -p3129
proxy -p3128
socks
```

maxconn for every service is 1000, and there are 3 services running (2 proxy and 1 socks), so across all services there can be up to 3000 simultaneous connections to 3proxy.
Avoid setting 'maxconn' to an arbitrarily high value; it should be carefully chosen to protect the system and the proxy from resource exhaustion. Setting maxconn above the resources available can lead to denial of service conditions.
Each running service requires:
- 1 thread (process)
- 1 socket (file descriptor)
- 1 stack memory segment + some heap memory, ~64K-128K depending on the system

Each connection requires:
- 1 thread (process)
- 2 sockets (file descriptors); for FTP 4 sockets are required
- 1 additional socket (file descriptor) during name resolution for non-cached names
- 1 additional socket during RADIUS authentication or logging
- 1 ephemeral port (3 ephemeral ports for an FTP connection)
- 1 stack memory segment of ~32K-128K depending on the system + at least 16K and up to a few MB (for 'proxy' and 'ftppr') of heap memory. If you are short of memory, prefer 'socks' to 'proxy' and 'ftppr'.
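As an illustrative sizing sketch (the numbers below are examples, not recommendations from the 3proxy documentation): with 3 services at maxconn 1000 each, the worst case is roughly 3000 threads and 6000 connection sockets, plus extra descriptors for name resolution and the listening sockets, so the process file descriptor limit should be raised comfortably above that in the init script or service unit that starts 3proxy:

```
# illustrative config, assuming the descriptor limit was already
# raised externally (e.g. LimitNOFILE=16384 in a systemd unit,
# an assumed value):
# 3 services x maxconn 1000 -> up to ~3000 connections,
# ~6000 client+server sockets plus DNS and listening sockets
maxconn 1000
proxy -p3129
proxy -p3128
socks
```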
Place

```
system "ulimit -Ha >>/tmp/3proxy.ulim.hard"
system "ulimit -Sa >>/tmp/3proxy.ulim.soft"
```

at the beginning (before the first service is started) and at the end of the config file. Perform both a hard restart (kill and start the 3proxy process) and a soft restart (send SIGUSR1 to the 3proxy process), and check that the ulimits recorded to the files match your expectations. Check the manuals / documentation for your system's limitations. You may need to change sysctls or even rebuild the kernel from source. To help with system-dependent settings, since 0.9-devel 3proxy supports different socket options which can be set via the -ol option for the listening socket, -oc for the proxy-to-client socket and -os for the proxy-to-server socket. Example:
```
proxy -olSO_REUSEADDR,SO_REUSEPORT -ocTCP_TIMESTAMPS,TCP_NODELAY -osTCP_NODELAY
```

Available options are system dependent. Check the ephemeral port range for your system and extend it to the number of ports required. The ephemeral range is always limited by the maximum number of ports (64K). To extend the number of outgoing connections above this limit, extending the ephemeral port range is not enough; you need additional actions:
- Configure multiple outgoing IPs
- Make sure 3proxy is configured to use different outgoing IPs, either by setting the external IP via RADIUS:

```
radius secret 1.2.3.4
auth radius
proxy
```
or by using multiple services with different external interfaces, for example:

```
allow user1,user11,user111
proxy -p1001 -e1.1.1.1
flush
allow user2,user22,user222
proxy -p1001 -e1.1.1.2
flush
allow user3,user33,user333
proxy -p1001 -e1.1.1.3
flush
allow user4,user44,user444
proxy -p1001 -e1.1.1.4
flush
```
or via "parent extip" rotation, e.g.:

```
allow user1,user11,user111
parent 1000 extip 1.1.1.1 0
allow user2,user22,user222
parent 1000 extip 1.1.1.2 0
allow user3,user33,user333
parent 1000 extip 1.1.1.3 0
allow user4,user44,user444
parent 1000 extip 1.1.1.4 0
proxy
```
or:

```
allow *
parent 250 extip 1.1.1.1 0
parent 250 extip 1.1.1.2 0
parent 250 extip 1.1.1.3 0
parent 250 extip 1.1.1.4 0
socks
```
Under recent Linux versions you can also start multiple services with different external addresses on a single port, with SO_REUSEPORT on the listening socket, to evenly distribute incoming connections between outgoing interfaces:

```
socks -olSO_REUSEPORT -p3128 -e 1.1.1.1
socks -olSO_REUSEPORT -p3128 -e 1.1.1.2
socks -olSO_REUSEPORT -p3128 -e 1.1.1.3
socks -olSO_REUSEPORT -p3128 -e 1.1.1.4
```
For Web browsing the last two examples are not recommended, because the same client can get a different external address for different requests; choose the external interface with user-based rules instead.
- You may need additional system-dependent actions to use the same port on different IPs, usually by adding the SO_REUSEADDR (SO_PORT_SCALABILITY for Windows) socket option to the external socket. This option can be set (since 0.9-devel) with the -os option:

```
proxy -p3128 -e1.2.3.4 -osSO_REUSEADDR
```
Behavior of SO_REUSEADDR and SO_REUSEPORT differs between systems, and even between kernel versions, and can lead to unexpected results. Specifics are described here. Use these options only if actually required and only if you fully understand the possible consequences. E.g. SO_REUSEPORT can help to establish more connections than the number of client ports available, but it can also lead to connections randomly failing due to ip+port pair collisions if the remote or local system doesn't support this trick.
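Extending the ephemeral port range mentioned above is system dependent. On Linux it is controlled by a sysctl; a typical /etc/sysctl.conf fragment (the values are examples, adjust them for your system) looks like:

```
# assumed Linux sysctl fragment; widens the ephemeral range
# toward the 64K port maximum (example values)
net.ipv4.ip_local_port_range = 1024 65535
```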
For 32-bit systems, address space can be a bottleneck you should consider. If you are short of address space, you can try to use a negative stack size.
There are known race condition issues in the Linux / glibc resolver. The probability of a race condition rises under configurations with IPv6, or with a large number of interfaces, IP addresses or resolvers configured. In this case, install a local recursor and use the 3proxy built-in resolver (nserver / nscache / nscache6). Public resolvers like Google's have rate limits; for a large number of requests install a local caching recursor (ISC BIND named, PowerDNS Recursor, etc).

Currently, 3proxy is not optimized for large ACLs, user lists, etc. All lists are processed linearly. In the devel version you can use RADIUS authentication to avoid user lists and ACLs in 3proxy itself. RADIUS also makes it easy to set the outgoing IP on a per-user basis, or to implement more sophisticated logic. RADIUS is a new beta feature; test it before using it in production.

Every configuration reload requires additional resources. Do not make frequent changes, such as user addition/deletion via the configuration; use alternative authentication methods instead, like RADIUS. The 'force' behavior (default) re-authenticates all connections after a configuration reload, which may be resource consuming with a large number of connections. Consider adding the 'noforce' command before services are started to prevent connection re-authentication.

Using a configuration file directly in 'monitor' can lead to a race condition where the configuration is reloaded while the file is being written. To avoid race conditions:
- Update config files only if there is no lock file
- Create the lock file while the 3proxy configuration is being updated, e.g. with "touch /some/path/3proxy/3proxy.lck". If you generate config files asynchronously, e.g. on a user's request via web, you should consider implementing the existence check and file creation as an atomic operation.
- Add

```
system "rm /some/path/3proxy/3proxy.lck"
```

at the end of the config file to remove the lock after the configuration is successfully loaded.
- Use a dedicated version file to monitor, e.g.:

```
monitor "/some/path/3proxy/3proxy.ver"
```
- After the config is updated, change the version file for 3proxy to reload the configuration, e.g. with "touch /some/path/3proxy/3proxy.ver".
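Putting the steps above together, a minimal config sketch (the paths are examples) might look like:

```
# near the top of the config: watch a dedicated version file,
# not the config file itself
monitor "/some/path/3proxy/3proxy.ver"

# ... services, ACLs, etc. ...

# at the very end: remove the lock created by the config generator,
# signalling that the new configuration was loaded successfully
system "rm /some/path/3proxy/3proxy.lck"
```

The external updater then waits until the lock file is absent, creates it, writes the new config, and touches the version file to trigger the reload.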
```
proxy -osTCP_NODELAY -ocTCP_NODELAY
```

sets TCP_NODELAY for the client (-oc) and server (-os) connections.
Do not use TCP_NODELAY on slow connections with high delays, or when connection bandwidth is the bottleneck.
splice() allows data to be moved between connections without copying it into the process address space. It can speed up the proxy on high-bandwidth connections, if most connections require large data transfers. "-s" enables splice usage. Example:

```
proxy -s
```

Splice is only available on Linux and is currently a beta option available in the devel version. Do not use it in production without testing. Splice requires more system buffers, but reduces process memory usage. Do not use splice if there are a lot of short-living connections with no bandwidth requirements.
Use splice only on high-speed connections (e.g. 10GbE), when the processor or bus is the bottleneck.
TCP_NODELAY and splice do not conflict with each other and can be combined on high-speed connections.
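For example, a single service combining both options (assuming a Linux devel build where "-s" is available):

```
# splice-based data transfer plus TCP_NODELAY on both the
# client (-oc) and server (-os) sides
proxy -s -osTCP_NODELAY -ocTCP_NODELAY
```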