Some things you can tune for optimum performance
in general:
- Increase the value in /proc/sys/net/core/somaxconn to get a larger
  connection backlog queue (try 10000, which is what lhttpd will use).
  On FreeBSD that's 'sysctl -w kern.ipc.somaxconn=10000'
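On Linux the equivalent can be done via sysctl; a minimal sketch (the
persistence step assumes the classic /etc/sysctl.conf location, newer
distributions may prefer a drop-in file under /etc/sysctl.d/):

```shell
# effective immediately
sysctl -w net.core.somaxconn=10000

# persist across reboots
echo "net.core.somaxconn = 10000" >> /etc/sysctl.conf
```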
- keep the log files on a different HDD than the web root
- If firewall policy allows, mark HTTP traffic as not-conntracked:
iptables -t raw -A PREROUTING -p tcp --dport 80 -j NOTRACK
iptables -t raw -A OUTPUT -p tcp --sport 80 -j NOTRACK
  or with the newer syntax:
iptables -t raw -A PREROUTING -p tcp --dport 80 -j CT --notrack
iptables -t raw -A OUTPUT -p tcp --sport 80 -j CT --notrack
This really buys you connection performance.
- Whenever possible use the "mmap" log provider, it really rocks.
  (it requires the log files to be MB-aligned, so you may need to remove
  them first if you previously didn't use the mmap provider)
- split large directories (dirs containing 100k files or so) into smaller ones
  if you use auto-indexing, as most web browsers cannot render large tables quickly
- If you have large bandwidth and a lot of connections, consider increasing your
  TCP send buffers via /proc/sys/net/ipv4/tcp_wmem (min, default, max)
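For example (the values below are illustrative, not recommendations; tune
them to your memory budget and link speed):

```shell
# min, default and max TCP send buffer sizes in bytes
sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

# net.core.wmem_max caps what applications may request via setsockopt(SO_SNDBUF);
# raise it too if the server sets its buffer sizes explicitly
sysctl -w net.core.wmem_max=16777216
```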
- for a large BDP (bandwidth-delay product), consider using tcp-westwood or other
  congestion avoidance algorithms (modprobe tcp_westwood and tune /proc/sys/net/ipv4/tcp_*)
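For example, to switch to Westwood+ (module and sysctl names as in mainline
Linux):

```shell
# load the Westwood+ congestion control module
modprobe tcp_westwood

# make it the default for new connections
sysctl -w net.ipv4.tcp_congestion_control=westwood

# show which algorithms are currently available
sysctl net.ipv4.tcp_available_congestion_control
```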
- You can experiment with lophttpd's send-size scheduling algorithms by
  passing the "-s algo" switch. This lets you choose between different scheduling
  strategies when a lot of clients are handled simultaneously:
o none (default)
    Usually there is no need to change anything. With a good uplink, you
    can serve 25k or more clients on a single core of a commodity PC. Today's
    CPU and NIC speeds don't really limit the number of clients anymore. However,
    if you experience connection drops with really large numbers of clients, you
    can try the algorithms below. Keep in mind that most drops are not caused by
    lophttpd or the OS, but by overloaded network hardware like switches or routers
    which cannot handle the constant high load.
o suspend
This will remove the client from the POLLOUT list for the time it still
has data inside the TCP send buffer from the last send operation
o minimize
This will decrease the chunksize of the next segment to a minimum, so that
the TCP send-queue fills slower. Only happens if there is still data
inside the TCP send buffer (as above).
o static (deprecated)
    This computes the size of the next data segment with a formula that depends
    on how many clients are currently handled. It does not depend on whether there
    is still data in the TCP send buffer (hence 'static'). Because of the dumb
    'static' computation, this does not really adjust itself and is for testing only.
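For example (assuming the binary is invoked as lhttpd, as above; other
options omitted):

```shell
# default behaviour, no special send-size scheduling
lhttpd

# remove clients from the POLLOUT set while their TCP send buffer drains
lhttpd -s suspend

# shrink the next chunk size while the TCP send buffer is still filled
lhttpd -s minimize
```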
- The default number of maximum allowed parallel connections is set to 10,000 (per core).
  If you experience a DoS which tries to exceed that, you can always increase it via -N,
  so that the limit is hard to reach from DSL up-links. However, if the attacker's
  up-link alone is large enough, he can DoS any web server. That's known and should be
  prevented on the border router. If you deliver smaller files (some KB) it's hard to
  DoS you; if you deliver MB-sized files it's easier, as even legit connections last
  longer and keep the number of parallel connections close to the -N limit.
In such cases you should consider traffic shaping for dedicated IP networks.
Netfilter is your friend.
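As a sketch of such shaping with HTB via tc (the interface name eth0, the
10.0.0.0/24 network and the rates are all placeholders, not recommendations):

```shell
# root HTB qdisc; unclassified traffic goes to class 1:10
tc qdisc add dev eth0 root handle 1: htb default 10

# full uplink rate for normal traffic
tc class add dev eth0 parent 1: classid 1:10 htb rate 100mbit

# throttled class for the dedicated network
tc class add dev eth0 parent 1: classid 1:20 htb rate 10mbit ceil 20mbit

# classify traffic destined to that network into the throttled class
tc filter add dev eth0 parent 1: protocol ip u32 match ip dst 10.0.0.0/24 flowid 1:20
```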