Whenever a server would listen on a TCP port but never accept connections, the lhttpc application would leak a ton of processes until the virtual machine was taken down. The cause was how connections were set up within the load balancer: each socket connection attempt could block for many milliseconds, so a queue would build up in the load balancer, and because requests freely spawn processes, this ended up producing more requests than the load balancer could handle. This fix restructures things so that each client is responsible for setting up its own socket and connection, letting the load balancer immediately deny connections to newer processes while older ones are still stuck. Setting a sensible request timeout can then ensure that slow requests won't starve the system.
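The reworked flow can be sketched as follows. This is a language-agnostic illustration in Python, not lhttpc's actual API: the `LoadBalancer`, `checkout`/`checkin`, and `request` names are all hypothetical. The point is that the balancer only tracks capacity and denies immediately when full, while each client opens its own socket with a connect timeout.

```python
import socket
import threading

class LoadBalancer:
    """Tracks in-flight connections; it never opens sockets itself."""
    def __init__(self, max_clients):
        self._lock = threading.Lock()
        self._in_flight = 0
        self._max = max_clients

    def checkout(self):
        # Deny new clients immediately instead of queueing them
        # behind older clients that may be stuck connecting.
        with self._lock:
            if self._in_flight >= self._max:
                return False
            self._in_flight += 1
            return True

    def checkin(self):
        with self._lock:
            self._in_flight -= 1

def request(lb, host, port, connect_timeout=1.0):
    """The client sets up its own socket; the balancer only grants or denies."""
    if not lb.checkout():
        raise RuntimeError("load balancer at capacity")
    try:
        # A server that listens but never accepts makes this block;
        # the timeout bounds how long this client (not the balancer) waits.
        sock = socket.create_connection((host, port), timeout=connect_timeout)
        sock.close()  # real code would send the request here
        return True
    finally:
        lb.checkin()
```

Since connecting happens in the client, a stuck connect only ties up that client's capacity slot, and the balancer stays responsive to everyone else.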