We have completely disabled pooling in our application when calling request (or at least we think we have, by passing pool: false). Now, however, we're trying to understand how pooling works, because we're seeing some odd behavior.
Basically, we use request to fetch a lot of different files (mostly from different hosts) in our "fetcher". We have set up our fetcher so that it never fetches more than X files at any given time (it starts fetching the next as soon as one finishes).
We also measure how much time is spent between when an item enters our fetcher and when it exits (after it has been fetched).
We've noted that when we increase X to, say, 2X, we don't seem to fetch more items per second. At the same time, the processing time for each item increases. This leads us to think that even though we have set pool to false, request is still throttling our fetcher. Could that be the case? How can we check?
Also, if I understand correctly, pooling just keeps a fixed number (maxSockets) of sockets open, and request tries to find the best one to reuse (based on the host information) if one is already open. Is that the case?
If not, can someone explain better?
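For reference, this is roughly how we assume the fetcher passes the option (a minimal sketch; the URL and the timeout value are placeholders, not our actual config — the pool option itself is the one documented in request's README):

```javascript
// Options we pass to request; pool: false is supposed to disable
// the shared agent so no per-host socket cap applies.
const options = {
  url: 'http://example.com/some/file',  // placeholder URL
  pool: false,                          // no pooled agent
  timeout: 10000                        // hypothetical per-fetch timeout
};
// request(options, (err, res, body) => { /* hand body to the fetcher */ });
```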
actually, pooling controls the agent passed to http core. each agent tracks all hosts and throttles to maxSockets per host. setting pool to false should disable the pooling/agent entirely; if it doesn't, that's a bug. test against a server on your own localhost — it's possible that the remote host only allows you one connection per IP, in which case you'll actually get the best performance with an agent pool of maxSockets: 1.
Thanks Mikeal for the explanation. I checked locally and it does seem that everything works as you describe. This probably means that some publishers do in fact limit our connections to one.