use async.mapSeries for --slow installs #4553
Conversation
maybe a flag to set a concurrency limit would be sane; caolan/async has this baked in as mapLimit
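For context, async's mapLimit runs at most n tasks in flight at once. A minimal promise-based equivalent of that behavior (an illustrative sketch only, not npm's or async's actual code) could look like:

```javascript
// Minimal mapLimit-style helper: run `fn` over `items`, never allowing
// more than `limit` tasks to be in flight at the same time.
async function mapLimit(items, limit, fn) {
  const results = new Array(items.length);
  let next = 0;

  // Each worker repeatedly claims the next unprocessed index until
  // the list is exhausted.
  async function worker() {
    while (next < items.length) {
      const i = next++;
      results[i] = await fn(items[i]);
    }
  }

  // Start `limit` workers and wait for all of them to drain the list.
  const workerCount = Math.min(limit, items.length);
  await Promise.all(Array.from({ length: workerCount }, worker));
  return results;
}
```

With a limit of 1 this degenerates to mapSeries behavior, and with a limit equal to items.length it behaves like a plain parallel map, so one knob would cover the whole spectrum a flag might expose.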
I was going to say, I feel like I've seen such a flag around the npm docs somewhere...
I had a look at that, but because of the way that the EDIT: actually, that's incorrect. It'd be depth^n I think... Either way, not constant. There'd need to be some kind of a queue with a limited number of active jobs to make it a constant value.
overall I think this is a valid concern for low-power devices, e.g. there were issues around this when deploying on ninjablocks.
This is a completely valid concern, but accepting this pull request is rather problematic. If we decide to do this, I'd like to do it in stages.
Then you can do something like
Well as I mentioned in the opening sentence, I don't expect this change to actually be accepted. It's more the concept of the problem that I wanted to highlight; I'm not even sure my proposed solution (implementation aside) is the right one. EDIT: the reason it's a pull request and not an issue is that I wanted to "put my money where my mouth is" as it were.
🎉 for "go completely hambone". This is a more fully-fleshed version of #6325, and as such I'm going to close that issue in favor of this one. I think @isaacs has a good summary of an approach we could follow, and I agree that npm's greed for resources makes it scary as a deployment tool, so we do need to do something about this. I personally would prefer to use promises over async (for reasons unrelated to this patch, but entirely related to how I've been spending almost all of my time over the last month – diagnosing and addressing race conditions in caching and installing), but I'm not dead set on that. I'd like to see something like this land soon, because getting rid of all the extraneous locking code has made npm even worse about sucking up all the RAM and bandwidth anywhere near it (although it's also much faster, so there's that). See also #6411, which is another PR biting off another piece of this. It would be great if we could address this problem in a unified way.
+1; my particular problem is that npm smashes my corp proxy and causes connection resets
+1
@iarna is this something that we could incorporate into the installer? Does it belong in the caching layer? Maybe in
What if, rather than attempting to apply something like async.mapSeries or async.mapLimit when making requests, npm made its calls to fetch remote resources through a proxy function that provided a pool of reusable instances of the direct fetch function, and just queued you until one of those was free? This isn't much different from the classic "six simultaneous connections, then you queue" request behavior really. You don't have to think about that, it just works (or doesn't work, if it's not what you wanted). It would seem a lot easier than trying to limit the concurrency of node's algorithms throughout.
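The pooled-proxy idea above could be sketched roughly as follows (purely illustrative; makeLimiter, the pool size, and the task-wrapping shape are assumptions for this sketch, not anything npm actually ships):

```javascript
// Illustrative "pool of fetch slots": at most `poolSize` tasks run at
// once; everyone else waits in a FIFO queue, much like the classic
// six-simultaneous-connections-then-queue request behavior.
function makeLimiter(poolSize) {
  const queue = [];
  let active = 0;

  function pump() {
    // Admit queued tasks while free slots remain.
    while (active < poolSize && queue.length > 0) {
      const { task, resolve, reject } = queue.shift();
      active++;
      Promise.resolve()
        .then(task)
        .then(resolve, reject)
        .finally(() => {
          active--; // a slot has freed up
          pump();   // let the next waiter in
        });
    }
  }

  // Callers pass a function that starts the real work (e.g. a fetch)
  // and get back a promise for its result.
  return function run(task) {
    return new Promise((resolve, reject) => {
      queue.push({ task, resolve, reject });
      pump();
    });
  };
}
```

npm's fetch path would then call something like limitedFetch(() => rawFetch(url)) instead of rawFetch(url) directly; both names are hypothetical stand-ins here. Callers never think about the limit, which is the "it just works" property the comment describes.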
@boutell The way the caching and fetching layer is written prevents this from being a straightforward thing to implement. I'm also not sure it directly addresses the OP's problem without some form of serialization higher up the stack. npm is very Nodelike in its insistence that it DO EVERYTHING RIGHT NOW, and changing that requires some fairly careful redesign in the absence of hacks. |
See #6411 for my thoughts on this, which largely echo @isaacs's above. I would love to see npm become more intelligent about how it uses resources, but I think to do that properly requires it to be baked into npm at the architectural level, instead of done around the edges. This doesn't have to be part of some big huge reimplementation of the core – while I was moving a lot of stuff around when I redid npm's locking code, the actual changes to minimize the locking and inflighting and put it where it was actually necessary were pretty narrow and tailored. This feels like a sketch of a solution, not the solution itself (which I know is what you were proposing, @deoxxa, so all credit to you for framing it this way). I guess what I'd like to see is either something like what @isaacs proposes, or maybe a short document describing what resources npm uses, and how to properly limit them to make npm a better intranet / metered internet / CPU bus citizen, before actually putting things in code. Thanks for starting this discussion, and thanks for your time and work, @deoxxa.
What's wrong with adding a concurrency parameter/flag/config? It's not This feature is really hard to handle through |
This is clearly a ridiculous pull request, and I don't expect it to be accepted. What I would like, however, is to call attention to the fact that during an install operation on a large project, the CPU and memory consumption of npm borders on obscene. I have several projects now that have too many dependencies (about 30 top-level items) to install properly in their production environments, because the environments are allocated according to the resource consumption of the projects themselves, not the installation process.
Rather than installing everything on another system, then moving the resulting files over to the destination system (which is basically what I'm doing now), I'd like it if I could tell npm to just cool it a little bit and maybe not go completely hambone trying to install everything it possibly can at once. This is a proof-of-concept of that idea, in that with this patch applied, and using the `--slow` install option introduced therein, I am able to perform a successful installation of the aforementioned projects in their production environments.

With this patch applied, the install time for browserify (chosen as a commonly-used project with a lot of dependencies) jumps from about a minute to over 15 minutes, which is obviously unacceptable for most use cases. For my case, it's perfectly fine, because there is a delayed, staggered install process anyway across multiple nodes, and this is just some additional latency in that process getting started.
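The mechanism behind --slow boils down to swapping a fully parallel map for async.mapSeries, which processes one item at a time. A plain-promise sketch of those semantics (illustrative only, not the patch's actual code):

```javascript
// Sketch of async.mapSeries semantics: process items strictly one at a
// time, trading install speed for bounded CPU, memory, and bandwidth.
async function mapSeries(items, fn) {
  const results = [];
  for (const item of items) {
    // Each task must finish completely before the next one even starts.
    results.push(await fn(item));
  }
  return results;
}
```

This makes the speed/resource trade-off in the comment above concrete: peak resource use drops to that of a single task, while total wall-clock time becomes the sum of all task durations instead of roughly the longest one.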