Maybe this idea doesn't make sense and/or isn't practical, but as I understand it, reading a file currently involves `stat`-ing it first to learn how big it is, so that a buffer of the right size can be allocated for it.
I'd imagine most files read with Node tend to be smaller than some fairly small threshold, let's say 1MB, so could it make sense to do the following instead?
1. Node grabs a pre-allocated 1MB buffer from a pool of reusable ones.
2. We ask the OS to read up to 1MB out of the target file into it.
3. If we get back less than 1MB of data, we implicitly know the size of the file already.
4. We then allocate a new buffer of exactly the right size and copy the data from the reusable buffer into it.
Basically we skip an `fstat` call, at the expense of slightly higher peak memory usage and a `memcpy` of less than 1MB of data, I guess.
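In userland terms, roughly something like this (just a sketch of the idea; the real change would live in Node's internals, and `readFileSpeculative`, `acquire`/`release` and the pool are all made-up names):

```js
const fs = require('node:fs');

const CHUNK = 1024 * 1024; // speculative read size (1MB)
const pool = [];           // reusable 1MB scratch buffers

const acquire = () => pool.pop() ?? Buffer.allocUnsafe(CHUNK);
const release = (buf) => { pool.push(buf); };

function readFileSpeculative(path) {
  const fd = fs.openSync(path, 'r');
  const scratch = acquire();
  try {
    const n = fs.readSync(fd, scratch, 0, CHUNK, 0);
    if (n < CHUNK) {
      // Short read: for a regular file this means EOF, so we learned the
      // size without ever calling fstat.
      const out = Buffer.allocUnsafe(n);
      scratch.copy(out, 0, 0, n);
      return out;
    }
    // Got exactly 1MB back: the file is probably bigger, so fall back to
    // the old stat-based path for the remainder (alternatives discussed below).
    const size = fs.fstatSync(fd).size;
    const out = Buffer.allocUnsafe(size);
    scratch.copy(out, 0, 0, CHUNK);
    let offset = CHUNK;
    while (offset < size) {
      const m = fs.readSync(fd, out, offset, size - offset, offset);
      if (m === 0) break; // file shrank under us
      offset += m;
    }
    return offset === size ? out : out.subarray(0, offset);
  } finally {
    release(scratch);
    fs.closeSync(fd);
  }
}
```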
If we instead get back exactly 1MB, we can't implicitly tell whether the file ended right there or not; most likely it didn't.
If we wanted to still avoid `stat` calls entirely, we could keep asking for 1MB (or more) at a time until we get back less than that, and then concatenate the buffers. Otherwise, if the concatenation gets arbitrarily expensive or peak memory usage gets out of control, we could get mostly back on the old path: perform a `stat` call, allocate a buffer of the right size, put the 1MB we already have in there, and read the rest.
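The loop-and-concat variant could look something like this, continuing the sketch above (same `fs` and `CHUNK`; `readByConcat` is equally made up):

```js
// Keep reading CHUNK at a time until a short read, then concatenate.
// No stat call at all, but peak memory and the final concat both grow
// with the file size.
function readByConcat(fd, firstChunk) {
  const chunks = [Buffer.from(firstChunk)]; // copy; firstChunk goes back to the pool
  let total = CHUNK;
  while (true) {
    const buf = Buffer.allocUnsafe(CHUNK);
    const n = fs.readSync(fd, buf, 0, CHUNK, total);
    if (n > 0) {
      chunks.push(n === CHUNK ? buf : buf.subarray(0, n));
      total += n;
    }
    if (n < CHUNK) return Buffer.concat(chunks, total);
  }
}
```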
I'd imagine that in a benchmark where lots of small-ish files are read, this should speed things up a bit, especially for very small files. And the most popular JS tools (tsc, webpack, vite, prettier, eslint, etc.) would basically fall ~entirely under this scenario.
Never doing a `stat` call may have unintended side effects though; for example, by the time we've read the first 1MB out of a file, another 1MB or more of data could have been appended to it. So maybe it's better to just pay the price of the `stat` call if we get back exactly 1MB for the first 1MB we ask for.
Potentially this may also slow down reads of files that happen to be sized just above our threshold, since those would pay for both the speculative read and the fallback.
Potentially something a bit more sophisticated could also be done: for example, if the past 3 files we read were very small, we could speculate that the 4th one will be small too, and/or speculate about how big it may be, to avoid slowing down scenarios where lots of files sized above our threshold are read.
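As a toy version of that heuristic (the window of 3 and the 64KB rounding are completely arbitrary, and `recordSize`/`guessReadSize` are made up):

```js
// Track the sizes of the last few files read and use them to size the
// next speculative read, so runs of large files don't pay a double read.
const recentSizes = [];

function recordSize(size) {
  recentSizes.push(size);
  if (recentSizes.length > 3) recentSizes.shift();
}

function guessReadSize() {
  if (recentSizes.length < 3) return CHUNK;
  // Round the largest recent size up to the next 64KB boundary.
  const max = Math.max(...recentSizes);
  return Math.max(CHUNK, Math.ceil(max / 65536) * 65536);
}
```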
Thoughts?