I'm not certain web ReadableStreams support backpressure through this wrapper, but the incoming Node stream is never paused and resumed to apply it. Readable.toWeb reportedly does propagate backpressure properly, so this only affects the fallback path. The result is that older Node versions would read the entire file into memory quickly and then slowly trickle it out over TCP.
const createStreamBody = (stream) => {
  if (useReadableToWeb) return Readable.toWeb(stream);
  return new ReadableStream({
    start(controller) {
      stream.on("data", (chunk) => {
        controller.enqueue(chunk);
      });
      stream.on("error", (err) => {
        controller.error(err);
      });
      stream.on("end", () => {
        controller.close();
      });
    },
    cancel() {
      stream.destroy();
    }
  });
};