push or pull? #15

dominictarr opened this Issue Sep 10, 2012 · 0 comments



The core idea of this module is to make streaming a pull operation, rather than a push operation.
However, the current implementation of pipe (specifically, the internal function flow) munges it back into a push operation.

Consider the metaphor of physical pipes carrying water. Naively, you can get the water to move by increasing the pressure at one end, like a garden hose, or by decreasing the pressure at the other end - like with a drinking straw.

You could consider the old stream api to be a "SquirtStream" -- it is the pressure at the readable end that pushes data through the pipeline. In spirit, I think the new interface wants to be a "SuckStream" -- it is the reader that pulls data through.

Currently, ReadableStream#pipe is in a weird middle ground.

When piped, each segment is forced to accept data until it returns false.
i.e., piping causes each individual segment to start pulling.
This is not completely unreasonable, after all, that is how your esophagus and your veins work.
So, this is like a "SwallowStream".
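The push-until-false behaviour can be sketched with plain mock objects (a hypothetical simplification of the internal flow function, not the actual implementation):

```javascript
// Hypothetical simplification of flow(): the source is read eagerly and
// pushed into the destination until write() returns false (back-pressure).
function flow (source, dest) {
  var chunk
  while ((chunk = source.read()) !== null) {
    if (dest.write(chunk) === false) break // destination is full, stop pushing
  }
}

// Mock source and destination (illustrative, not the real stream classes):
var data = ['a', 'b', 'c', 'd']
var reads = 0
var source = { read: function () { reads++; return data.length ? data.shift() : null } }

var received = []
var dest = { write: function (chunk) { received.push(chunk); return received.length < 2 } }

flow(source, dest)
// reads === 2, received === ['a', 'b']: the source was read even though
// the destination never asked for anything.
```

Note that the reads are driven entirely by the writable side filling up, not by anyone downstream asking for data.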

So what I propose is to allow the writable stream to implement flow (one could default to dest.flow || flow, of course).
Then it would be possible to construct lazy pipelines where source.read() is never called until dest.read() is called.
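A minimal sketch of such a lazy pull chain, using plain hypothetical objects rather than the real stream classes:

```javascript
// Hypothetical pull-based ("SuckStream") chain: nothing is read from the
// source until the far end of the pipeline calls read().
var reads = 0
var source = {
  read: function () { reads++; return 'chunk' + reads }
}

// A pass-through segment just delegates the pull upstream.
function through (src) {
  return { read: function () { return src.read() } }
}

var pipeline = through(through(source))

// At this point reads === 0: building the pipeline read nothing.
var first = pipeline.read() // only now does source.read() run
// reads === 1, first === 'chunk1'
```

The pipeline is constructed without touching the source at all; the demand propagates backwards from the consumer.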


That is, in a pipeline such as readable.pipe(through).pipe(through2), readable.read() is never called until through2.read() is called.

Currently, one chunk is read from readable, but through.write(data) === false.
That chunk is then drained when through2 reads, so through reads again.
So readable has read two chunks, when in fact neither of those reads was necessary yet.

Indeed, no reads should be performed until through2.pipe(writeable) is called.

But what if it could work more like this:

Through.prototype.read = function () {
  return this._source.read()
}

Also, it would enable writable streams to make use of the read(bytes) api, and read only 4 bytes, for example,
while keeping the composable api of a pipeable stream!
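For example, under that scheme a consumer could pull exactly the bytes it needs through a pass-through segment (again a sketch with hypothetical objects, not the real api):

```javascript
// Hypothetical byte-oriented pull chain: the consumer asks for exactly
// 4 bytes and the source hands out no more than that.
var sent = 0
var source = {
  read: function (n) {
    var chunk = Buffer.from('abcdefgh').slice(sent, sent + n)
    sent += chunk.length
    return chunk
  }
}

// Composable pass-through: just delegate the sized read upstream.
var through = { read: function (n) { return source.read(n) } }

var header = through.read(4)
// header.toString() === 'abcd'; only 4 bytes ever left the source.
```

Because each segment simply forwards the requested byte count, the read(bytes) granularity survives composition through the whole chain.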

I'm not sure exactly how this would best be implemented, but I do feel that a SuckStream is much more conceptually simple than a SwallowStream.

@rvagg closed this Dec 31, 2014
