Make compatible with node 0.10.0 streams interface #19

jkingyens opened this Issue Mar 11, 2013 · 10 comments



Node 0.10.0 uses the new streams API (which also seems to include support for duplex streams). Would be great to get support for node 0.10.x+


juliangruber commented Mar 11, 2013

I thought the new stream api is backwards compatible?

Raynos commented Mar 12, 2013

@juliangruber not forward compatible.

You can't .read() an old style stream.

@enosfeedler pipe is pipe. If you need the "finish" event for WritableStreams, we can ask @dominictarr to emit "finish" on through

What part of this is not compatible with 0.10 streams though?

@Raynos what does finish mean?

Raynos commented Mar 12, 2013

@dominictarr finish means you have called .end() on me and I had some data in my internal buffer I still had to flush. I have finished writing that data.

Raynos commented Mar 12, 2013

The use case is doing something like

function handleRequest(req, res) {
  req.pipe(request("/some-uri")).on("finish", function () {
    // safe to continue once the writable side is done
  });
}
So it means that the writable side is done? I don't want to know about buffers. I just want to know it's safe to continue.

If there is no buffered data, it still emits 'finish', right?

For a through stream, 'finish' should be the same thing as 'close'. (not necessarily the same as 'close' for a duplex, though)

Raynos commented Mar 12, 2013

@dominictarr it means the writable side is done

@Raynos you do not explain in sufficient detail, or address enough of the questions I asked you, for that to be a useful answer!

Raynos commented Mar 12, 2013


Given any Writable stream w, it should emit "finish" after end() has been called, once it has no remaining state or outstanding I/O and will not trigger any side effects in the future.

Specifically for the through case: once queue(null) has been called, you can assume it's safe to emit "finish", because the through contract is that once a chunk gets written to the stream, it's the write handler's responsibility to transform it, either synchronously or asynchronously, and then queue(...) one or more values.

Once end gets called, the through stream will either queue(null) immediately, because there is no more data to move through the stream, or it will wait for all the asynchronous transformations to complete and queue(null) eventually. As far as I understand, it's the responsibility of through's users not to queue(null) early and emit "finish" too early.

Now it should be noted that if the through stream is in a paused state, end() has been called, and the user has called queue(null), then the Writable side of the through stream is finished. Its job was to transform the input written to it and to eventually queue all the values and null. The Writable side SHOULD NOT have to wait for the Readable side to be resumed, nor for the internal buffer in through (the one used for correct pause/resume buffering) to be emptied.

This means that "finish" can be emitted before "end".

isaacs commented Mar 12, 2013

If you have an old style stream, you can do this: var r = new Readable(); r.wrap(oldStream);

If you have a new stream, you can treat it like an old stream, and it'll function mostly the same once you start listening for 'data' (or call resume()) to start the flow.
