pipe #1
Comments
There is no pipe. Of course, if you wanted to string together duplexes (I made
I do not encourage pipe or stringing together duplexes. I encourage using higher-order functions that wrap a readable.

var source = ...
var stream2 = map(source, ...)
var stream3 = filter(source, ...)
stream3(destination)
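A minimal sketch of what that style could look like, assuming the protocol used in the snippets later in this thread: a readable is a function that takes a writable, the writable is called as writable(chunk, recurse), calling recurse() pulls the next chunk, and null signals end. fromArray, map, and filter here are illustrative helpers, not the module's actual exports.

```javascript
// Hypothetical pull-style readable protocol (assumption, for illustration):
// readable(writable) where writable(chunk, recurse); null means end.
function fromArray(items) {
  return function readable(writable) {
    var i = 0
    function next() {
      if (i < items.length) writable(items[i++], next)
      else writable(null) // null signals end of stream
    }
    next()
  }
}

// map wraps a readable in another readable; no pipe, no duplex
function map(source, lambda) {
  return function readable(writable) {
    source(function (chunk, recurse) {
      if (chunk === null) writable(null)
      else writable(lambda(chunk), recurse)
    })
  }
}

// filter skips chunks by pulling again instead of forwarding
function filter(source, predicate) {
  return function readable(writable) {
    source(function (chunk, recurse) {
      if (chunk === null) writable(null)
      else if (predicate(chunk)) writable(chunk, recurse)
      else recurse() // drop this chunk, ask for the next one
    })
  }
}

// destination: the writable decides the flow by calling recurse
var out = []
var source = fromArray([1, 2, 3, 4])
var stream2 = map(source, function (x) { return x * 2 })
var stream3 = filter(stream2, function (x) { return x > 4 })
stream3(function (chunk, recurse) {
  if (chunk === null) return
  out.push(chunk)
  recurse()
})
// out is now [6, 8]
```

Note that the destination drives everything: nothing moves until it calls recurse, which is the back-pressure point Raynos describes below.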
What have you got against pipe? It's just a form of composition.
Hmm... with a push stream you pass the destination to the source, but with a pull stream... I made a thing that gives you a pull interface to a stream, https://github.com/dominictarr/strm/blob/master/index.js#L151-L172 and it ends up working like that. Most other programming environments have streaming classes.
@dominictarr the problem with pipe is that it encourages multiple duplex streams. This adds complexity and means that each intermediate stream may need to handle buffering if the stream before it does not respect back pressure. That is unnecessarily complex. @dominictarr I like your idea of changing the API for writable to be
@dominictarr by the added complexity I mean: compare your implementation with mine. Mine has no buffering. Mine does not care whether the mapAsync callback gets run serially, one chunk at a time, or whether the async callback gets called in "parallel" for multiple chunks at once. This is because the recursive-style pull stream API leaves all back pressure and flow of data up to the writable, and not to some pipe implementation that does "some default thing".
You can have simple duplex streams that support buffering but don't implement it. https://github.com/Raynos/recurse-stream/blob/master/map.js is exactly what is happening there when the map is called. This is my favorite feature of reducibles.
Only async duplex streams need buffering.
@dominictarr correct. Async duplex streams are complex. I have avoided that complexity by not doing duplex, which means no pipe. I have written a trivial async map by just wrapping a readable, and every time the writable calls
The thing about recurse-stream is that there is no pipe. Every writable can choose to move data from a readable into itself by whatever strategy it wants. I will need to think about end, error and close more.
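A hedged sketch of what such a trivial async map could look like, under the same assumed readable(writable) / writable(chunk, recurse) protocol as above. The point is that no buffering is needed: the source is only asked to recurse after the async callback has fired and the downstream writable asks for more. The names (asyncMap, fromArray) are illustrative, not recurse-stream's actual API.

```javascript
// Assumed protocol: readable(writable), writable(chunk, recurse), null = end.
function fromArray(items) {
  return function readable(writable) {
    var i = 0
    function next() {
      if (i < items.length) writable(items[i++], next)
      else writable(null)
    }
    next()
  }
}

// asyncMap wraps a readable; lambda gets a node-style callback.
// Back pressure falls out for free: we only ask the source to recurse
// after lambda's callback fires AND the downstream writable recurses.
function asyncMap(source, lambda) {
  return function readable(writable) {
    source(function (chunk, recurse) {
      if (chunk === null) return writable(null)
      lambda(chunk, function (err, mapped) {
        if (err) return writable(err)
        writable(mapped, recurse)
      })
    })
  }
}

var out = []
asyncMap(fromArray([1, 2, 3]), function (x, cb) {
  cb(null, x * 10) // the callback fires synchronously here; it could equally fire later
})(function (chunk, recurse) {
  if (chunk === null || chunk instanceof Error) return
  out.push(chunk)
  recurse()
})
// out is now [10, 20, 30]
```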
Oh, btw, I have a new, simpler async-map implementation: https://github.com/dominictarr/strm/blob/master/index.js#L242-L251
@dominictarr that's not simpler, that just uses three abstractions :)
Well, it's cleaner than the first implementation, but I'm beginning to think you may have a point with pull-streams.
For the complex/lazy stuff I've basically reverted to a pull stream API anyway. So: a push stream makes back pressure a little more complex, but everything else simpler.
readable(pipe(console.log))

function pipe(write) {
  return function writable(chunk, recurse) {
    write(chunk)
    recurse()
  }
}

Now we have a simple function like pipe. As for pumping stuff out of a loop: 20 lines vs 10 lines. Even if you add the size of readstream it's 16 vs 20. I don't think pumping stuff out as a loop is hard; it's just a recursive loop instead of an iterative loop. I do agree that you can't do iterative loops with this pull stream, which is weird and will make
@Raynos is there a way you could obliterate duplex and just make it into another readable stream? I'm okay with it if it's
It's gotta be consistent. Then, all you need for left-to-right pipeability is a curry. pull-streams shouldn't have
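One hedged reading of "all you need is a curry": if each transform is curried into a function of shape source => readable, then left-to-right pipeability is just a left fold over the transforms. Everything below (pipe, fromArray, the curried map and take) is a sketch of that idea, not recurse-stream's API.

```javascript
// Assumed protocol again: readable(writable), writable(chunk, recurse), null = end.
function fromArray(items) {
  return function readable(writable) {
    var i = 0
    function next() {
      if (i < items.length) writable(items[i++], next)
      else writable(null)
    }
    next()
  }
}

// curried transforms: configuration first, then the source stream
function map(lambda) {
  return function (source) {
    return function readable(writable) {
      source(function (chunk, recurse) {
        if (chunk === null) writable(null)
        else writable(lambda(chunk), recurse)
      })
    }
  }
}

function take(n) {
  return function (source) {
    return function readable(writable) {
      var left = n
      source(function (chunk, recurse) {
        if (chunk === null) return writable(null)
        left--
        if (left > 0) writable(chunk, recurse)
        else writable(chunk, function () { writable(null) }) // stop pulling after n chunks
      })
    }
  }
}

// left-to-right composition: fold each curried transform over the stream so far
function pipe(source) {
  var stream = source
  for (var i = 1; i < arguments.length; i++) stream = arguments[i](stream)
  return stream
}

var out = []
pipe(fromArray([1, 2, 3, 4, 5, 6]),
     map(function (x) { return x * 2 }),
     take(3)
)(function (chunk, recurse) {
  if (chunk === null) return
  out.push(chunk)
  recurse()
})
// out is now [2, 4, 6]
```

Note take never pulls a fourth chunk from the source: once its count is spent it answers the next recurse with null instead of asking upstream.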
Also, the benefit of iterative streams is that you don't get stack overflows; with callbacks you have to use nextTick even in situations that don't really require it. I suspect that might cause throughput performance problems, but I'll have to benchmark that.
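To make the stack-overflow point concrete, here is one hedged sketch of trampolining the recursive pull loop (as suggested later in this thread): when the writable calls recurse synchronously, a flag lets a while loop in the readable continue instead of the stack growing by one frame per chunk. fromRange and all names are illustrative.

```javascript
// Assumed protocol: readable(writable), writable(chunk, recurse), null = end.
function fromRange(n) {
  return function readable(writable) {
    var i = 0
    var inLoop = false
    var askedAgain = false

    function recurse() {
      if (inLoop) { askedAgain = true } // synchronous ask: let the loop below continue
      else pump()                       // asynchronous ask: restart the loop
    }

    function pump() {
      inLoop = true
      do {
        askedAgain = false
        if (i < n) {
          writable(i++, recurse)
        } else {
          inLoop = false
          return writable(null)
        }
      } while (askedAgain)
      inLoop = false
    }

    pump()
  }
}

var count = 0
fromRange(100000)(function (chunk, recurse) {
  if (chunk === null) return
  count++
  recurse() // synchronous recursion: a naive implementation would push
            // 100000 frames onto the stack; the trampoline keeps it flat
})
// count === 100000
```

This avoids both the stack overflow and the nextTick-per-chunk cost dominictarr worries about, at the price of a slightly hairier readable.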
@dominictarr I'm thinking I want some kind of

function customAdd(stream, number) {
  return function readable(writable) {
    stream(function (chunk, recurse) {
      (chunk === null || chunk instanceof Error) ?
        writable(chunk) :
        writable(chunk + number, recurse)
    })
  }
}

chain(readable)
  .take(5)
  .map(function (i) { return i * 2 })
  .curry(customAdd, 2)
  (writable)

@dominictarr I agree pull streams don't have writables; they have readables and readers. I just call
@dominictarr as for stack overflows I just want to put tail call optimization / trampolining in
What if duplex is actually
then you can chain them together. What if map was

function map(lambda) {
  return function duplex(stream) {
    return function readable(writable) {
      stream(function (chunk, recurse) {
        (chunk === null || chunk instanceof Error) ?
          writable(chunk) :
          writable(lambda(chunk), recurse)
      })
    }
  }
}

(map(function (i) { return i * 2 })
  (take(5)
    (source)))
  (writable)

Of course this is lisp-style chaining so it will still need sugar.
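One possible shape for that sugar, assuming the curried duplex style just shown: chain wraps a readable in an object whose methods apply a duplex and re-wrap, so the lisp-style nesting reads left to right. All of chain, map, take, and fromArray here are hypothetical sketches.

```javascript
// Assumed protocol: readable(writable), writable(chunk, recurse), null = end.
function fromArray(items) {
  return function readable(writable) {
    var i = 0
    function next() {
      if (i < items.length) writable(items[i++], next)
      else writable(null)
    }
    next()
  }
}

function map(lambda) {
  return function duplex(source) {
    return function readable(writable) {
      source(function (chunk, recurse) {
        if (chunk === null || chunk instanceof Error) writable(chunk)
        else writable(lambda(chunk), recurse)
      })
    }
  }
}

function take(n) {
  return function duplex(source) {
    return function readable(writable) {
      var left = n
      source(function (chunk, recurse) {
        if (chunk === null || chunk instanceof Error) return writable(chunk)
        left--
        if (left > 0) writable(chunk, recurse)
        else writable(chunk, function () { writable(null) }) // stop after n chunks
      })
    }
  }
}

// the sugar: each method applies a duplex to the wrapped readable and re-wraps
function chain(readable) {
  return {
    map: function (f) { return chain(map(f)(readable)) },
    take: function (n) { return chain(take(n)(readable)) },
    pipe: function (writable) { return readable(writable) }
  }
}

var out = []
chain(fromArray([1, 2, 3, 4, 5]))
  .map(function (i) { return i * 2 })
  .take(2)
  .pipe(function (chunk, recurse) {
    if (chunk === null) return
    out.push(chunk)
    recurse()
  })
// out is now [2, 4]
```

The wrapper costs one closure per step but keeps the underlying streams plain functions, so the two styles stay interchangeable.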
This module is deprecated. Look at simple or pull streams instead.
How does pipe work with recurse-stream? Is there a consistent API?
I'm seeing a lot of
but then map is like