doc: Pull in docs from node v0.10.0-pre

commit ce225aa7c4d6d2efb47e6bacd79d8395a6d295d6 1 parent bcbe9a7
@isaacs isaacs authored
Showing with 589 additions and 326 deletions.
  1. +589 −326 README.md
@@ -1,483 +1,666 @@
# readable-stream
- Stability: 1 - Experimental
+A new class of streams for Node.js
-A new kind of readable streams for Node.js
+This module provides the new Stream base classes introduced in Node
+v0.10, for use in Node v0.8. You can use it in programs that must
+work with Node v0.8 while remaining forward-compatible with v0.10
+and beyond. When you drop support for v0.8, you can remove this
+module and use the native streams instead.
-This is an abstract class designed to be extended. It also provides a
-`wrap` method that you can use to provide the simpler readable API for
-streams that have the "readable stream" interface of Node 0.8 and
-before.
+This is almost exactly the same codebase as appears in Node v0.10.
+However:
-Note that Duplex, Transform, Writable, and PassThrough streams are also
-provided as base classes. See the full API details below.
+1. The exported object is actually the Readable class. Decorating the
+ native `stream` module would be global pollution.
+2. In v0.10, you can safely use `base64` as an argument to
+ `setEncoding` in Readable streams. However, in v0.8, the
+ StringDecoder class has no `end()` method, which is problematic for
+ Base64. So don't use `setEncoding('base64')` when running on v0.8;
+ it will produce broken output.
-## Justification
+Other than that, the API is the same as `require('stream')` in v0.10,
+so the API docs are reproduced below.
-<!-- misc -->
+----------
-Writable streams in node are relatively straightforward to use and
-extend. The `write` method either returns `false` if you would like
-the user to back off a bit, in which case a `drain` event at some
-point in the future will let them continue writing, or anything other
-than false if the bytes could be completely handled and another
-`write` should be performed, or The `end()` method lets the user
-indicate that no more bytes will be written. That's pretty much the
-entire required interface for writing.
+ Stability: 2 - Unstable
-However, readable streams in Node 0.8 and before are rather
-complicated.
+A stream is an abstract interface implemented by various objects in
+Node. For example, a request to an HTTP server is a stream, as is
+stdout. Streams are readable, writable, or both. All streams are
+instances of [EventEmitter][].
-1. The `data` events start coming right away, no matter what. There
- is no way to do other actions before consuming data, without
- handling buffering yourself.
-2. If you extend the interface in userland programs, then you must
- implement `pause()` and `resume()` methods, and take care of
- buffering yourself.
-3. In many streams, `pause()` was purely advisory, so **even while
- paused**, you still have to be careful that you might get some
- data. This caused a lot of subtle bugs.
+You can load the Stream base classes by doing `require('stream')`.
+There are base classes provided for Readable streams, Writable
+streams, Duplex streams, and Transform streams.
-So, while writers only have to implement `write()`, `end()`, and
-`drain`, readers have to implement (at minimum):
+## Compatibility
-* `pause()` method
-* `resume()` method
-* `data` event
-* `end` event
+In earlier versions of Node, the Readable stream interface was
+simpler, but also less powerful and less useful.
-And read consumers had to always be prepared for their backpressure
-advice to simply be ignored.
+* Rather than waiting for you to call the `read()` method, `'data'`
+ events would start emitting immediately. If you needed to do some
+ I/O to decide how to handle data, then you had to store the chunks
+ in some kind of buffer so that they would not be lost.
+* The `pause()` method was advisory, rather than guaranteed. This
+ meant that you still had to be prepared to receive `'data'` events
+ even when the stream was in a paused state.
-If you are using a readable stream, and want to just get the first 10
-bytes, make a decision, and then pass the rest off to somewhere else,
-then you have to handle buffering, pausing, slicing, and so on. This
-is all rather brittle and easy to get wrong for all but the most
-trivial use cases.
+In Node v0.10, the Readable class described below was added. For
+backwards compatibility with older Node programs, Readable streams
+switch into "old mode" when a `'data'` event handler is added, or when
+the `pause()` or `resume()` methods are called. The effect is that,
+even if you are not using the new `read()` method and `'readable'`
+event, you no longer have to worry about losing `'data'` chunks.
-Additionally, this all made the `reader.pipe(writer)` method
-unnecessarily complicated and difficult to extend without breaking
-something. Backpressure and error handling is especially challenging
-and brittle.
+Most programs will continue to function normally. However, this
+introduces an edge case in the following conditions:
-### Solution
+* No `'data'` event handler is added.
+* The `pause()` and `resume()` methods are never called.
-<!-- misc -->
+For example, consider the following code:
-The reader does not have pause/resume methods. If you want to consume
-the bytes, you call `read()`. If bytes are not being consumed, then
-effectively the stream is in a paused state. It exerts backpressure
-on upstream connections, doesn't read from files, etc. Any data that
-was already in the process of being read will be placed in a buffer.
+```javascript
+// WARNING! BROKEN!
+net.createServer(function(socket) {
-If `read()` returns `null`, then a future `readable` event will be
-fired when there are more bytes ready to be consumed.
+ // we add an 'end' listener, but never consume the data
+ socket.on('end', function() {
+ // It will never get here.
+ socket.end('I got your message (but didn\'t read it)\n');
+ });
-This is simpler and conceptually closer to the underlying mechanisms.
-The resulting `pipe()` method is much shorter and simpler. The
-problems of data events happening while paused are alleviated.
+}).listen(1337);
+```
-### Compatibility
+In versions of Node prior to v0.10, the incoming message data would
+simply be discarded. However, in Node v0.10 and beyond, the socket
+will remain paused forever.
-<!-- misc -->
+The workaround in this situation is to call the `resume()` method to
+trigger "old mode" behavior:
-It's not particularly difficult to wrap old-style streams in this
-new interface, or to wrap this type of stream in the old-style
-interface.
+```javascript
+// Workaround
+net.createServer(function(socket) {
-The `Readable` class provides a `wrap(oldStream)` method that takes an
-argument which is an old-style stream with `data` events and `pause()`
-and `resume()` methods, and uses that as the data source. For
-example:
+ socket.on('end', function() {
+ socket.end('I got your message (but didn\'t read it)\n');
+ });
-```javascript
-var r = new Readable();
-r.wrap(oldReadableStream);
+ // start the flow of data, discarding it.
+ socket.resume();
-// now you can use r.read(), and it will emit 'readable' events
-// but the data is based on whatever oldReadableStream spits out of
-// its 'data' events.
+}).listen(1337);
```
-In order to work with programs that use the old interface, some
-magic is unfortunately required. At some point in the future, this
-magic will be removed.
+In addition to new Readable streams switching into old mode, pre-v0.10
+style streams can be wrapped in a Readable class using the `wrap()`
+method.
+
+## Class: stream.Readable
+
+<!--type=class-->
+
+A `Readable Stream` has the following methods, members, and events.
+
+Note that `stream.Readable` is an abstract class designed to be
+extended with an underlying implementation of the `_read(size)`
+method. (See below.)
+
+### new stream.Readable([options])
+
+* `options` {Object}
+ * `highWaterMark` {Number} The maximum number of bytes to store in
+ the internal buffer before ceasing to read from the underlying
+ resource. Default=16kb
+ * `encoding` {String} If specified, then buffers will be decoded to
+ strings using the specified encoding. Default=null
+ * `objectMode` {Boolean} Whether this stream should behave as a
+ stream of objects, meaning that `stream.read(n)` returns a single
+ value instead of a Buffer of size n. Default=false
+
+In classes that extend the Readable class, make sure to call the
+constructor so that the buffering settings can be properly
+initialized.
+
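+A minimal sketch of a conforming subclass (the `Counter` name and its
+behavior are illustrative only, not part of this module):
+
+```javascript
+var Readable = require('readable-stream');
+var util = require('util');
+
+util.inherits(Counter, Readable);
+
+function Counter(options) {
+  // call the Readable constructor so buffering state is initialized
+  Readable.call(this, options);
+  this._index = 0;
+}
+
+Counter.prototype._read = function(n) {
+  if (this._index++ >= 5)
+    this.push(null); // no more data; signal EOF
+  else
+    this.push('' + this._index + '\n');
+};
+```
+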
+### readable.\_read(size)
+
+* `size` {Number} Number of bytes to read asynchronously
+
+Note: **This function should NOT be called directly.** It should be
+implemented by child classes, and called by the internal Readable
+class methods only.
+
+All Readable stream implementations must provide a `_read` method
+to fetch data from the underlying resource.
+
+This method is prefixed with an underscore because it is internal to
+the class that defines it, and should not be called directly by user
+programs. However, you **are** expected to override this method in
+your own extension classes.
+
+When data is available, put it into the read queue by calling
+`readable.push(chunk)`. If `push` returns false, then you should stop
+reading. When `_read` is called again, you should start pushing more
+data.
+
+The `size` argument is advisory. Implementations where a "read" is a
+single call that returns data can use this to know how much data to
+fetch. Implementations where that is not relevant, such as TCP or
+TLS, may ignore this argument, and simply provide data whenever it
+becomes available. There is no need, for example, to "wait" until
+`size` bytes are available before calling `stream.push(chunk)`.
-The `Readable` class will automatically convert into an old-style
-`data`-emitting stream if any listeners are added to the `data` event.
-So, this works fine, though you of course lose a lot of the benefits of
-the new interface:
+### readable.push(chunk)
+
+* `chunk` {Buffer | null | String} Chunk of data to push into the read queue
+* return {Boolean} Whether or not more pushes should be performed
+
+Note: **This function should be called by Readable implementors, NOT
+by consumers of Readable subclasses.** The `_read()` function will not
+be called again until at least one `push(chunk)` call is made. If no
+data is available, then you MAY call `push('')` (an empty string) to
+allow a future `_read` call, without adding any data to the queue.
+
+The `Readable` class works by putting data into a read queue to be
+pulled out later by calling the `read()` method when the `'readable'`
+event fires.
+
+The `push()` method will explicitly insert some data into the read
+queue. If it is called with `null` then it will signal the end of the
+data.
+
+In some cases, you may be wrapping a lower-level source which has some
+sort of pause/resume mechanism, and a data callback. In those cases,
+you could wrap the low-level source object by doing something like
+this:
```javascript
-var r = new ReadableThing();
+// source is an object with readStop() and readStart() methods,
+// and an `ondata` member that gets called when it has data, and
+// an `onend` member that gets called when the data is over.
-r.on('data', function(chunk) {
- // ...
- // magic is happening! oh no! the animals are walking upright!
- // the brooms are sweeping the floors all by themselves!
-});
+var stream = new Readable();
-// this will also turn on magic-mode:
-r.pause();
+source.ondata = function(chunk) {
+ // if push() returns false, then we need to stop reading from source
+ if (!stream.push(chunk))
+ source.readStop();
+};
-// now pause, resume, etc. are patched into place, and r will
-// continually call read() until it returns null, emitting the
-// returned chunks in 'data' events.
+source.onend = function() {
+ stream.push(null);
+};
-r.on('end', function() {
- // ...
-});
+// _read will be called when the stream wants to pull more data in
+// the advisory size argument is ignored in this case.
+stream._read = function(n) {
+ source.readStart();
+};
```
-## Class: Readable
+### readable.unshift(chunk)
-A base class for implementing Readable streams. Override the
-`_read(n,cb)` method to fetch data asynchronously and take advantage
-of the buffering built into the Readable class.
+* `chunk` {Buffer | null | String} Chunk of data to unshift onto the read queue
+* return {Boolean} Whether or not more pushes should be performed
-### Example
+This is the counterpart of `readable.push(chunk)`. Rather than putting
+the data at the *end* of the read queue, it puts it at the *front* of
+the read queue.
-Extend the Readable class, and provide a `_read(n,cb)` implementation
-method.
+This is useful in certain use-cases where a stream is being consumed
+by a parser, which needs to "un-consume" some data that it has
+optimistically pulled out of the source.
```javascript
-var Readable = require('readable-stream');
-var util = require('util');
+// A parser for a simple data protocol.
+// The "header" is a JSON object, followed by 2 \n characters, and
+// then a message body.
+//
+// Note: This can be done more simply as a Transform stream. See below.
-util.inherits(MyReadable, Readable);
+function SimpleProtocol(source, options) {
+ if (!(this instanceof SimpleProtocol))
+ return new SimpleProtocol(source, options);
-function MyReadable(options) {
Readable.call(this, options);
+ this._inBody = false;
+ this._sawFirstCr = false;
+
+ // source is a readable stream, such as a socket or file
+ this._source = source;
+
+ var self = this;
+ source.on('end', function() {
+ self.push(null);
+ });
+
+ // give it a kick whenever the source is readable
+ // read(0) will not consume any bytes
+ source.on('readable', function() {
+ self.read(0);
+ });
+
+ this._rawHeader = [];
+ this.header = null;
}
-MyReadable.prototype._read = function(n, cb) {
- // your magic goes here.
- // call the cb at some time in the future with either up to n bytes,
- // or an error, like cb(err, resultData)
- //
- // The code in the Readable class will call this to keep an internal
- // buffer at a healthy level, as the user calls var chunk=stream.read(n)
- // to consume chunks.
+SimpleProtocol.prototype = Object.create(
+ Readable.prototype, { constructor: { value: SimpleProtocol }});
+
+SimpleProtocol.prototype._read = function(n) {
+ if (!this._inBody) {
+ var chunk = this._source.read();
+
+ // if the source doesn't have data, we don't have data yet.
+ if (chunk === null)
+ return this.push('');
+
+ // check if the chunk has a \n\n
+ var split = -1;
+ for (var i = 0; i < chunk.length; i++) {
+ if (chunk[i] === 10) { // '\n'
+ if (this._sawFirstCr) {
+ split = i;
+ break;
+ } else {
+ this._sawFirstCr = true;
+ }
+ } else {
+ this._sawFirstCr = false;
+ }
+ }
+
+ if (split === -1) {
+ // still waiting for the \n\n
+ // stash the chunk, and try again.
+ this._rawHeader.push(chunk);
+ this.push('');
+ } else {
+ this._inBody = true;
+ var h = chunk.slice(0, split);
+ this._rawHeader.push(h);
+ var header = Buffer.concat(this._rawHeader).toString();
+ try {
+ this.header = JSON.parse(header);
+ } catch (er) {
+ this.emit('error', new Error('invalid simple protocol data'));
+ return;
+ }
+ // now, because we got some extra data, unshift the rest
+ // back into the read queue so that our consumer will see it.
+ var b = chunk.slice(split);
+ this.unshift(b);
+
+ // and let them know that we are done parsing the header.
+ this.emit('header', this.header);
+ }
+ } else {
+ // from there on, just provide the data to our consumer.
+ // careful not to push(null), since that would indicate EOF.
+ var chunk = this._source.read();
+ if (chunk) this.push(chunk);
+ }
};
-var r = new MyReadable();
+// Usage:
+var parser = new SimpleProtocol(source);
+// Now parser is a readable stream that will emit 'header'
+// with the parsed header data.
+```
-r.on('end', function() {
- // no more bytes will be provided.
-});
+### readable.wrap(stream)
-r.on('readable', function() {
- // now is the time to call read() again.
-});
+* `stream` {Stream} An "old style" readable stream
-// to get some bytes out of it:
-var data = r.read(optionalLengthArgument);
-// now data is either null, or a buffer of optionalLengthArgument
-// length. If you don't provide an argument, then it returns whatever
-// it has.
-
-// typically you'd just r.pipe() into some writable stream, but you
-// can of course do stuff like this, as well:
-function flow() {
- var chunk;
- while (null !== (chunk = r.read())) {
- doSomethingWithData(chunk);
- }
- r.once('readable', flow);
-}
-flow();
+If you are using an older Node library that emits `'data'` events and
+has a `pause()` method that is advisory only, then you can use the
+`wrap()` method to create a Readable stream that uses the old stream
+as its data source.
+
+For example:
+
+```javascript
+var OldReader = require('./old-api-module.js').OldReader;
+var oreader = new OldReader();
+var Readable = require('stream').Readable;
+var myReader = new Readable().wrap(oreader);
+
+myReader.on('readable', function() {
+ myReader.read(); // etc.
+});
```
-### new Readable(options)
+### Event: 'readable'
-* `options` {Object}
- * `lowWaterMark` {Number} The minimum number of bytes before the
- stream is considered 'readable'. Default = `0`
- * `bufferSize` {Number} The number of bytes to try to read from the
- underlying `_read` function. Default = `16 * 1024`
+When there is data ready to be consumed, this event will fire.
-Make sure to call the `Readable` constructor in your extension
-classes, or else the stream will not be properly initialized.
+When this event emits, call the `read()` method to consume the data.
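+
+For example, a consumer can drain the internal buffer each time this
+event fires (a minimal sketch; `processChunk` is a hypothetical
+handler, not part of the API):
+
+```javascript
+stream.on('readable', function() {
+  var chunk;
+  while (null !== (chunk = stream.read()))
+    processChunk(chunk);
+});
+```
+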
-### readable.read([size])
+### Event: 'end'
-* `size` {Number} Optional number of bytes to read. If not provided,
- then return however many bytes are available.
-* Returns: {Buffer | null}
+Emitted when the stream has received an EOF (FIN in TCP terminology).
+Indicates that no more `'data'` events will happen. If the stream is
+also writable, it may be possible to continue writing.
-Pulls the requested number of bytes out of the internal buffer. If
-that many bytes are not available, then it returns `null`.
+### Event: 'data'
-### readable.\_read(size, callback)
+The `'data'` event emits either a `Buffer` (by default) or a string if
+`setEncoding()` was used.
-* `size` {Number} Number of bytes to read from the underlying
- asynchronous data source.
-* `callback` {Function} Callback function
- * `error` {Error Object}
- * `data` {Buffer | null}
+Note that adding a `'data'` event listener will switch the Readable
+stream into "old mode", where data is emitted as soon as it is
+available, rather than waiting for you to call `read()` to consume it.
-**Note: This function is not implemented in the Readable base class.**
-Rather, it is up to you to implement `_read` in your extension
-classes.
+### Event: 'error'
-`_read` should fetch the specified number of bytes, and call the
-provided callback with `cb(error, data)`, where `error` is any error
-encountered, and `data` is the returned data.
+Emitted if there was an error receiving data.
-This method is prefixed with an underscore because it is internal to
-the class that defines it, and should not be called directly by user
-programs. However, you **are** expected to override this method in
-your own extension classes.
+### Event: 'close'
-The `size` argument is purely advisory. You may call the callback
-with more or fewer bytes. However, if you call the callback with
-`null`, or an empty buffer, then it will assume that the end of the
-data was reached.
+Emitted when the underlying resource (for example, the backing file
+descriptor) has been closed. Not all streams will emit this.
-### readable.pipe(destination)
+### readable.setEncoding(encoding)
-* `destination` {Writable Stream object}
+Makes the `'data'` event emit a string instead of a `Buffer`. `encoding`
+can be `'utf8'`, `'utf16le'` (`'ucs2'`), `'ascii'`, or `'hex'`.
-Continually `read()` data out of the readable stream, and `write()` it
-into the writable stream. When the `writable.write(chunk)` call
-returns `false`, then it will back off until the next `drain` event,
-to do backpressure.
+The encoding can also be set by specifying an `encoding` field to the
+constructor.
-Piping to multiple destinations is supported. The slowest destination
-stream will limit the speed of the `pipe()` flow.
+### readable.read([size])
-Note that this puts the readable stream into a state where not very
-much can be done with it. You can no longer `read()` from the stream
-in other code, without upsetting the pipe() process. However, since
-multiple pipe destinations are supported, you can always create a
-`PassThrough` stream, and pipe the reader to that. For example:
+* `size` {Number | null} Optional number of bytes to read.
+* Return: {Buffer | String | null}
-```
-var r = new ReadableWhatever();
-var pt = new PassThrough();
+Note: **This function SHOULD be called by Readable stream users.**
-r.pipe(someWritableThing);
-r.pipe(pt);
+Call this method to consume data once the `'readable'` event is
+emitted.
-// now I can call pt.read() to my heart's content.
-// note that if I *don't* call pt.read(), then it'll back up and
-// prevent the pipe() from flowing!
-```
+The `size` argument will set a minimum number of bytes that you are
+interested in. If not set, then the entire content of the internal
+buffer is returned.
-### readable.unpipe([destination])
+If there is no data to consume, or if there are fewer bytes in the
+internal buffer than the `size` argument, then `null` is returned, and
+a future `'readable'` event will be emitted when more is available.
-* `destination` {Writable Stream object} Optional
+Calling `stream.read(0)` will always return `null`, and will trigger a
+refresh of the internal buffer, but otherwise be a no-op.
-Remove the provided `destination` stream from the pipe flow. If no
-argument is provided, then it will unhook all piped destinations.
+### readable.pipe(destination, [options])
-### readable.on('readable')
+* `destination` {Writable Stream}
+* `options` {Object} Optional
+ * `end` {Boolean} Default=true
-An event that signals more data is now available to be read from the
-stream. Emitted when more data arrives, after previously calling
-`read()` and getting a null result.
+Connects this readable stream to `destination`, a Writable stream.
+Incoming data on this stream gets written to `destination`. This
+properly manages back-pressure so that a slow destination will not be
+overwhelmed by a fast readable stream.
-### readable.on('end')
+This function returns the `destination` stream.
-An event that signals that no more data will ever be available on this
-stream. It's over.
+For example, emulating the Unix `cat` command:
-### readable.\_readableState
+ process.stdin.pipe(process.stdout);
-* {Object}
+By default `end()` is called on the destination when the source stream
+emits `end`, so that `destination` is no longer writable. Pass `{ end:
+false }` as `options` to keep the destination stream open.
-An object that tracks the state of the stream. Buffer information,
-whether or not it has reached the end of the underlying data source,
-etc., are all tracked on this object.
+This keeps `writer` open so that "Goodbye" can be written at the
+end.
-You are strongly encouraged not to modify this in any way, but it is
-often useful to read from.
+ reader.pipe(writer, { end: false });
+ reader.on("end", function() {
+ writer.end("Goodbye\n");
+ });
-## Class: Writable
+Note that `process.stderr` and `process.stdout` are never closed until
+the process exits, regardless of the specified options.
-A base class for creating Writable streams. Similar to Readable, you
-can create child classes by overriding the asynchronous
-`_write(chunk,cb)` method, and it will take care of buffering,
-backpressure, and so on.
+### readable.unpipe([destination])
-### new Writable(options)
+* `destination` {Writable Stream} Optional
-* `options` {Object}
- * `highWaterMark` {Number} The number of bytes to store up before it
- starts returning `false` from write() calls. Default = `16 * 1024`
- * `lowWaterMark` {Number} The number of bytes that the buffer must
- get down to before it emits `drain`. Default = `1024`
+Undo a previously established `pipe()`. If no destination is
+provided, then all previously established pipes are removed.
+
+### readable.pause()
+
+Switches the readable stream into "old mode", where data is emitted
+using a `'data'` event rather than being buffered for consumption via
+the `read()` method.
+
+Ceases the flow of data. No `'data'` events are emitted while the
+stream is in a paused state.
+
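+For example, a sketch of throttling a fast source this way (assuming
+`socket` is a net.Socket or similar readable stream):
+
+```javascript
+socket.on('data', function(chunk) {
+  socket.pause();
+  console.log('got %d bytes, pausing for one second', chunk.length);
+  setTimeout(function() {
+    socket.resume();
+  }, 1000);
+});
+```
+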
+### readable.resume()
-Make sure to call the `Writable` constructor in your extension
-classes, or else the stream will not be properly initialized.
+Switches the readable stream into "old mode", where data is emitted
+using a `'data'` event rather than being buffered for consumption via
+the `read()` method.
-### writable.write(chunk, [encoding])
+Resumes the incoming `'data'` events after a `pause()`.
-* `chunk` {Buffer | String}
-* `encoding` {String} An encoding argument to turn the string chunk
- into a buffer. Only relevant if `chunk` is a string.
- Default = `'utf8'`.
-* Returns `false` if you should not write until the next `drain`
- event, or some other value otherwise.
-The basic write function.
+## Class: stream.Writable
-### writable.\_write(chunk, callback)
+<!--type=class-->
-* `chunk` {Buffer}
-* `callback` {Function}
- * `error` {Error | null} Call with an error object as the first
- argument to indicate that the write() failed for unfixable
- reasons.
+A `Writable` Stream has the following methods, members, and events.
+
+Note that `stream.Writable` is an abstract class designed to be
+extended with an underlying implementation of the
+`_write(chunk, encoding, cb)` method. (See below.)
+
+### new stream.Writable([options])
+
+* `options` {Object}
+ * `highWaterMark` {Number} Buffer level when `write()` starts
+ returning false. Default=16kb
+ * `decodeStrings` {Boolean} Whether or not to decode strings into
+ Buffers before passing them to `_write()`. Default=true
+
+In classes that extend the Writable class, make sure to call the
+constructor so that the buffering settings can be properly
+initialized.
+
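+A minimal sketch (the `NullWriter` name is illustrative; it simply
+discards whatever is written to it):
+
+```javascript
+var Writable = require('stream').Writable;
+var util = require('util');
+
+util.inherits(NullWriter, Writable);
+
+function NullWriter(options) {
+  // call the Writable constructor so buffering state is initialized
+  Writable.call(this, options);
+}
+
+NullWriter.prototype._write = function(chunk, encoding, callback) {
+  // discard the chunk, then signal that it has been handled
+  callback();
+};
+```
+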
+### writable.\_write(chunk, encoding, callback)
+
+* `chunk` {Buffer | String} The chunk to be written. Will always
+ be a buffer unless the `decodeStrings` option was set to `false`.
+* `encoding` {String} If the chunk is a string, then this is the
+ encoding type. Ignore if chunk is a buffer. Note that chunk will
+ **always** be a buffer unless the `decodeStrings` option is
+ explicitly set to `false`.
+* `callback` {Function} Call this function (optionally with an error
+ argument) when you are done processing the supplied chunk.
-**Note: This function is not implemented in the Writable base class.**
-Rather, it is up to you to implement `_write` in your extension
-classes.
+All Writable stream implementations must provide a `_write` method to
+send data to the underlying resource.
-`_write` should do whatever has to be done in this specific Writable
-class, to handle the bytes being written. Write to a file, send along
-a socket, encrypt as an mp3, whatever needs to be done. Do your I/O
-asynchronously, and call the callback when it's complete.
+Note: **This function MUST NOT be called directly.** It should be
+implemented by child classes, and called by the internal Writable
+class methods only.
+
+Call the callback using the standard `callback(error)` pattern to
+signal that the write completed successfully or with an error.
+
+If the `decodeStrings` flag is set to `false` in the constructor
+options, then `chunk` may be a string rather than a Buffer, and
+`encoding` will indicate the sort of string that it is. This is to
+support implementations that have optimized handling for certain
+string data encodings. Unless you explicitly set the `decodeStrings`
+option to `false`, you can safely ignore the `encoding` argument and
+assume that `chunk` will always be a Buffer.
This method is prefixed with an underscore because it is internal to
the class that defines it, and should not be called directly by user
programs. However, you **are** expected to override this method in
your own extension classes.
-### writable.end([chunk], [encoding])
-* `chunk` {Buffer | String}
-* `encoding` {String}
+### writable.write(chunk, [encoding], [callback])
-If a chunk (and, optionally, an encoding) are provided, then that
-chunk is first passed to `this.write(chunk, encoding)`.
+* `chunk` {Buffer | String} Data to be written
+* `encoding` {String} Optional. If `chunk` is a string, then encoding
+ defaults to `'utf8'`
+* `callback` {Function} Optional. Called when this chunk is
+ successfully written.
+* Returns {Boolean}
-This method is a way to signal to the writable stream that you will
-not be writing any more data. It should be called exactly once for
-every writable stream.
+Writes `chunk` to the stream. Returns `true` if the data has been
+flushed to the underlying resource. Returns `false` to indicate that
+the buffer is full, and the data will be sent out in the future. The
+`'drain'` event will indicate when the buffer is empty again.
-Calling `write()` *after* calling `end()` will trigger an error.
+Precisely when `write()` will return false is determined by the
+`highWaterMark` option provided to the constructor.
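+
+For example, a writer that respects this back-pressure signal (a
+sketch; `writable` and `getNextChunk` are hypothetical, and
+`getNextChunk` returns `null` when the data is exhausted):
+
+```javascript
+function writeData() {
+  var chunk;
+  while (null !== (chunk = getNextChunk())) {
+    // if write() returns false, wait for 'drain' before continuing
+    if (!writable.write(chunk)) {
+      writable.once('drain', writeData);
+      return;
+    }
+  }
+  writable.end();
+}
+writeData();
+```
+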
-### writable.on('pipe', source)
+### writable.end([chunk], [encoding], [callback])
-Emitted when calling `source.pipe(writable)`. See above for the
-description of the `readable.pipe()` method.
+* `chunk` {Buffer | String} Optional final data to be written
+* `encoding` {String} Optional. If `chunk` is a string, then encoding
+ defaults to `'utf8'`
+* `callback` {Function} Optional. Called when the final chunk is
+ successfully written.
-### writable.on('unpipe', source)
+Call this method to signal the end of the data being written to the
+stream.
-Emitted when calling `source.unpipe(writable)`. See above for the
-description of the `readable.unpipe()` method.
+### Event: 'drain'
-### writable.on('drain')
+Emitted when the stream's write queue empties and it's safe to write
+without buffering again. Listen for it when `stream.write()` returns
+`false`.
-If a call to `writable.write()` returns false, then at some point in
-the future, this event will tell you to start writing again.
+### Event: 'close'
-### writable.on('finish')
+Emitted when the underlying resource (for example, the backing file
+descriptor) has been closed. Not all streams will emit this.
-When the stream has been ended, and all the data in its internal
-buffer has been consumed, then it emits a `finish` event to let you
-know that it's completely done.
+### Event: 'finish'
-This is particularly handy if you want to know when it is safe to shut
-down a socket or close a file descriptor. At this time, the writable
-stream may be safely disposed. Its mission in life has been
-accomplished.
+When `end()` is called and there are no more chunks to write, this
+event is emitted.
-## Class: Duplex
+### Event: 'pipe'
-A base class for Duplex streams (ie, streams that are both readable
-and writable).
+* `source` {Readable Stream}
-Since JS doesn't have multiple prototypal inheritance, this class
-prototypally inherits from Readable, and then parasitically from
-Writable. It is thus up to the user to implement both the lowlevel
-`_read(n,cb)` method as well as the lowlevel `_write(chunk,cb)`
-method on extension duplex classes.
+Emitted when the stream is passed to a readable stream's pipe method.
-For cases where the written data is transformed into the output, it
-may be simpler to use the `Transform` class instead.
+### Event: 'unpipe'
-### new Duplex(options)
+* `source` {Readable Stream}
-* `options` {Object} Passed to both the Writable and Readable
- constructors.
+Emitted when a previously established `pipe()` is removed using the
+source Readable stream's `unpipe()` method.
+
+## Class: stream.Duplex
+
+<!--type=class-->
-Make sure to call the `Duplex` constructor in your extension
-classes, or else the stream will not be properly initialized.
+A "duplex" stream is one that is both Readable and Writable, such as a
+TCP socket connection.
-If `options.allowHalfOpen` is set to the value `false`, then the
-stream will automatically end the readable side when the writable
-side ends, and vice versa.
+Note that `stream.Duplex` is an abstract class designed to be
+extended with an underlying implementation of the `_read(size)`
+and `_write(chunk, encoding, callback)` methods as you would with a Readable or
+Writable stream class.
-### duplex.allowHalfOpen
+Since JavaScript doesn't have multiple prototypal inheritance, this
+class prototypally inherits from Readable, and then parasitically from
+Writable. It is thus up to the user to implement both the low-level
+`_read(n)` method as well as the low-level `_write(chunk, encoding, cb)`
+method on extension duplex classes.
-* {Boolean} Default = `true`
+### new stream.Duplex(options)
-Set this flag to either `true` or `false` to determine whether or not
-to automatically close the writable side when the readable side ends,
-and vice versa.
+* `options` {Object} Passed to both Writable and Readable
+ constructors. Also has the following fields:
+ * `allowHalfOpen` {Boolean} Default=true. If set to `false`, then
+ the stream will automatically end the readable side when the
+ writable side ends and vice versa.
+In classes that extend the Duplex class, make sure to call the
+constructor so that the buffering settings can be properly
+initialized.
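+
+A minimal sketch (the `Echo` name is illustrative; it emits back
+whatever is written to it, much like a PassThrough, and ignores
+back-pressure for brevity):
+
+```javascript
+var Duplex = require('stream').Duplex;
+var util = require('util');
+
+util.inherits(Echo, Duplex);
+
+function Echo(options) {
+  Duplex.call(this, options);
+}
+
+Echo.prototype._write = function(chunk, encoding, callback) {
+  // whatever is written comes back out the readable side
+  this.push(chunk);
+  callback();
+};
+
+Echo.prototype._read = function(n) {
+  // data is pushed from _write, so there is nothing to fetch here
+};
+```
+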
-## Class: Transform
+## Class: stream.Transform
-A duplex (ie, both readable and writable) stream that is designed to
-make it easy to implement transform operations such as encryption,
-decryption, compression, and so on.
+A "transform" stream is a duplex stream where the output is causally
+connected in some way to the input, such as a zlib stream or a crypto
+stream.
-Transform streams are `instanceof` Readable, but they have all of the
-methods and properties of both Readable and Writable streams. See
-above for the list of events and methods that Transform inherits from
-Writable and Readable.
+There is no requirement that the output be the same size as the
+input, the same number of chunks, or arrive at the same time. For
+example, a Hash stream will only ever have a single chunk of output,
+which is provided when the input is ended. A zlib stream will produce
+output that is either much smaller or much larger than its input.
-Override the `_transform(chunk, outputFunction, callback)` method in
-your implementation classes to take advantage of it.
+Rather than implement the `_read()` and `_write()` methods, Transform
+classes must implement the `_transform()` method, and may optionally
+also implement the `_flush()` method. (See below.)
-### new Transform(options)
+### new stream.Transform([options])
-* `options` {Object} Passed to both the Writable and Readable
+* `options` {Object} Passed to both Writable and Readable
constructors.
-Make sure to call the `Transform` constructor in your extension
-classes, or else the stream will not be properly initialized.
+In classes that extend the Transform class, make sure to call the
+constructor so that the buffering settings can be properly
+initialized.
-### transform.\_transform(chunk, outputFn, callback)
+### transform.\_transform(chunk, encoding, callback)
-* `chunk` {Buffer} The chunk to be transformed.
-* `outputFn` {Function} Call this function with any output data to be
- passed to the readable interface.
+* `chunk` {Buffer | String} The chunk to be transformed. Will always
+ be a buffer unless the `decodeStrings` option was set to `false`.
+* `encoding` {String} If the chunk is a string, then this is the
+ encoding type. (Ignore if chunk is a buffer.)
* `callback` {Function} Call this function (optionally with an error
argument) when you are done processing the supplied chunk.
-**Note: This function is not implemented in the Transform base class.**
-Rather, it is up to you to implement `_transform` in your extension
-classes.
+Note: **This function MUST NOT be called directly.** It should be
+implemented by child classes, and called by the internal Transform
+class methods only.
+
+All Transform stream implementations must provide a `_transform`
+method to accept input and produce output.
`_transform` should do whatever has to be done in this specific
Transform class, to handle the bytes being written, and pass them off
to the readable portion of the interface. Do asynchronous I/O,
process things, and so on.
+Call `transform.push(outputChunk)` 0 or more times to generate output
+from this input chunk, depending on how much data you want to output
+as a result of this chunk.
+
Call the callback function only when the current chunk is completely
-consumed. Note that this may mean that you call the `outputFn` zero
-or more times, depending on how much data you want to output as a
-result of this chunk.
+consumed. Note that there may or may not be output as a result of any
+particular input chunk.
This method is prefixed with an underscore because it is internal to
the class that defines it, and should not be called directly by user
programs. However, you **are** expected to override this method in
your own extension classes.
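+
+A minimal sketch of a complete Transform (the `Upper` name is
+illustrative; it upper-cases whatever passes through it):
+
+```javascript
+var Transform = require('stream').Transform;
+var util = require('util');
+
+util.inherits(Upper, Transform);
+
+function Upper(options) {
+  Transform.call(this, options);
+}
+
+Upper.prototype._transform = function(chunk, encoding, callback) {
+  this.push(chunk.toString().toUpperCase());
+  callback();
+};
+```
+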
-### transform.\_flush(outputFn, callback)
+### transform.\_flush(callback)
-* `outputFn` {Function} Call this function with any output data to be
- passed to the readable interface.
* `callback` {Function} Call this function (optionally with an error
argument) when you are done flushing any remaining data.
-**Note: This function is not implemented in the Transform base class.**
-Rather, it is up to you to implement `_flush` in your extension
-classes optionally, if it applies to your use case.
+Note: **This function MUST NOT be called directly.** It MAY be implemented
+by child classes, and if so, will be called by the internal Transform
+class methods only.
In some cases, your transform operation may need to emit a bit more
data at the end of the stream. For example, a `Zlib` compression
@@ -488,18 +671,98 @@ can with what is left, so that the data will be complete.
In those cases, you can implement a `_flush` method, which will be
called at the very end, after all the written data is consumed, but
before emitting `end` to signal the end of the readable side. Just
-like with `_transform`, call `outputFn` zero or more times, as
-appropriate, and call `callback` when the flush operation is complete.
+like with `_transform`, call `transform.push(chunk)` zero or more
+times, as appropriate, and call `callback` when the flush operation is
+complete.
This method is prefixed with an underscore because it is internal to
the class that defines it, and should not be called directly by user
programs. However, you **are** expected to override this method in
your own extension classes.
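+
+A sketch of a Transform that emits output only in `_flush` (using the
+same requires as the `Upper` sketch above; the `Total` name and its
+behavior are illustrative only):
+
+```javascript
+util.inherits(Total, Transform);
+
+function Total(options) {
+  Transform.call(this, options);
+  this._bytes = 0;
+}
+
+Total.prototype._transform = function(chunk, encoding, callback) {
+  this._bytes += chunk.length; // count, but emit nothing yet
+  callback();
+};
+
+Total.prototype._flush = function(callback) {
+  // all input has been consumed; emit the summary and finish
+  this.push('total bytes: ' + this._bytes + '\n');
+  callback();
+};
+```
+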
+### Example: `SimpleProtocol` parser
-## Class: PassThrough
+The example above of a simple protocol parser can be implemented much
+more simply by using the higher level `Transform` stream class.
+
+In this example, rather than providing the input as an argument, it
+would be piped into the parser, which is a more idiomatic Node stream
+approach.
+
+```javascript
+function SimpleProtocol(options) {
+ if (!(this instanceof SimpleProtocol))
+ return new SimpleProtocol(options);
+
+ Transform.call(this, options);
+ this._inBody = false;
+ this._sawFirstCr = false;
+ this._rawHeader = [];
+ this.header = null;
+}
+
+SimpleProtocol.prototype = Object.create(
+ Transform.prototype, { constructor: { value: SimpleProtocol }});
+
+SimpleProtocol.prototype._transform = function(chunk, encoding, done) {
+ if (!this._inBody) {
+ // check if the chunk has a \n\n
+ var split = -1;
+ for (var i = 0; i < chunk.length; i++) {
+ if (chunk[i] === 10) { // '\n'
+ if (this._sawFirstCr) {
+ split = i;
+ break;
+ } else {
+ this._sawFirstCr = true;
+ }
+ } else {
+ this._sawFirstCr = false;
+ }
+ }
+
+ if (split === -1) {
+ // still waiting for the \n\n
+ // stash the chunk, and try again.
+ this._rawHeader.push(chunk);
+ } else {
+ this._inBody = true;
+ var h = chunk.slice(0, split);
+ this._rawHeader.push(h);
+ var header = Buffer.concat(this._rawHeader).toString();
+ try {
+ this.header = JSON.parse(header);
+ } catch (er) {
+ this.emit('error', new Error('invalid simple protocol data'));
+ return;
+ }
+ // and let them know that we are done parsing the header.
+ this.emit('header', this.header);
+
+ // now, because we got some extra data, emit the rest first.
+ var b = chunk.slice(split);
+ this.push(b);
+ }
+ } else {
+ // from there on, just provide the data to our consumer as-is.
+ this.push(chunk);
+ }
+ done();
+};
+
+var parser = new SimpleProtocol();
+source.pipe(parser);
+
+// Now parser is a readable stream that will emit 'header'
+// with the parsed header data.
+```
+
+
+## Class: stream.PassThrough
This is a trivial implementation of a `Transform` stream that simply
passes the input bytes across to the output. Its purpose is mainly
for examples and testing, but there are occasionally use cases where
it can come in handy.
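+
+For example, to observe data flowing between two other streams (a
+sketch; `source`, `destination`, and `logChunk` are hypothetical):
+
+```javascript
+var PassThrough = require('stream').PassThrough;
+var tap = new PassThrough();
+
+tap.on('data', function(chunk) {
+  logChunk(chunk); // inspect the data without disturbing the pipe
+});
+
+source.pipe(tap).pipe(destination);
+```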
+
+
+[EventEmitter]: events.html#events_class_events_eventemitter