everything must be 80 columns like God intended
yunong committed Sep 21, 2015
1 parent e90ce34 commit 32fafa5
Showing 4 changed files with 26 additions and 20 deletions.
6 changes: 3 additions & 3 deletions .jscsrc
@@ -20,9 +20,9 @@
     // alignment rules
     "maximumLineLength": {
         "value": 80,
-        "allowComments": true,
-        "allowUrlComments": true,
-        "allowRegex": true
+        "allowComments": false,
+        "allowUrlComments": false,
+        "allowRegex": false
     },
     "validateIndentation": 4,
 
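The tightened jscs rule above means that comments, URL comments, and regexes are no longer exempt: every line must fit in 80 columns. A toy checker (not jscs itself; the function name is illustrative) for the same idea:

```javascript
'use strict';

// Illustrative sketch of what the stricter jscs rule enforces: with the
// allowComments/allowUrlComments/allowRegex exemptions set to false, no
// line of any kind may exceed maximumLineLength.value (80).
function findLongLines(source, max) {
    return source.split('\n').reduce(function (hits, line, i) {
        if (line.length > max) {
            hits.push({ line: i + 1, length: line.length });
        }
        return hits;
    }, []);
}

// Line 2 is 89 characters long, so it is flagged; line 1 passes.
var src = 'short line\n' + new Array(90).join('x');
var hits = findLongLines(src, 80);
// hits → [{ line: 2, length: 89 }]
```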
27 changes: 16 additions & 11 deletions lib/bunyan_helper.js
@@ -46,24 +46,29 @@ function appendStream(streams, s) {
 
 
 ///--- API
 
 /**
  * A Bunyan stream to capture records in a ring buffer and only pass through
  * on a higher-level record. E.g. buffer up all records but only dump when
  * getting a WARN or above.
  * @public
  * @class
- * @param {Object} opts contains the parameters:
- * @param {Object} opts.stream The stream to which to write when dumping captured
- * records. One of `stream` or `streams` must be specified.
- * @param {Array} opts.streams One of `stream` or `streams` must be specified.
- * @param {Number | String} opts.level The level at which to trigger dumping captured
- * records. Defaults to bunyan.WARN.
- * @param {Number} opts.maxRecords Number of records to capture. Default 100.
- * @param {Number} opts.maxRequestIds Number of simultaneous request id capturing
- * buckets to maintain. Default 1000.
- * @param {Boolean} opts.dumpDefault If true, then dump captured records on the
- * *default* request id when dumping. I.e. dump records logged without
+ * @param {Object} opts.stream The stream to which to write when
+ * dumping captured records. One of `stream`
+ * or `streams` must be specified.
+ * @param {Array} opts.streams One of `stream` or `streams` must be
+ * specified.
+ * @param {Number | String} opts.level The level at which to trigger dumping
+ * captured records. Defaults to
+ * bunyan.WARN.
+ * @param {Number} opts.maxRecords Number of records to capture. Default
+ * 100.
+ * @param {Number} opts.maxRequestIds Number of simultaneous request id
+ * capturing buckets to maintain. Default
+ * 1000.
+ * @param {Boolean} opts.dumpDefault If true, then dump captured records on
+ * the *default* request id when dumping.
+ * I.e. dump records logged without
  * "req_id" field. Default false.
  */
 function RequestCaptureStream(opts) {
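The pattern the JSDoc above describes can be sketched in a few lines. This is not restify's actual `RequestCaptureStream` (which also tracks per-request-id buckets via `maxRequestIds` and `dumpDefault`); the class name and structure here are illustrative, covering only the `stream`, `level`, and `maxRecords` options:

```javascript
'use strict';

// Standalone sketch of the capture-and-dump pattern: buffer every record in
// a ring, and flush the whole ring to the underlying stream only when a
// record at or above the trigger level arrives. Names are illustrative, not
// restify's real implementation.
var WARN = 40; // bunyan's numeric level for 'warn'

function MiniCaptureStream(opts) {
    this.level = opts.level || WARN;          // dump trigger, like opts.level
    this.maxRecords = opts.maxRecords || 100; // ring size, like opts.maxRecords
    this.stream = opts.stream;                // sink, like opts.stream
    this.ring = [];
}

MiniCaptureStream.prototype.write = function write(record) {
    // Buffer every record, evicting the oldest once the ring is full.
    this.ring.push(record);
    if (this.ring.length > this.maxRecords) {
        this.ring.shift();
    }
    // On a record at or above the trigger level, dump the whole buffer.
    if (record.level >= this.level) {
        var self = this;
        this.ring.forEach(function (r) {
            self.stream.write(r);
        });
        this.ring = [];
    }
};

// Usage: two debug records are held back until the warn record arrives,
// then all three are written to the sink in order.
var out = [];
var capture = new MiniCaptureStream({
    level: WARN,
    maxRecords: 100,
    stream: { write: function (r) { out.push(r); } }
});
capture.write({ level: 20, msg: 'debug 1' });
capture.write({ level: 20, msg: 'debug 2' });
capture.write({ level: 40, msg: 'something went wrong' });
```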
3 changes: 2 additions & 1 deletion lib/index.js
@@ -50,7 +50,8 @@ function createServer(options) {
  * @public
  * @function realizeUrl
  * @param {String} pattern a url string
- * @param {Object} params a hash of parameter names to values for substitution
+ * @param {Object} params a hash of parameter names to values for
+ * substitution
  * @returns {String}
  */
 function realizeUrl(pattern, params) {
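A simplified sketch of what a `realizeUrl(pattern, params)` like the one documented above could do: substitute `:name` placeholders in a route pattern with values from the params hash. The function name and behavior here are an assumption for illustration; restify's real implementation lives in lib/index.js and also normalizes the resulting path:

```javascript
'use strict';

// Toy version of pattern/params substitution: walk the path segments and
// replace any ':name' segment with params[name], URL-encoding the value.
// Unmatched placeholders are left as-is.
function realizeUrlSketch(pattern, params) {
    return pattern
        .split('/')
        .map(function (segment) {
            if (segment.charAt(0) === ':') {
                var value = params[segment.slice(1)];
                return value !== undefined ?
                    encodeURIComponent(value) : segment;
            }
            return segment;
        })
        .join('/');
}

var url = realizeUrlSketch('/users/:id/posts/:postId', { id: 42, postId: 'abc' });
// url → '/users/42/posts/abc'
```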
10 changes: 5 additions & 5 deletions lib/response.js
@@ -305,11 +305,11 @@ Response.prototype.send = function send(code, body, headers) {
     function _cb(err, _body) {
         // the problem here is that if the formatter throws an error, we can't
         // actually format the error again, since the formatter already failed.
-        // So all we can do is send back a 500 with no body, since we don't know
-        // at this point what format to send the error as. Additionally, the current
-        // 'after' event is emitted _before_ we send the response, so there's no way
-        // to re-emit the error here. TODO: clean up 'after' even emitter so we
-        // pick up the error here.
+        // So all we can do is send back a 500 with no body, since we don't
+        // know at this point what format to send the error as. Additionally,
+        // the current 'after' event is emitted _before_ we send the response,
+        // so there's no way to re-emit the error here. TODO: clean up 'after'
+        // even emitter so we pick up the error here.
         if (err) {
             self._data = null;
             self.statusCode = 500;
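The fallback that comment describes can be shown in isolation: once the body formatter itself has failed, there is no safe format in which to serialize the error, so the only option left is an empty 500. The response object and function name below are stubs for illustration, not restify's `Response`:

```javascript
'use strict';

// Standalone sketch of the "formatter failed" fallback: on a formatter
// error, drop the body and reply 500 with no payload; otherwise send the
// formatted body normally.
function finishResponse(res, formatErr, formattedBody) {
    if (formatErr) {
        // Cannot re-format the error (the formatter already failed),
        // so send a bare 500 with no body.
        res.body = null;
        res.statusCode = 500;
    } else {
        res.statusCode = 200;
        res.body = formattedBody;
    }
    return res;
}

var ok = finishResponse({}, null, '{"a":1}');
var failed = finishResponse({}, new Error('formatter blew up'), null);
// ok → status 200 with the formatted body; failed → status 500, null body.
```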