RequestCaptureStream does not write triggering record #958

Closed
gerrard00 opened this Issue Nov 25, 2015 · 5 comments


@gerrard00

I'm trying to use RequestCaptureStream with my restify app. I need to write to multiple streams and I would like the escalation feature of RequestCaptureStream to apply to all of those streams. Unfortunately, I haven't been able to wrap my head around the correct way to accomplish this.

Here's a simplified example of what I'm trying to do:

'use strict';

let bunyan = require('bunyan');
let restify = require('restify');
let streams = [];
// the next two arrays are just for testing
let streamsWithCapture = [];
let streamsWithCaptureFirst = [];

streams.push({
  stream: process.stdout,
  level: 'info' 
});

streams.push({
  level: 'info',
  stream: require('fs').createWriteStream('/var/tmp/debug.log')
});

let requestCaptureStream = {
  level: bunyan.DEBUG,
  type: 'raw',
  stream: new restify.bunyan.RequestCaptureStream({
      level: bunyan.WARN,
      streams: streams
  })
};

streamsWithCapture.push(requestCaptureStream);
streamsWithCaptureFirst = [requestCaptureStream].concat(streams);

let logger = bunyan.createLogger({
    name: 'My Service',
    serializers: bunyan.stdSerializers,
    //testing this raw type, seems to have no effect
    type: 'raw',
    // without requestCaptureStream in array,
    // this writes info and error, but ignores debug
    //streams: streams });

    // with requestCaptureStream in array,
    // this writes the info and debug, but no error
    //streams: streamsWithCapture});

    // with just requestCaptureStream in array
    // this writes the info and debug, but no error
    streams: [ requestCaptureStream ]});

    // with requestCaptureStream first in array
    // this writes all of the messages, but writes
    // the info message twice
   // streams: streamsWithCaptureFirst });

logger.info('this is info');
logger.debug('this is debug');

if (process.argv[2] === 'error') {
  logger.error('error');
}

I'm testing my example app with:

nodemon index.js error | bunyan

I'm sure that I've missed something obvious, but any input is appreciated before I start diving into the source for RequestCaptureStream.

Thanks!

@yunong
Member
yunong commented Nov 26, 2015

I did some digging, and it looks like your example is not working because of two issues.

  1. You're not adding a req_id field to your logging statements. RequestCaptureStream by default expects every logging statement to have a req_id field, unless you override that with dumpDefault: true in the constructor arguments. See here: https://github.com/restify/node-restify/blob/5.x/lib/bunyan_helper.js#L69-L72

  2. The reason you don't see the error log is that the request capture stream does not actually log out the error statement that triggers the dump of the logging records. Notice here: https://github.com/restify/node-restify/blob/5.x/lib/bunyan_helper.js#L182 that we only write the log statement to the ring buffer if its level didn't trigger the dump. If it did, then we only dump out what is currently in the ring, and not that statement. If you add an additional stream to your logger, say process.stdout, you will see the error log.

  3. This might reasonably be a bug -- though I think the reason we don't dump the trigger is that we don't want two identical error records. Most people redirect both the RCS and the regular streams to the same file, and dumping the trigger would produce the error record twice.
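To illustrate point 2, here's a minimal plain-Node sketch (not the actual restify code; the class and names are made up for illustration) of the write path described above: sub-threshold records go into a buffer, a record at or above the threshold dumps the buffer, but the triggering record itself is never written downstream.

```javascript
'use strict';

const WARN = 40; // bunyan's numeric level for 'warn'

// Hypothetical simplification of RequestCaptureStream's write logic.
class CaptureStreamSketch {
  constructor(threshold, sink) {
    this.threshold = threshold;
    this.sink = sink;  // downstream target; here, an array collector
    this.ring = [];    // simplified unbounded stand-in for the ring buffer
  }

  write(record) {
    if (record.level >= this.threshold) {
      // Dump everything buffered so far...
      for (const buffered of this.ring) {
        this.sink.push(buffered);
      }
      this.ring.length = 0;
      // ...but the triggering record itself is never pushed to the sink.
      // That is the behavior this issue reports.
    } else {
      this.ring.push(record);
    }
  }
}

const out = [];
const capture = new CaptureStreamSketch(WARN, out);
capture.write({ level: 30, msg: 'this is info' });
capture.write({ level: 20, msg: 'this is debug' });
capture.write({ level: 50, msg: 'error' }); // triggers the dump

// The buffered records come through, but the error record does not.
console.log(out.map(r => r.msg)); // → [ 'this is info', 'this is debug' ]
```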

@gerrard00

Thank you for looking into this.

  1. This is just a side effect of my simplifying the code to post here. In the real app the req_id is included in the log messages and I see that in the output.

  2. I'm a bit confused, because that would lead me to believe the scenario I labelled "streamsWithCaptureFirst" would have worked. In that case I created a logger with the RequestCaptureStream as well as the real streams I want to target. Unfortunately, the info-level messages (which are logged even when no dump is triggered) ended up duplicated in the output.

I need my app to write the logs to multiple streams, including a stream which is directed to Logentries (using https://github.com/logentries/le_node). I'm not sure how to accomplish that based on the results of my testing. My naive expectation was that the RequestCaptureStream would act as a facade for all of my real streams.

I'll try to dig into the RequestCaptureStream code a bit and maybe things will become clearer.
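For concreteness, the facade setup I'm hoping for (assuming the triggering record eventually gets written out too) would look something like this, with the RequestCaptureStream as the logger's only stream, wrapping all of the real sinks:

```javascript
'use strict';

let bunyan = require('bunyan');
let restify = require('restify');

// All of the real sinks I want every record (and every dump) to reach.
let realStreams = [
  { level: 'info', stream: process.stdout },
  { level: 'info', stream: require('fs').createWriteStream('/var/tmp/debug.log') }
  // ...plus the Logentries stream from le_node
];

let logger = bunyan.createLogger({
  name: 'My Service',
  serializers: bunyan.stdSerializers,
  // RCS as the single facade over the real streams.
  streams: [{
    level: bunyan.DEBUG,
    type: 'raw',
    stream: new restify.bunyan.RequestCaptureStream({
      level: bunyan.WARN,
      streams: realStreams
    })
  }]
});
```

This is just a config sketch of the intent, not something that works today: until the triggering-record fix lands, the warn/error record that fires the dump would still be missing from every sink.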

@yunong
Member
yunong commented Nov 26, 2015

There's definitely a bug in RCS where we're not capturing the logging record that triggers the dump of all records. That's what's causing your error record to not come through. The reason streamsWithCaptureFirst appears to work is that the records you're seeing are coming from both the RCS and stdout.

Effectively you're doing something like this:

records -> RCS && stdout

But since there's a bug in RCS where the triggering record (in this case the error record) doesn't get written out, you should expect to see info and debug from the RCS, and only info and error from stdout (since that stream logs at info). The order of the streams has no effect on the output.

Does this make sense? Your code is working as expected; but because of the RCS bug (the triggering error record is not logged) and the fact that you're sending records to overlapping sinks, it can look on inspection like RCS is not working.
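Here's a self-contained sketch of that fan-out (plain Node, no bunyan or restify; all names are made up for illustration) showing exactly which records land in which sink, including the duplicated info record:

```javascript
'use strict';

const INFO = 30, WARN = 40; // bunyan's numeric levels

// Plain stream: writes every record at or above its level.
function plainStream(level, sink) {
  return rec => { if (rec.level >= level) sink.push(rec); };
}

// Capture stream with the reported bug: buffers sub-threshold records and
// dumps them when a threshold record arrives, but never writes the
// triggering record itself.
function captureStream(threshold, sink) {
  const ring = [];
  return rec => {
    if (rec.level >= threshold) {
      ring.forEach(r => sink.push(r));
      ring.length = 0;
    } else {
      ring.push(rec);
    }
  };
}

const captureSink = [];
const stdoutSink = [];
// The streamsWithCaptureFirst shape: every record goes to both streams.
const streams = [captureStream(WARN, captureSink), plainStream(INFO, stdoutSink)];
const log = rec => streams.forEach(s => s(rec));

log({ level: 30, msg: 'this is info' });
log({ level: 20, msg: 'this is debug' });
log({ level: 50, msg: 'error' }); // triggers the dump

// info reaches both sinks (hence the duplicate), debug reaches only the
// capture sink, and error reaches only the plain stream.
console.log(captureSink.map(r => r.msg)); // → [ 'this is info', 'this is debug' ]
console.log(stdoutSink.map(r => r.msg)); // → [ 'this is info', 'error' ]
```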

@gerrard00

That makes sense. If I can get RCS to dump the triggering record, then my original code using only the RCS with my logger should work correctly. I'll take a look at making that fix after the holiday.

@gerrard00 gerrard00 changed the title from Need an example of usage of RequestCaptureStream to RequestCaptureStream does not write triggering record Nov 30, 2015
@DonutEspresso DonutEspresso referenced this issue May 11, 2016

Preparing 5.x for release #1082
@yunong
Member
yunong commented May 12, 2016

fixed via 5425d4d. Thanks @gerrard00 for your contribution.

@yunong yunong closed this May 12, 2016