Document how to integrate structlog with anything #18
Comments
I've added Sentry support to a simple project where we use structlog. I have the impression I am doing it wrong, or at least that there is room for optimization.

```python
import logging.config

import structlog

# SENTRY_DSN is assumed to be defined elsewhere in the project.


class SentryProcessor(object):
    def __call__(self, wrapped_logger, method_name, event_dict):
        # Returned as keyword arguments to the stdlib logging method.
        kwargs = dict(msg=event_dict.pop('event'),
                      extra=event_dict)
        if 'exception' in event_dict:
            kwargs['exc_info'] = True
        return kwargs


def configure_logging(level):
    LOGGING = {
        'version': 1,
        'disable_existing_loggers': False,
        'formatters': {
            'terse': {
                'format': '%(message)s'
            },
        },
        'handlers': {
            'console': {
                'level': level,
                'class': 'logging.StreamHandler',
                'formatter': 'terse'
            },
            'sentry': {
                'level': 'ERROR',
                'class': 'raven.handlers.logging.SentryHandler',
                'dsn': SENTRY_DSN
            },
        },
        'loggers': {},
        'root': {
            'handlers': ['console', 'sentry'],
            'level': level,
        }
    }
    logging.config.dictConfig(LOGGING)
    structlog.configure_once(
        processors=[
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.TimeStamper(fmt='%Y-%m-%d %H:%M.%S'),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            SentryProcessor(),
        ],
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )
```

This formats the message so that all information is available in Sentry. Unfortunately, the console output is now pretty terse. Is there a way to have different processing queues per handler?
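For reference (not part of the original comment), here is a minimal sketch of what a final processor like the one above hands to the underlying stdlib logger: when the last processor returns a dict, structlog passes it as keyword arguments to the logging method. The sample event dict below is invented for illustration and the processor is repeated standalone so the snippet runs on its own:

```python
# Standalone copy of the processor from the comment above.
class SentryProcessor(object):
    def __call__(self, wrapped_logger, method_name, event_dict):
        # Pull the human-readable message out, ship the rest as "extra".
        kwargs = dict(msg=event_dict.pop('event'),
                      extra=event_dict)
        if 'exception' in event_dict:
            kwargs['exc_info'] = True
        return kwargs


proc = SentryProcessor()
kwargs = proc(None, "error",
              {"event": "boom", "exception": "Traceback ...", "user": 42})

print(kwargs["msg"])            # boom
print(kwargs["exc_info"])       # True
print(sorted(kwargs["extra"]))  # ['exception', 'user']
```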
Two things here:
Does that answer your question?
Sorry for the late reply, yes this answers my question.
Not Sentry-related, but... we are using python-logstash which, like Sentry, supports the `extra` keyword argument. I tried subclassing `BoundLogger`:

```python
from structlog import DropEvent
from structlog.stdlib import BoundLogger


class ExtraDataBoundLogger(BoundLogger):
    def _proxy_to_logger(self, method_name, event=None, *event_args, **event_kw):
        print("ExtraDataBoundLogger._proxy_to_logger: %s" % self._logger)
        try:
            if event_args:
                event_kw['positional_args'] = event_args
            args, kw = self._process_event(method_name, event, event_kw)
            # Pass the processed event dict through as stdlib "extra".
            return getattr(self._logger, method_name)(*args, extra=kw)
        except DropEvent:
            return
```

But this seems not to have any effect. I would try a custom processor like @do3cc did above, but that also seems to have the wrong effect... Any tips would be appreciated.
@hynek Looks like I solved my problem with a variation on @do3cc's solution:

```python
from structlog.processors import KeyValueRenderer


class MyAppDataProcessor(object):
    def __call__(self, wrapped_logger, method_name, event_dict):
        kwargs = dict(msg=event_dict.get('event'),  # don't remove "event" from the dict
                      extra=event_dict)
        if 'exception' in event_dict:
            kwargs['exc_info'] = True
        return kwargs


class MyAppKeyValueRenderer(KeyValueRenderer):
    def __call__(self, _, __, event_dict):
        del event_dict['extra']  # don't render the "extra" key that we added
        return ' '.join(k + '=' + repr(v)
                        for k, v in self._ordered_items(event_dict))
```

Then, in the structlog configuration:

```python
import structlog

structlog.configure(
    processors=[
        structlog.stdlib.filter_by_level,
        structlog.stdlib.add_logger_name,
        structlog.stdlib.add_log_level,
        structlog.stdlib.PositionalArgumentsFormatter(),
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.StackInfoRenderer(),
        structlog.processors.format_exc_info,
        MyAppDataProcessor(),
        MyAppKeyValueRenderer(
            key_order=['event', 'request_id'],
        ),
    ],
    context_class=dict,
    logger_factory=structlog.stdlib.LoggerFactory(),
    wrapper_class=structlog.stdlib.BoundLogger,
    cache_logger_on_first_use=True,
)
```

This approach preserves the "extra" key in the event dict but does not render it in the normal key/value rendering. With "extra" in the event_dict, it gets passed to my wrapped logger and handled by the python-logstash logging handler properly.
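The only functional change from @do3cc's processor above is using `get` instead of `pop`, so the `event` key survives for later processors to render. A tiny illustration of the difference (the sample dicts are invented here, not from the original comment):

```python
event_pop = {"event": "boom", "user": 42}
event_get = {"event": "boom", "user": 42}

# @do3cc's version removes "event" while extracting the message ...
msg_pop = event_pop.pop("event")
# ... the variation above leaves it in place for later renderers.
msg_get = event_get.get("event")

print(msg_pop == msg_get)    # True: same message either way
print("event" in event_pop)  # False: gone after pop()
print("event" in event_get)  # True: still available to render
```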
Hm, so python-logstash is a pure stdlib logging handler. You may want to look into the new and upcoming (PR still open) stdlib features that may make it simpler for you. One thing though: if all you want is to send your log entries to Logstash, you should investigate whether there isn't a library that allows you to do that without going all the way through stdlib logging.
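As an aside (not part of the original reply), the per-handler rendering asked about at the top of the thread is ultimately a plain stdlib `logging` capability: each handler can carry its own formatter, independent of how structlog prepares the record. A minimal stdlib-only sketch; the handler names and log message are invented:

```python
import io
import logging

# Two handlers on the same logger, each with its own formatter:
# a terse one for the console, a verbose one for a log shipper.
terse_stream = io.StringIO()
verbose_stream = io.StringIO()

terse = logging.StreamHandler(terse_stream)
terse.setFormatter(logging.Formatter("%(message)s"))

verbose = logging.StreamHandler(verbose_stream)
verbose.setFormatter(logging.Formatter("%(levelname)s %(name)s %(message)s"))

log = logging.getLogger("demo")
log.setLevel(logging.INFO)
log.addHandler(terse)
log.addHandler(verbose)

log.info("user logged in")

print(terse_stream.getvalue())    # user logged in
print(verbose_stream.getvalue())  # INFO demo user logged in
```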
People ask about Sentry et al. all the time, and it's easy to integrate, so there should be a chapter on it.