Functional overview

Dawid Ciężarkiewicz edited this page Aug 26, 2016 · 3 revisions

Functional flow overview

slog's data flow forms two trees, connected at their roots:

info!  ----> R1 Logger -                                                 -> filter Info -> Streamer as Json 
         /              \                                              /
trace!  -                -> server Logger -> ROOT -> duplicate drain -<
                       /                                               \
                      /                                                  -> Streamer to syslog
error! ---> R2 Logger

The flow goes from left to right.

Each time info! or a similar logging statement is encountered, it carries some metadata and key-value pairs and is passed to a Logger instance. The Logger adds its own key-value pairs. Each Logger can be a child of another Logger, which adds its key-value pairs to the logging record as well. This continues all the way up to the root Logger, which has a drain associated with it. The log record, with all the key-value pairs accumulated along the way, is forwarded to the drain, which can itself consist of sub-drains.
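The parent-to-root accumulation of key-value pairs can be sketched as a simplified, stdlib-only model (these are hypothetical types for illustration, not the real slog API):

```rust
use std::rc::Rc;

// Simplified model of slog's Logger tree: each Logger owns its
// key-value pairs and an optional parent; logging walks up the
// chain, accumulating all ancestors' pairs into the record.
type Pairs = Vec<(String, String)>;

struct Logger {
    pairs: Pairs,
    parent: Option<Rc<Logger>>,
}

impl Logger {
    fn root(pairs: Pairs) -> Rc<Logger> {
        Rc::new(Logger { pairs, parent: None })
    }

    fn child(parent: &Rc<Logger>, pairs: Pairs) -> Rc<Logger> {
        Rc::new(Logger { pairs, parent: Some(Rc::clone(parent)) })
    }

    // Build the full record: this logger's pairs plus every ancestor's.
    fn log(&self, msg: &str) -> (String, Pairs) {
        let mut all = self.pairs.clone();
        let mut cur = self.parent.clone();
        while let Some(p) = cur {
            all.extend(p.pairs.clone());
            cur = p.parent.clone();
        }
        (msg.to_string(), all)
    }
}

fn main() {
    let root = Logger::root(vec![("host".into(), "example".into())]);
    let server = Logger::child(&root, vec![("port".into(), "8080".into())]);
    let r1 = Logger::child(&server, vec![("client".into(), "10.0.0.1".into())]);
    let (msg, pairs) = r1.log("connected");
    println!("{} {:?}", msg, pairs);
}
```

In the real library the record would then be handed to the root's drain instead of being returned; the point here is only that a record logged through a leaf Logger carries the pairs of the whole chain.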

In the above example, R1 stands for resource 1, for example a client connection. The R1 Logger contains key-value pairs such as the client IP, id, etc. Its parent, the server Logger, can contain key-value pairs for start time, listening port, number of active connections, etc. The root Logger would typically contain build time, revision, hostname, etc.

The duplicate drain sends the record information twice, once to each of its children:

  • First, to a filter that discards log records below the Info logging level and forwards the rest to a streamer, which writes the record in a given format (JSON)
  • Second, to a streamer writing to syslog
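The drain tree above can likewise be sketched with stdlib-only stand-ins (hypothetical names; the real slog crate provides its own drain combinators for duplication and level filtering):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Levels ordered by severity; derived PartialOrd follows variant order.
#[derive(Clone, Copy, Debug, PartialEq, PartialOrd)]
enum Level { Trace, Info, Error }

trait Drain {
    fn log(&self, level: Level, msg: &str);
}

// Fans every record out to both children, like the duplicate drain.
struct Duplicate<A: Drain, B: Drain>(A, B);
impl<A: Drain, B: Drain> Drain for Duplicate<A, B> {
    fn log(&self, level: Level, msg: &str) {
        self.0.log(level, msg);
        self.1.log(level, msg);
    }
}

// Discards records below `min`, like the "filter Info" node.
struct Filter<D: Drain> { min: Level, inner: D }
impl<D: Drain> Drain for Filter<D> {
    fn log(&self, level: Level, msg: &str) {
        if level >= self.min {
            self.inner.log(level, msg);
        }
    }
}

// Stands in for a Streamer; collects formatted lines into a shared sink.
struct Printer { name: &'static str, sink: Rc<RefCell<Vec<String>>> }
impl Drain for Printer {
    fn log(&self, level: Level, msg: &str) {
        self.sink.borrow_mut()
            .push(format!("[{}] {:?}: {}", self.name, level, msg));
    }
}

fn main() {
    let sink = Rc::new(RefCell::new(Vec::new()));
    let root = Duplicate(
        Filter { min: Level::Info,
                 inner: Printer { name: "json", sink: Rc::clone(&sink) } },
        Printer { name: "syslog", sink: Rc::clone(&sink) },
    );
    root.log(Level::Trace, "handshake");   // only the syslog branch keeps it
    root.log(Level::Error, "write failed"); // both branches keep it
    for line in sink.borrow().iter() {
        println!("{}", line);
    }
}
```

A Trace record reaches only the syslog branch because the filter in front of the JSON streamer drops it, while an Error record passes through both, matching the flow in the diagram.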