Home | Index | Examples | Guidance | FAQ | Listeners | Filters | Extensions

# Comparison of logging frameworks

The following tables show how System.Diagnostics stacks up against some popular 3rd party logging frameworks, as well as the Enterprise Library Logging Application Block from Microsoft, comparing the general features, the information logged, the filters that can be used, and the output formats available.

The System.Diagnostics column indicates both the built-in features and the features available in Essential.Diagnostics and other extensions.

Please contact me if you think any information in these tables is out of date.

| General Features | System.Diagnostics | log4net | NLog | Enterprise Library |
| --- | --- | --- | --- | --- |
| Availability | built-in | 3rd party | 3rd party | Microsoft |
| Levels | 5 | 5 | 6 | 5 |
| Multiple sources | | | | |
| > Hierarchical sources | | | | |
| Extensible | | | | |
| Listener chaining | | | | |
| Delayed formatting | | | | |
| > Lambda | | | | |
| Templates | | | | |
| Logging interface | | | | |
| Dynamic configuration | | | | |
| Minimum trust | | TBA | TBA | |
| Trace .NET framework (WCF, WIF, System.Net, etc.) | | | | |
| Source from .NET Trace | | | | |
| Log Information | System.Diagnostics | log4net | NLog | Enterprise Library |
| --- | --- | --- | --- | --- |
| Event ID | | contrib | extension | |
| Priority | | | | |
| Process/thread information | | | | |
| ASP.NET information | | | | |
| Trace Context | | | | |
| > Correlation identifier | | | | |
| > Cross-process correlation | | | | |
| Exceptions | | | | via exception block |
| Filters | System.Diagnostics | log4net | NLog | Enterprise Library |
| --- | --- | --- | --- | --- |
| Event level | | | | |
| Source | | | | |
| Property | | | | |
| String match | | | | |
| Expression | | | | |
| Priority | | | | |
| Listeners | System.Diagnostics | log4net | NLog | Enterprise Library |
| --- | --- | --- | --- | --- |
| > Forward to .NET Trace | | limited | limited | |
| ASP.NET Trace | | | | |
| Chainsaw (log4j) | | | | |
| Colored Console | | | | |
| Console | | | | |
| Database | | | | |
| Debug | | | | |
| Event Log | | | | |
| Event Tracing (ETW) | | | | |
| File | | | | |
| Mail | | | | |
| Memory | | | | |
| MSMQ | | | | |
| Net Send | | | | |
| Remoting | | | | |
| Rolling File | | | | |
| Syslog (unix) | | | | |
| Telnet | | | | |
| UDP | UdpPocketTrace extension | | | |
| WMI | | | | |
| XML (Service Trace) | | | | |

Note that for some features the answer is more complex than a simple table, e.g. System.Diagnostics provides some process/thread information in some listeners, with Essential.Diagnostics providing additional information.

In general, features are only indicated where they are directly supported by the framework. For example, any framework can log exception details as an argument or even just a string, but some have explicit overloads of logging methods with an Exception type parameter.
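As a rough sketch of the difference (the TraceSource name, the DoWork operation and the commented-out log4net logger call are placeholders for illustration, not part of the comparison itself):

```csharp
using System;
using System.Diagnostics;

class ExceptionLoggingExample
{
    static readonly TraceSource Source = new TraceSource("MyApp");

    static void Run()
    {
        try
        {
            DoWork(); // placeholder for the operation being traced
        }
        catch (Exception ex)
        {
            // System.Diagnostics: the exception is simply another format argument.
            Source.TraceEvent(TraceEventType.Error, 1, "Operation failed: {0}", ex);

            // log4net-style call with an explicit Exception parameter (for comparison):
            // log.Error("Operation failed", ex);
        }
    }

    static void DoWork() { throw new InvalidOperationException("example"); }
}
```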

Similarly, while some frameworks explicitly support logging of lambda expressions, any framework with delayed formatting can be passed a wrapper object that evaluates a lambda when ToString() is called; and a framework that supports arbitrary Trace Context values can use one as a correlation identifier, compared to those with explicit correlation identifier support.
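A minimal sketch of that wrapper idea with System.Diagnostics, assuming a hypothetical expensive BuildReport method; the message is only built if a listener actually formats the event:

```csharp
using System;
using System.Diagnostics;

// Defers building the message until a listener calls ToString() during formatting.
class LazyMessage
{
    private readonly Func<string> _factory;
    public LazyMessage(Func<string> factory) { _factory = factory; }
    public override string ToString() { return _factory(); }
}

class LazyMessageExample
{
    static readonly TraceSource Source = new TraceSource("MyApp");

    static void Run()
    {
        // BuildReport() is only invoked if Verbose events pass the source switch
        // and any listener filters, i.e. when the event is actually formatted.
        Source.TraceEvent(TraceEventType.Verbose, 0, "{0}",
            new LazyMessage(() => BuildReport()));
    }

    static string BuildReport() { return "expensive, detailed report..."; }
}
```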

## Performance comparison

See the solution in the Performance folder of the source code for the test harness used for comparing performance.

A comparison of logging frameworks should also compare their overhead, i.e. the cost of adding trace statements and then ignoring or filtering them out (which should be the majority situation).

All the frameworks allow you to selectively turn on subsets of messages to control the volume captured. Whilst the efficiency of capturing the messages you do want may be important, it is the overhead of ignoring the ones you don't want that is generally the main concern.
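One way to keep that overhead down with System.Diagnostics, when building the message itself is the expensive part, is to guard the call with the source's switch. A sketch (the source name and the BuildStateDump helper are hypothetical):

```csharp
using System.Diagnostics;

class GuardedTracingExample
{
    static readonly TraceSource Source = new TraceSource("MyApp");

    static void Run()
    {
        // Skip message construction entirely when Verbose is not enabled for this source.
        if (Source.Switch.ShouldTrace(TraceEventType.Verbose))
        {
            Source.TraceEvent(TraceEventType.Verbose, 0,
                "State dump: {0}", BuildStateDump());
        }
    }

    // Hypothetical expensive helper, used only for illustration.
    static string BuildStateDump() { return "..."; }
}
```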

The following table shows the results of some performance testing under the following scenarios:

  • All messages ignored, i.e. tracing turned off.
  • One source turned on at Warning level, capturing 16 messages per million trace statements.
  • All sources turned on at Warning level, capturing 245 messages per million trace statements.
  • One source turned to full (all messages), with others at Warning level, capturing 3,907 messages per million trace statements.

The source code for the performance testing application is available in the code repository if you would like to run the tests yourself. Note that the absolute values will change depending on the system they are run on.
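As a rough illustration of how these scenarios map onto source switch levels (the source names below are placeholders rather than the ones used in the actual test harness, and the switches can equally be set in configuration):

```csharp
using System.Diagnostics;

class ScenarioSetupExample
{
    static void Configure()
    {
        // "One full" scenario: one source captures everything...
        var hotSource = new TraceSource("HotPath");
        hotSource.Switch.Level = SourceLevels.All;

        // ...while the other sources only capture Warning and above.
        var backgroundSource = new TraceSource("Background");
        backgroundSource.Switch.Level = SourceLevels.Warning;

        // "Logging off" scenario: everything switched off.
        // hotSource.Switch.Level = SourceLevels.Off;
    }
}
```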

Base test framework: 56 ms

Logging overhead (ms for 1 million log messages, lower is better):

| Scenario (logged messages) | System.Diagnostics FileLog | EX RollingFile | log4net RollingFile | NLog File | Enterprise Application Block RollingFlatFile |
| --- | --- | --- | --- | --- | --- |
| Logging off (0) | 50 | 50 | 46 | 3 | > 20,000 |
| Single filtered (16) | 59 | 54 | 43 | 8 | > 20,000 |
| Multiple filtered (245) | 89 | 64 | 49 | 61 | > 20,000 |
| One full (3,907) | 484 | 238 | 114 | 867 | > 20,000 |

There are also several configurations in the test harness to compare various other scenarios, for example a source with switchValue="Off" versus no source defined at all.

Note 1: Yes, the values for the Enterprise Application Block are what I am getting: around 21,000-22,000 milliseconds of overhead compared to the other frameworks! Now, the EAB does not have delayed formatting, i.e. no overloads that take a format string and arguments, so my test harness does a string.Format() for all messages, but even removing that it still has 18,000-19,000 ms of overhead.

Note 2: With System.Diagnostics, to ignore all messages for a source set switchValue="Off" or simply leave the source out altogether. Having a source with no listeners, i.e. using `<clear/>`, has slightly more overhead but isn't too bad.

If you don't clear the list, it may look like you have no listeners, but instead you will get all messages written to the default listener, which will severely impact the performance of your application. In other words, the worst thing you can do is set the trace switch to allow a lot of messages (e.g. All) without clearing the listeners:

<!-- Bad example: the switch allows all messages but the listeners are not cleared,
     so everything is written to the default listener and performance suffers -->
<source name="SourceWithBadSwitch" switchValue="All">
</source>
<!-- Correct way to turn logging off (set switchValue to Off, or simply delete the source completely) -->
<source name="MySource" switchValue="Off">
  <listeners>
    <clear/>
    <add name="filelog"/>
  </listeners>
</source>