AsyncWrapper target

Rolf Kristensen edited this page Jan 13, 2017 · 17 revisions

Provides asynchronous, buffered execution of target writes.

Supported in .NET, Silverlight, Compact Framework and Mono.

Configuration Syntax

<targets>
  <target xsi:type="AsyncWrapper"
          name="String"
          queueLimit="Integer"
          timeToSleepBetweenBatches="Integer"
          batchSize="Integer"
          overflowAction="Enum">
    <target xsi:type="wrappedTargetType" ...target properties... />
  </target>
</targets>

Parameters

General Options

name - Name of the target.

Buffering Options

queueLimit - Limit on the number of requests in the lazy writer thread request queue. Integer Default: 10000

timeToSleepBetweenBatches - Time in milliseconds to sleep between batches. Integer Default: 50. Setting this to 0 will lead to high CPU usage.

batchSize - Number of log events that should be processed in a batch by the lazy writer thread. Integer Default: 100 (NLog 4.4.2 and newer has Default: 200)

fullBatchSizeWriteLimit - Max number of consecutive full batchSize writes to perform within the same timer event. Integer Default: 5. Introduced in NLog 4.4.2

overflowAction - Action to be taken when the lazy writer thread request queue count exceeds the set limit. Default: Discard
Possible values:

  • Block - Block until there's more room in the queue.
  • Discard - Discard the overflowing item.
  • Grow - Grow the queue.

optimizeBufferReuse - Reuses the same buffer for every batchSize write instead of allocating new buffers. This means the wrapped target can no longer take ownership of the buffers. All targets in the NLog package support this mode. It is enabled automatically if the wrapped target has optimizeBufferReuse enabled. Introduced in NLog 4.4.2
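As a sketch, the buffering options above can be combined like this (the wrapped File target, its name, and its path are placeholders, not part of the parameter reference):

```xml
<targets>
  <!-- AsyncWrapper holding up to 10000 queued writes; overflowAction="Block"
       makes the logger wait when the queue is full instead of discarding -->
  <target xsi:type="AsyncWrapper"
          name="asyncFile"
          queueLimit="10000"
          batchSize="100"
          timeToSleepBetweenBatches="50"
          overflowAction="Block">
    <!-- wrapped target: example File target (name and path are placeholders) -->
    <target xsi:type="File"
            name="file"
            fileName="c:/temp/app.log"
            layout="${longdate} ${level} ${message}" />
  </target>
</targets>
```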

Remarks

Async attribute

The asynchronous target wrapper allows the logger code to execute more quickly, by queuing messages and processing them in a separate thread. You should wrap targets that spend a non-trivial amount of time in their Write() method with an asynchronous target wrapper to speed up logging. Because asynchronous logging is quite a common scenario, NLog supports a shorthand notation for wrapping all targets with AsyncWrapper. Just add async="true" to the <targets/> element in the configuration file.

Example:

<targets async="true"> 
  ... your targets go here ...
</targets>

AsyncWrapper and <rules>

When using the AsyncWrapper, write to the wrapper in your <rules> section. In the following example, write to "target2"; if the <logger> writes to "target1" directly, the messages are not written asynchronously!

  <targets>
    <target name="target2" xsi:type="AsyncWrapper">
      <target name="target1" xsi:type="File"
              fileName="c:/temp/test.log" layout="${message}"
              keepFileOpen="true" />
    </target>
  </targets>
  <rules>
    <logger name="*" minlevel="Info" writeTo="target2"/>
  </rules>

Async attribute and AsyncWrapper

Don't combine the Async attribute and AsyncWrapper. This will only slow down processing and will behave unreliably.

Async attribute will discard by default

The async attribute is a shorthand for:

xsi:type="AsyncWrapper" overflowAction="Discard" queueLimit="10000" batchSize="100" timeToSleepBetweenBatches="50"

So if you write a lot of messages (more than 10,000) in a short time, messages may be lost. This is intended behavior, as keeping all the messages, or waiting for all of them to be written, could impact the performance of your program.

If you need all the log messages, use the AsyncWrapper explicitly instead of the async attribute, and set overflowAction to Block or Grow.
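For example, a minimal sketch of an explicit AsyncWrapper that does not discard messages (the wrapped Console target and the names are placeholders chosen for illustration):

```xml
<targets>
  <!-- overflowAction="Block" makes the logger wait for room in the queue
       instead of discarding, so no messages are lost -->
  <target xsi:type="AsyncWrapper"
          name="asyncAll"
          overflowAction="Block">
    <!-- wrapped target is a placeholder example -->
    <target xsi:type="Console" name="console" layout="${message}" />
  </target>
</targets>
<rules>
  <logger name="*" minlevel="Trace" writeTo="asyncAll"/>
</rules>
```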

Asynchronously writing and threads

When messages are written asynchronously, this happens on another thread. Some targets require writing on the main thread, so if asynchronous writing is used, those messages get lost.

BufferingWrapper and Async

The BufferingWrapper can write asynchronously by itself, so there is no need to combine it with the async attribute or AsyncWrapper. See the remarks on the BufferingWrapper page.