A high-performance Serilog sink that writes log events to Microsoft SQL Server using bulk insert operations.
- High Performance: Uses `SqlBulkCopy` for efficient bulk insert operations
- Flexible Column Mapping: Customize which log event properties map to which columns
- Configurable Batching: Control batch size and timeout for optimal performance
- Standard Mappings: Includes default mappings for common log properties
- Custom Properties: Easily add custom property mappings
- Rich Data Types: Support for various data types including structured properties as JSON
- Distributed Tracing: Built-in support for TraceId and SpanId
- Auto Truncation: Automatically truncates string values to match column size constraints, preventing insert errors
This sink is designed to be a high-performance, lightweight alternative to Serilog.Sinks.MSSqlServer with significant improvements in speed and memory efficiency.
Based on Serilog.Sinks.SqlServer.Benchmark tests (100 log events per batch):
| Method | Mean | Rank | Gen0 | Gen1 | Allocated |
|---|---|---|---|---|---|
| SqlServerSink | 2.082 ms | 1 | 7.8125 | - | 438.31 KB |
| MSSqlServerSink | 2.666 ms | 2 | 117.1875 | 27.3438 | 5773.93 KB |
Key Performance Benefits:
- ~22% faster execution time (2.082 ms vs 2.666 ms)
- ~92% fewer allocations (438 KB vs 5,774 KB per batch)
- Significantly reduced GC pressure from 13x lower memory allocations
- Optimized bulk copy operations with minimal overhead
- Streamlined Architecture: Focused solely on high-performance SQL Server logging without legacy compatibility layers
- Efficient Memory Usage: Minimal allocations through careful use of `ArrayBufferWriter`, `Span<T>`, and modern .NET APIs
- Optimized JSON Serialization: Custom `JsonWriter` using `Utf8JsonWriter` for zero-copy serialization
- Direct Bulk Copy: Simplified data pipeline from log events to `SqlBulkCopy` with fewer intermediate transformations
- No Reflection Overhead: Uses pre-defined mappings with delegate-based value extraction
- Avoids DataTable: Uses a lightweight `IDataReader` implementation instead of `DataTable`, eliminating the overhead of DataTable's internal structures
- Fewer Dependencies: Minimal external packages (only `Serilog`, `Microsoft.Data.SqlClient`, and polyfills)
- Smaller Footprint: Focused implementation without legacy features
- Easier to understand: Clear, modern C# code using latest language features
- Better maintainability: Single-purpose design makes updates and fixes straightforward
Install the sink via NuGet:

```shell
dotnet add package Serilog.Sinks.SqlServer
```

Or via Package Manager Console:

```powershell
Install-Package Serilog.Sinks.SqlServer
```

Create a table in your SQL Server database to store log events:
```sql
CREATE TABLE [dbo].[LogEvent]
(
    [Id] BIGINT IDENTITY(1,1) NOT NULL,
    [Timestamp] DATETIMEOFFSET NOT NULL,
    [Level] NVARCHAR(50) NOT NULL,
    [Message] NVARCHAR(MAX) NULL,
    [TraceId] NVARCHAR(100) NULL,
    [SpanId] NVARCHAR(100) NULL,
    [Exception] NVARCHAR(MAX) NULL,
    [Properties] NVARCHAR(MAX) NULL,
    [SourceContext] NVARCHAR(1000) NULL,
    CONSTRAINT [PK_LogEvent] PRIMARY KEY CLUSTERED ([Id] ASC),
    INDEX [IX_LogEvent_Timestamp] NONCLUSTERED ([Timestamp] DESC),
    INDEX [IX_LogEvent_Level] NONCLUSTERED ([Level] ASC),
    INDEX [IX_LogEvent_TraceId] NONCLUSTERED ([TraceId] ASC)
)
WITH (DATA_COMPRESSION = PAGE);
```

```csharp
using Serilog;

Log.Logger = new LoggerConfiguration()
    .WriteTo.SqlServer(
        connectionString: "Data Source=(local);Initial Catalog=Serilog;Integrated Security=True;TrustServerCertificate=True;",
        tableName: "LogEvent",
        tableSchema: "dbo"
    )
    .CreateLogger();

Log.Information("Hello, SQL Server!");
Log.CloseAndFlush();
```

```csharp
using Serilog;
using Serilog.Events;
using Serilog.Sinks.SqlServer;

Log.Logger = new LoggerConfiguration()
    .WriteTo.SqlServer(config =>
    {
        config.ConnectionString = "Data Source=(local);Initial Catalog=Serilog;Integrated Security=True;TrustServerCertificate=True;";
        config.TableName = "LogEvent";
        config.TableSchema = "dbo";
        config.MinimumLevel = LogEventLevel.Information;
        config.BatchSizeLimit = 100;
        config.BufferingTimeLimit = TimeSpan.FromSeconds(5);
    })
    .CreateLogger();
```

You can configure the sink using appsettings.json with the Serilog.Settings.Configuration package:
appsettings.json:
```json
{
  "ConnectionStrings": {
    "Serilog": "Data Source=(local);Initial Catalog=Serilog;Integrated Security=True;TrustServerCertificate=True;"
  },
  "Serilog": {
    "Using": [ "Serilog.Sinks.SqlServer" ],
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    },
    "WriteTo": [
      {
        "Name": "SqlServer",
        "Args": {
          "connectionString": "Data Source=(local);Initial Catalog=Serilog;Integrated Security=True;TrustServerCertificate=True;",
          "tableName": "LogEvent",
          "tableSchema": "dbo"
        }
      }
    ],
    "Enrich": [ "FromLogContext" ]
  }
}
```

Program.cs:
```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

builder.Host.UseSerilog((context, services, configuration) => configuration
    .ReadFrom.Configuration(context.Configuration));

var app = builder.Build();
app.UseSerilogRequestLogging();
app.Run();
```

Or for a console application:
```csharp
using Serilog;
using Microsoft.Extensions.Configuration;

var configuration = new ConfigurationBuilder()
    .SetBasePath(Directory.GetCurrentDirectory())
    .AddJsonFile("appsettings.json")
    .Build();

Log.Logger = new LoggerConfiguration()
    .ReadFrom.Configuration(configuration)
    .CreateLogger();

try
{
    Log.Information("Application starting");
    // Your application code
}
finally
{
    Log.CloseAndFlush();
}
```

The `SqlServerSinkOptions` class inherits from `BatchingOptions` and provides the following properties:
| Property | Default | Description |
|---|---|---|
| `ConnectionString` | - | SQL Server connection string (required) |
| `TableName` | `"LogEvent"` | Name of the table to write logs to |
| `TableSchema` | `"dbo"` | Schema of the table |
| `MinimumLevel` | `LevelAlias.Minimum` | Minimum log event level |
| `BulkCopyOptions` | `SqlBulkCopyOptions.Default` | `SqlBulkCopy` options for bulk insert operations |
| `Mappings` | `MappingDefaults.StandardMappings` | Column mappings for log event properties |
| `BatchSizeLimit` | 1000 | Number of log events to batch before writing (inherited) |
| `BufferingTimeLimit` | 2 seconds | Maximum time to wait before flushing a batch (inherited) |
The sink includes the following standard column mappings:
| Column Name | Type | Description | Nullable | Size |
|---|---|---|---|---|
| `Timestamp` | `DateTimeOffset` | UTC timestamp of the log event | No | - |
| `Level` | `string` | Log level (e.g., "Information", "Error") | No | 50 |
| `Message` | `string` | Rendered log message | Yes | MAX |
| `TraceId` | `string` | Distributed tracing trace ID | Yes | 100 |
| `SpanId` | `string` | Distributed tracing span ID | Yes | 100 |
| `Exception` | `string` | Exception details as JSON | Yes | MAX |
| `Properties` | `string` | Additional properties as JSON | Yes | MAX |
| `SourceContext` | `string` | Source context (typically class name) | Yes | 1000 |
The `Exception` column stores exception details as a JSON object with the following structure:

```json
{
  "Message": "The error message",
  "BaseMessage": "Inner exception message (if present)",
  "Type": "System.InvalidOperationException",
  "Text": "Full exception text including stack trace",
  "HResult": -2146233079,
  "ErrorCode": -2147467259,
  "Source": "MyApplication",
  "MethodName": "MyMethod",
  "ModuleName": "MyAssembly",
  "ModuleVersion": "1.0.0.0"
}
```

Key fields:

- `Message` - The exception's primary error message
- `BaseMessage` - Message from the innermost exception (if there's an inner exception chain)
- `Type` - Fully qualified type name of the exception
- `Text` - Complete exception text including stack trace (from `ToString()`)
- `HResult` - The HRESULT error code
- `ErrorCode` - Error code for `ExternalException` types
- `Source` - The application or object that caused the error
- `MethodName` - Name of the method that threw the exception
- `ModuleName` - Name of the assembly containing the throwing method
- `ModuleVersion` - Version of the assembly

Note: Aggregate exceptions with a single inner exception are automatically flattened to the inner exception.
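As an illustrative sketch (the exception message and property names here are made up), an exception logged through Serilog's standard `Log.Error(exception, ...)` overload is what this sink serializes into the `Exception` column:

```csharp
using Serilog;

try
{
    throw new InvalidOperationException("Order has already shipped");
}
catch (Exception ex)
{
    // The exception object is serialized into the Exception column as JSON;
    // the rendered template and {OrderId} go to Message/Properties as usual.
    Log.Error(ex, "Failed to process order {OrderId}", 42);
}
```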
The `Properties` column stores log event properties as a JSON object. Property values are serialized according to their type:

Scalar values:

```json
{
  "UserId": 123,
  "UserName": "John Doe",
  "IsActive": true,
  "Amount": 99.99,
  "RequestId": "550e8400-e29b-41d4-a716-446655440000",
  "Timestamp": "2024-01-15T10:30:45Z"
}
```

Structured values:

```json
{
  "User": {
    "Id": 123,
    "Name": "John Doe",
    "Email": "john@example.com"
  }
}
```

Arrays/Sequences:

```json
{
  "Roles": ["Admin", "User", "Manager"],
  "Numbers": [1, 2, 3, 4, 5]
}
```

Supported scalar types:

- Primitive types: `string`, `bool`, `int`, `long`, `double`, `float`, `decimal`, `byte`, `short`, etc.
- Date/time types: `DateTime`, `DateTimeOffset`, `TimeSpan`, `DateOnly`, `TimeOnly` (as ISO strings)
- Other types: `Guid` (as string), `Enum` (as string), `BigInteger` (as string), `char` (as string)
- `null` values are preserved

Note: By default, properties enriched via `FromLogContext()` or `WithProperty()` are included in this column. You can extract specific properties to dedicated columns using custom mappings (see below).
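For reference, structured and sequence values like the examples above come from Serilog's standard `@` destructuring operator in the message template (the property names here are illustrative):

```csharp
using Serilog;

// "@" tells Serilog to destructure the object into a structured value,
// which this sink serializes into the Properties column as nested JSON;
// the array becomes a JSON sequence.
Log.Information("User {@User} signed in with roles {Roles}",
    new { Id = 123, Name = "John Doe", Email = "john@example.com" },
    new[] { "Admin", "User", "Manager" });
```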
Add custom property mappings to log additional data:
```csharp
Log.Logger = new LoggerConfiguration()
    .Enrich.WithProperty("ApplicationName", "MyApp")
    .Enrich.WithProperty("ApplicationVersion", "1.0.0")
    .Enrich.WithProperty("EnvironmentName", "Production")
    .WriteTo.SqlServer(config =>
    {
        config.ConnectionString = connectionString;
        config.TableName = "LogExtended";

        // Add custom property mappings
        config.AddPropertyMapping("ApplicationName");
        config.AddPropertyMapping("ApplicationVersion");
        config.AddPropertyMapping("EnvironmentName");
    })
    .CreateLogger();
```

Corresponding table structure:
```sql
CREATE TABLE [dbo].[LogExtended]
(
    [Id] BIGINT IDENTITY(1,1) NOT NULL,
    [Timestamp] DATETIMEOFFSET NOT NULL,
    [Level] NVARCHAR(50) NOT NULL,
    [Message] NVARCHAR(MAX) NULL,
    [TraceId] NVARCHAR(100) NULL,
    [SpanId] NVARCHAR(100) NULL,
    [Exception] NVARCHAR(MAX) NULL,
    [Properties] NVARCHAR(MAX) NULL,
    [SourceContext] NVARCHAR(1000) NULL,
    [ApplicationName] NVARCHAR(500) NULL,
    [ApplicationVersion] NVARCHAR(500) NULL,
    [EnvironmentName] NVARCHAR(500) NULL,
    CONSTRAINT [PK_LogExtended] PRIMARY KEY CLUSTERED ([Id] ASC)
);
```

For more control, define custom mappings manually:
```csharp
config.Mappings.Add(
    new ColumnMapping<LogEvent>(
        ColumnName: "MachineName",
        ColumnType: typeof(string),
        GetValue: logEvent => Environment.MachineName,
        Nullable: true,
        Size: 100
    )
);
```

Note: When you specify a `Size` for string columns, the sink automatically truncates values that exceed the specified length to prevent SQL insert errors. For example, if `Size` is set to 100 and a value is 150 characters, it will be truncated to 100 characters before insertion. Columns without a `Size` specified (or with `Size: null`) will not be truncated.
```csharp
config.Mappings.Clear(); // Remove all default mappings
config.Mappings.Add(MappingDefaults.TimestampMapping);
config.Mappings.Add(MappingDefaults.LevelMapping);
config.Mappings.Add(MappingDefaults.MessageMapping);
// Add only the mappings you need
```

```csharp
using Serilog;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSerilog(loggerConfiguration =>
{
    loggerConfiguration
        .ReadFrom.Configuration(builder.Configuration)
        .Enrich.FromLogContext()
        .WriteTo.Console()
        .WriteTo.SqlServer(config =>
        {
            config.ConnectionString = builder.Configuration.GetConnectionString("Serilog");
            config.TableName = "LogEvent";
        });
});

var app = builder.Build();
app.UseSerilogRequestLogging();
app.Run();
```

```json
{
  "ConnectionStrings": {
    "Serilog": "Data Source=(local);Initial Catalog=Serilog;Integrated Security=True;TrustServerCertificate=True;"
  },
  "Serilog": {
    "MinimumLevel": {
      "Default": "Information",
      "Override": {
        "Microsoft": "Warning",
        "System": "Warning"
      }
    }
  }
}
```

Adjust batch size and timeout based on your logging volume:
```csharp
config.BatchSizeLimit = 1000; // Larger batches for high-volume scenarios
config.BufferingTimeLimit = TimeSpan.FromSeconds(10); // Longer timeout for efficiency
```

Configure bulk copy behavior:
```csharp
using Microsoft.Data.SqlClient;

config.BulkCopyOptions = SqlBulkCopyOptions.TableLock | SqlBulkCopyOptions.FireTriggers;
```

Create appropriate indexes for your query patterns:
```sql
-- Optimize for time-based queries
CREATE INDEX [IX_LogEvent_Timestamp] ON [dbo].[LogEvent] ([Timestamp] DESC);

-- Optimize for filtering by level
CREATE INDEX [IX_LogEvent_Level] ON [dbo].[LogEvent] ([Level] ASC);

-- Optimize for distributed tracing queries
CREATE INDEX [IX_LogEvent_TraceId] ON [dbo].[LogEvent] ([TraceId] ASC);
```

- Use Batching: The sink uses batching by default for optimal performance
- Add Indexes: Create indexes on columns you frequently query (Timestamp, Level, TraceId)
- Data Compression: Use page compression to reduce storage costs
- Partition Large Tables: Consider table partitioning for very large log volumes
- Implement Archival: Archive or delete old log data regularly
- Monitor Performance: Monitor SQL Server performance and adjust batch sizes accordingly
- Use Connection Pooling: Connection pooling is enabled by default in SqlClient
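For the archival point above, one common approach is a scheduled batched delete; this is a sketch only, assuming the default `LogEvent` table with its `Timestamp` column and a 30-day retention policy (adjust both to your schema):

```sql
-- Delete logs older than 30 days in small batches to keep each
-- transaction short and avoid long blocking on a busy log table.
DECLARE @BatchSize INT = 10000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM [dbo].[LogEvent]
    WHERE [Timestamp] < DATEADD(DAY, -30, SYSDATETIMEOFFSET());

    -- Stop once a pass deletes fewer rows than the batch size.
    IF @@ROWCOUNT < @BatchSize BREAK;
END
```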
This library does not automatically create database tables. You are responsible for creating the target log table before using the sink.
The sink prioritizes performance and efficiency by design. Automatic table creation would introduce several challenges:
- Complexity: Managing different SQL Server versions, permissions, and configuration options would add significant complexity
- Performance Overhead: Schema checks and potential table creation on startup would add latency to application initialization
- Security Concerns: Many production environments don't grant `CREATE TABLE` permissions to application accounts
- Flexibility Loss: Users have different requirements for:
- Partitioning strategies
- Index configurations
- Data compression settings
- Filegroup placement
- Custom column types and constraints
You have full control over your table schema. At minimum, create columns that match your configured mappings:
```sql
CREATE TABLE [dbo].[LogEvent]
(
    [Id] BIGINT IDENTITY(1,1) NOT NULL,
    [Timestamp] DATETIMEOFFSET NOT NULL,
    [Level] NVARCHAR(50) NOT NULL,
    [Message] NVARCHAR(MAX) NULL,
    [TraceId] NVARCHAR(100) NULL,
    [SpanId] NVARCHAR(100) NULL,
    [Exception] NVARCHAR(MAX) NULL,
    [Properties] NVARCHAR(MAX) NULL,
    [SourceContext] NVARCHAR(1000) NULL,
    CONSTRAINT [PK_LogEvent] PRIMARY KEY CLUSTERED ([Id] ASC)
)
WITH (DATA_COMPRESSION = PAGE);
```

You can add indexes, partitioning, and other optimizations based on your specific requirements. See the Quick Start section for complete table examples.
Enable Serilog self-logging to diagnose connection problems:
```csharp
using Serilog.Debugging;

SelfLog.Enable(Console.Error);
```

Ensure your database table schema matches your column mappings. Missing columns will cause bulk insert to fail.
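To compare the table schema against your configured mappings, you can list its columns with a standard catalog query (this assumes the default `dbo.LogEvent` table; substitute your own schema and table name):

```sql
-- List each column's type, max length, and nullability for comparison
-- against the sink's column mappings.
SELECT COLUMN_NAME, DATA_TYPE, CHARACTER_MAXIMUM_LENGTH, IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_SCHEMA = 'dbo' AND TABLE_NAME = 'LogEvent'
ORDER BY ORDINAL_POSITION;
```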
- Increase `BatchSizeLimit` for high-volume scenarios
- Increase `BufferingTimeLimit` to allow more batching
- Ensure proper indexing on the database table
- Monitor SQL Server performance counters
Contributions are welcome! Please feel free to submit a Pull Request.
This project is licensed under the MIT License - see the LICENSE file for details.