LLM observability with scoped tracing and OTLP export for .NET applications.
Add a project reference or, once published, install via NuGet:
```shell
dotnet add package Tracentic.Sdk
```

The SDK targets .NET 6.0, 8.0, and 10.0.
Point the SDK at the Tracentic ingestion endpoint by setting Endpoint = "https://tracentic.dev" on TracenticOptions. This is the hosted service URL that receives spans over OTLP/HTTP JSON - use it unless you're running a self-hosted Tracentic deployment, in which case set your own URL.
```csharp
builder.Services.AddTracentic(opts =>
{
    opts.ApiKey = "your-api-key";
    opts.Endpoint = "https://tracentic.dev";
    opts.ServiceName = "my-service";
});
```

Register Tracentic in your DI container at startup:
```csharp
builder.Services.AddTracentic(opts =>
{
    opts.ApiKey = "your-api-key";
    opts.Endpoint = "https://tracentic.dev"; // optional, defaults to this endpoint
    opts.ServiceName = "my-service";
    opts.Environment = "production";

    // Required for cost tracking. Without this, llm.cost.total_usd is
    // omitted and the SDK warns once per unpriced model.
    opts.CustomPricing = new()
    {
        ["claude-sonnet-4-20250514"] = (3.00, 15.00),
        ["gpt-4o"] = (2.50, 10.00),
    };

    opts.GlobalAttributes = new()
    {
        ["region"] = "us-east-1",
        ["version"] = "2.1.0",
    };
});
```

Then inject ITracentic and start tracing:
```csharp
public class MyService(ITracentic tracentic)
{
    public async Task<string> Summarize(string text)
    {
        var scope = tracentic.Begin("summarize", attributes: new()
        {
            ["user_id"] = "user-123",
        });

        var startedAt = DateTimeOffset.UtcNow;
        var result = await CallLlm(text);
        var endedAt = DateTimeOffset.UtcNow;

        tracentic.RecordSpan(scope, new TracenticSpan
        {
            StartedAt = startedAt,
            EndedAt = endedAt,
            Provider = "anthropic",
            Model = "claude-sonnet-4-20250514",
            InputTokens = result.Usage.InputTokens,
            OutputTokens = result.Usage.OutputTokens,
            OperationType = "chat",
        });

        return result.Text;
    }
}
```

Group related LLM calls under a logical scope. Nest scopes for multi-step pipelines:
```csharp
var pipeline = tracentic.Begin("rag-pipeline", correlationId: "order-42");

// Child scope inherits the parent link automatically
var synthesis = pipeline.CreateChild("synthesis", attributes: new()
{
    ["strategy"] = "hybrid",
});
```

To attach an exception to a recorded span:

```csharp
tracentic.RecordError(scope, span, exception);
```

For standalone LLM calls that don't belong to a larger operation:
```csharp
tracentic.RecordSpan(new TracenticSpan
{
    StartedAt = startedAt,
    EndedAt = endedAt,
    Provider = "openai",
    Model = "gpt-4o-mini",
    InputTokens = 200,
    OutputTokens = 50,
    OperationType = "chat",
});
```

CustomPricing is required for cost tracking. The SDK does not ship with built-in pricing because model prices change frequently and vary by contract. If a span has token data but no matching pricing entry, llm.cost.total_usd is omitted and the SDK emits a warning once per model via System.Diagnostics.Trace.
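The sample rates below are consistent with per-million-token pricing; under that assumption, the lookup-and-omit behaviour can be sketched in plain C# (PricingSketch and TotalCostUsd are illustrative names, not SDK APIs):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical re-implementation of the SDK's cost lookup, assuming each
// CustomPricing tuple is (input USD, output USD) per million tokens.
public static class PricingSketch
{
    public static double? TotalCostUsd(
        IReadOnlyDictionary<string, (double InPerMTok, double OutPerMTok)> pricing,
        string model, long inputTokens, long outputTokens)
    {
        // No pricing entry: llm.cost.total_usd is omitted from the span
        // and a warning is emitted once per model.
        if (!pricing.TryGetValue(model, out var rate))
            return null;

        return inputTokens * rate.InPerMTok / 1_000_000.0
             + outputTokens * rate.OutPerMTok / 1_000_000.0;
    }
}
```

With gpt-4o at (2.50, 10.00), a call with 200,000 input and 50,000 output tokens works out to 0.50 + 0.50 = 1.00 USD.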
```csharp
opts.CustomPricing = new()
{
    ["claude-sonnet-4-20250514"] = (3.00, 15.00),
    ["gpt-4o"] = (2.50, 10.00),
};
```

Static attributes applied to every span:
```csharp
opts.GlobalAttributes = new()
{
    ["region"] = "us-east-1",
    ["version"] = "2.1.0",
};
```

Dynamic attributes can be set/removed at runtime:
```csharp
TracenticGlobalContext.Current.Set("deploy_id", "deploy-abc");
TracenticGlobalContext.Current.Remove("deploy_id");
```

The SDK automatically registers middleware via a startup filter when using AddTracentic(). Configure per-request attributes:
```csharp
opts.RequestAttributes = (context) => new Dictionary<string, object?>
{
    ["http.method"] = context.Request.Method,
    ["user_id"] = context.User.FindFirst("sub")?.Value,
};
```

If you need to control where the middleware runs in the pipeline (e.g. after authentication so context.User is populated), call UseTracentic() explicitly instead:
```csharp
app.UseAuthentication();
app.UseTracentic(); // must come after auth if RequestAttributes reads context.User
```

When UseTracentic() is called explicitly, the automatic startup filter registration is skipped.
Tracentic does not propagate scope IDs automatically - you pass them explicitly through whatever transport connects your services (HTTP headers, message properties, etc.).
For cross-service linking to work, both services must integrate the Tracentic SDK (or implement the OTLP JSON ingest API directly) and their API keys must belong to the same tenant. Spans from different tenants are isolated and cannot be linked.
Use the exported TracenticHeaders.ScopeId constant on both ends rather than a string literal - typos silently break linking.
Via HTTP header:

```csharp
// Service A - outgoing request
var scope = tracentic.Begin("gateway-handler");
httpClient.DefaultRequestHeaders.Add(TracenticHeaders.ScopeId, scope.Id);

// Service B - incoming request
var parentScopeId = context.Request.Headers[TracenticHeaders.ScopeId].FirstOrDefault();
var linked = tracentic.Begin("worker", parentScopeId: parentScopeId);
```

Via service bus message:
```csharp
// Producer
var scope = tracentic.Begin("order-processor");
var message = new ServiceBusMessage(payload);
message.ApplicationProperties[TracenticHeaders.ScopeId] = scope.Id;
await sender.SendMessageAsync(message);

// Consumer
var parentScopeId = message.ApplicationProperties[TracenticHeaders.ScopeId] as string;
var linked = tracentic.Begin("fulfillment", parentScopeId: parentScopeId);
```

Serverless runtimes freeze or kill the process between invocations, so the AppDomain.ProcessExit handler may never fire and any spans still in the buffer are lost. Force a flush before your handler returns:
```csharp
public async Task<APIGatewayProxyResponse> Handler(
    APIGatewayProxyRequest request,
    ILambdaContext context)
{
    try
    {
        return await DoWork(request);
    }
    finally
    {
        // Resolve the OTel TracerProvider from DI and force-flush
        // before the runtime freezes the container.
        _tracerProvider.ForceFlush(timeoutMilliseconds: 5000);
    }
}
```

Without this, you will see spans appear inconsistently - only when a container happens to be reused and the next invocation triggers a flush.
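One hypothetical way to wire up the _tracerProvider field used above, assuming AddTracentic registers OpenTelemetry's TracerProvider in the container (this wiring is an assumption, not a documented SDK contract; names like "my-lambda" are illustrative):

```csharp
using System;
using Microsoft.Extensions.DependencyInjection;
using OpenTelemetry.Trace;

public class Function
{
    private readonly TracerProvider _tracerProvider;

    public Function()
    {
        // Build the container once per cold start; a warm container
        // reuses this instance across invocations.
        var services = new ServiceCollection();
        services.AddTracentic(opts =>
        {
            opts.ApiKey = Environment.GetEnvironmentVariable("TRACENTIC_API_KEY");
            opts.ServiceName = "my-lambda";
        });

        _tracerProvider = services.BuildServiceProvider()
            .GetRequiredService<TracerProvider>();
    }
}
```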
The SDK owns a single long-lived HttpClient dedicated to the ingest endpoint. Connections are pooled and recycled every 5 minutes so long-running processes pick up DNS changes. To customize TLS, proxy, or outbound HTTP middleware (e.g. Polly retry), supply your own HttpMessageHandler:
```csharp
opts.HttpMessageHandlerFactory = () => new SocketsHttpHandler
{
    PooledConnectionLifetime = TimeSpan.FromMinutes(5),
    Proxy = new WebProxy("http://corp-proxy:8080"),
};
opts.ExportTimeout = TimeSpan.FromSeconds(10);
```

The SDK owns the returned handler's lifetime and disposes it on shutdown. Do not share the handler across other HttpClient instances.
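As one example of outbound HTTP middleware, a hand-rolled retry decorator could wrap the transport. This is a sketch, not part of the SDK (a library such as Polly would be the more common choice), and FlakyHandler exists only to demonstrate the behaviour:

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Illustrative retry decorator: retries 5xx responses and connection errors
// with exponential backoff. Request content must be rebufferable for a
// resend to succeed.
public sealed class RetryHandler : DelegatingHandler
{
    private readonly int _maxAttempts;

    public RetryHandler(HttpMessageHandler inner, int maxAttempts = 3)
        : base(inner) => _maxAttempts = maxAttempts;

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken ct)
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                var response = await base.SendAsync(request, ct);
                if ((int)response.StatusCode < 500 || attempt == _maxAttempts)
                    return response;
                response.Dispose(); // transient server error: discard and retry
            }
            catch (HttpRequestException) when (attempt < _maxAttempts)
            {
                // connection-level failure: fall through to the backoff delay
            }
            await Task.Delay(TimeSpan.FromMilliseconds(100 * (1 << attempt)), ct);
        }
    }
}

// Stub transport that fails twice, then succeeds - demonstrates the retry
// behaviour without touching the network.
public sealed class FlakyHandler : HttpMessageHandler
{
    public int Calls;

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken ct)
    {
        Calls++;
        var status = Calls < 3 ? HttpStatusCode.InternalServerError : HttpStatusCode.OK;
        return Task.FromResult(new HttpResponseMessage(status));
    }
}
```

Wired into the SDK as opts.HttpMessageHandlerFactory = () => new RetryHandler(new SocketsHttpHandler { PooledConnectionLifetime = TimeSpan.FromMinutes(5) });.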
| Option | Default | Description |
|---|---|---|
| ApiKey | null | API key. If null, spans are created locally but not exported |
| ServiceName | "unknown-service" | Service identifier in the dashboard |
| Endpoint | "https://tracentic.dev" | Tracentic ingestion endpoint. Use https://tracentic.dev for the hosted service; override only for self-hosted deployments |
| Environment | "production" | Deployment environment tag |
| Collector | remote (cloud) | Where spans are sent. See TracenticCollector.Remote(...) |
| CustomPricing | null | Model pricing for cost calculation |
| GlobalAttributes | null | Static attributes on every span |
| RequestAttributes | null | Per-request attribute callback (ASP.NET Core) |
| AttributeLimits | platform defaults | Limits on attribute count, key/value length |
| HttpMessageHandlerFactory | SocketsHttpHandler w/ 5-min pooled lifetime | Custom HTTP transport for the OTLP exporter |
| ExportTimeout | 30s | Per-request timeout for OTLP exports |
| Debug | false | Enable verbose diagnostic logging (see Debugging below) |
By default the SDK only emits warnings and errors through System.Diagnostics - export failures, missing pricing entries, and exceptions. To see the full export lifecycle (batch size, endpoint, success/failure, shutdown), enable debug mode:
```csharp
builder.Services.AddTracentic(opts =>
{
    opts.ApiKey = "...";
    opts.ServiceName = "my-service";
    opts.Debug = true;
});
```

With Debug = true, the SDK writes verbose events to the Tracentic-Sdk EventSource. Capture them with dotnet-trace:

```shell
dotnet-trace collect --providers Tracentic-Sdk:Verbose -- dotnet run
```

Events emitted in debug mode:
| Event | Level | Message |
|---|---|---|
| ExportStarted | Verbose | Flushing {count} span(s) to {endpoint} |
| ExportSucceeded | Verbose | Export succeeded: HTTP {status} ({count} spans) |
| ShutdownStarted | Verbose | Exporter shutting down... |
| ShutdownComplete | Verbose | Exporter shutdown complete |
Warning and error events are always emitted regardless of the debug flag:
| Event | Level | Message |
|---|---|---|
| ExportFailed | Warning | OTLP export failed: HTTP {status} {reason} - {body} |
| ExportException | Error | OTLP export threw: {type}: {message} |
The ExportTimeout option controls the per-request timeout for OTLP exports (default: 30 seconds). If exports are timing out in your environment (e.g. CI runners, serverless cold starts), increase it:
```csharp
opts.ExportTimeout = TimeSpan.FromSeconds(60);
```

Run the test suite from the test project directory:

```shell
cd tests/Tracentic.Sdk.Tests

# All tests
dotnet test

# A single test class
dotnet test --filter "FullyQualifiedName~ScopeTests"

# A single test
dotnet test --filter "FullyQualifiedName~CreateChild_SetsParentId"
```

| File | What it covers |
|---|---|
| ScopeTests.cs | Scope creation, nesting, cross-service linking, correlation IDs |
| GlobalContextTests.cs | Global context set/get/remove, per-request lifecycle, thread safety |
| AttributeMergeTests.cs | Three-layer merge priority (global < scope < span), collision resolution |
| AttributeLimitsTests.cs | Attribute count caps, key/value length truncation, platform maximums |
| CostCalculationTests.cs | Pricing lookup, known/unknown models, case sensitivity |
| RequestMiddlewareTests.cs | Middleware attribute injection, cleanup after request completion |
| CollectorTests.cs | Collector configuration, null API key handling |