OpenTelemetry distributed tracing integration for Robot Framework test execution.
robotframework-tracer is a Robot Framework listener plugin that automatically creates distributed traces, logs, and metrics for your test execution using OpenTelemetry. It captures the complete test hierarchy (suites → tests → keywords) as spans and exports them to any OpenTelemetry-compatible backend like Jaeger, Grafana Tempo, SigNoz, or Zipkin.
This enables you to:
- Visualize test execution flow with detailed timing information
- Debug test failures by examining the complete execution trace with correlated logs
- Analyze performance and identify slow keywords or tests
- Monitor test health with metrics dashboards and alerting
- Correlate tests with application traces in distributed systems
- Track test trends across CI/CD pipelines
- Propagate trace context to your System Under Test (SUT)
- See running tests live in trace viewers during pabot parallel execution
The tracer implements the Robot Framework Listener v3 API and creates OpenTelemetry spans for each test execution phase:
```
Suite Span (root)
├── Suite Setup (SETUP span)
├── Test Case Span
│   ├── Keyword Span
│   │   └── Nested Keyword Span
│   └── Keyword Span
├── Test Case Span
│   └── Keyword Span
└── Suite Teardown (TEARDOWN span)
```
Each span includes rich metadata: test names, tags, status (PASS/FAIL), timing, arguments, and error details.
Additionally:
- Logs are sent via OpenTelemetry Logs API with trace correlation
- Metrics are automatically emitted for test execution analysis
```shell
pip install robotframework-tracer
```

For development:

```shell
# Clone the repository
git clone <repository-url>
cd robotframework-tracer

# Create and activate virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install in development mode
pip install -e ".[dev]"
```

See docs/DEVELOPMENT.md for detailed development setup instructions.
Start a local Jaeger instance:

```shell
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```

Then run your tests with the listener attached:

```shell
# Basic usage (uses default endpoint localhost:4318)
robot --listener robotframework_tracer.TracingListener tests/

# With environment variables (recommended for custom endpoints)
export OTEL_EXPORTER_OTLP_ENDPOINT=http://jaeger:4318/v1/traces
export OTEL_SERVICE_NAME=my-tests
robot --listener robotframework_tracer.TracingListener tests/

# With inline options (colon-separated key=value pairs)
robot --listener "robotframework_tracer.TracingListener:service_name=my-tests:capture_logs=true" tests/

# With custom endpoint (URL colons are automatically handled)
robot --listener "robotframework_tracer.TracingListener:endpoint=http://jaeger:4318/v1/traces:service_name=my-tests" tests/
```

Note: Robot Framework splits listener arguments on `:`. Use colons to separate options. URLs containing `://` are automatically reconstructed.

Open http://localhost:16686 in your browser to see your test traces in the Jaeger UI.
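Because Robot Framework splits listener arguments on every `:`, a URL such as `http://jaeger:4318/v1/traces` arrives in fragments and has to be glued back together. The following sketch shows one way such reconstruction could work; it is illustrative only and not the listener's actual parsing code.

```python
def parse_listener_options(parts):
    """Re-join key=value fragments that were split on ':'.

    Illustrative sketch: fragments without '=' (e.g. '//jaeger' or
    '4318/v1/traces') are assumed to be pieces of the previous value's
    URL and are re-attached with the ':' that was removed.
    """
    options = {}
    current_key = None
    for part in parts:
        if "=" in part and not part.startswith("//"):
            key, _, value = part.partition("=")
            options[key] = value
            current_key = key
        elif current_key is not None:
            # Fragment produced by splitting a URL on ':' -- re-attach it.
            options[current_key] += ":" + part
    return options
```

For example, `"endpoint=http://jaeger:4318/v1/traces:service_name=my-tests"` split on `:` yields four fragments, which this sketch reassembles into two options.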
The tracer supports two forms of trace context propagation:
When the TRACEPARENT environment variable is set (following the W3C Trace Context standard), the suite span automatically becomes a child of the external parent trace. This enables:
- CI/CD correlation: a pipeline step creates a parent span and exports `TRACEPARENT` before running tests
- Parallel execution: tools like pabot can use a wrapper script to create a parent span and propagate context to worker processes
- Nested orchestration: any process that sets `TRACEPARENT` in the environment before invoking Robot Framework
```shell
# Example: set by a CI pipeline or wrapper script
export TRACEPARENT="00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01"
export TRACESTATE="vendor1=value1"  # optional
robot --listener robotframework_tracer.TracingListener tests/
```

The suite span will appear as a child of trace `4bf92f3577b34da6a3ce929d0e0e4736` in your tracing backend.
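A wrapper script needs to mint a `TRACEPARENT` value in the W3C format shown above (`version-traceid-spanid-flags`). The helpers below are an illustrative sketch of building and parsing that header, following the published W3C Trace Context field widths; the function names are not part of this project.

```python
import secrets

def make_traceparent(sampled=True):
    """Build a W3C traceparent header: version-traceid-parentid-flags.

    Illustrative helper, not part of robotframework-tracer.
    """
    trace_id = secrets.token_hex(16)  # 32 lowercase hex chars
    span_id = secrets.token_hex(8)    # 16 lowercase hex chars
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header):
    """Split a traceparent header into its four W3C-defined fields."""
    version, trace_id, span_id, flags = header.split("-")
    return {
        "version": version,
        "trace_id": trace_id,
        "span_id": span_id,
        "sampled": flags == "01",
    }
```

A CI wrapper would set `os.environ["TRACEPARENT"] = make_traceparent()` before launching `robot`, making every suite span a child of that trace.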
The tracer automatically makes trace context available as Robot Framework variables for propagating to your System Under Test:
```robotframework
*** Test Cases ***
Test API With Distributed Tracing
    # HTTP headers automatically include trace context
    ${response}=    POST    http://my-sut/api
    ...    json={"data": "test"}
    ...    headers=${TRACE_HEADERS}

    # For custom protocols, use individual components
    ${diameter_msg}=    Create Diameter Request
    ...    trace_id=${TRACE_ID}
    ...    span_id=${SPAN_ID}
```

Available variables:

- `${TRACE_HEADERS}` - HTTP headers dictionary
- `${TRACE_ID}` - 32-character hex trace ID
- `${SPAN_ID}` - 16-character hex span ID
- `${TRACEPARENT}` - W3C traceparent header
- `${TRACESTATE}` - W3C tracestate header
See docs/trace-propagation.md for complete examples.
```shell
# Minimal
robot --listener robotframework_tracer.TracingListener tests/

# Custom endpoint
robot --listener robotframework_tracer.TracingListener:endpoint=http://jaeger:4318/v1/traces tests/

# Endpoint and service name
robot --listener "robotframework_tracer.TracingListener:endpoint=http://jaeger:4318/v1/traces,service_name=my-tests" tests/

# Full configuration
robot --listener "robotframework_tracer.TracingListener:\
endpoint=http://localhost:4318/v1/traces,\
service_name=robot-tests,\
protocol=http,\
capture_arguments=true,\
max_arg_length=200" tests/

# Environment variables
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
export OTEL_SERVICE_NAME=robot-framework-tests
robot --listener robotframework_tracer.TracingListener tests/
```

| Option | Default | Description |
|---|---|---|
| `endpoint` | `http://localhost:4318/v1/traces` | OTLP endpoint URL |
| `service_name` | `rf` | Service name in traces. Use `auto` to derive from suite name (ideal for pabot) |
| `protocol` | `http` | Protocol: `http` or `grpc` |
| `span_prefix_style` | `none` | Span prefix style: `none`, `text`, `emoji` |
| `capture_arguments` | `true` | Capture keyword arguments |
| `max_arg_length` | `200` | Max length for arguments |
| `capture_logs` | `false` | Capture log messages via Logs API |
| `capture_metrics` | `true` | Emit OpenTelemetry metrics for test execution |
| `log_level` | `INFO` | Minimum log level (DEBUG, INFO, WARN, ERROR) |
| `max_log_length` | `500` | Max length for log messages |
| `sample_rate` | `1.0` | Sampling rate (0.0-1.0; 1.0 = no sampling) |
| `trace_output_file` | (empty) | Write spans as OTLP JSON to a local file (`auto` for suite-name + trace-ID naming) |
| `trace_output_format` | `json` | Output format: `json` or `gz` (gzip-compressed) |
| `trace_output_filter` | (empty) | Output filter preset (`minimal`, `full`) or path to a custom filter `.json` file |
Trace file import: The output file can be imported into any OTLP-compatible backend (Jaeger, Tempo, etc.) by POSTing each line to the OTLP HTTP endpoint. See docs/configuration.md for details.
Each span includes relevant Robot Framework metadata:
Suite spans:
- `rf.suite.name` - Suite name
- `rf.suite.source` - Suite file path
- `rf.suite.id` - Suite ID
- `rf.version` - Robot Framework version
Test spans:
- `rf.test.name` - Test case name
- `rf.test.id` - Test ID
- `rf.test.tags` - Test tags
- `rf.test.lineno` - Source line number (RF 5+)
- `rf.status` - PASS/FAIL/SKIP
- `rf.elapsed_time` - Execution time
Keyword spans:
- `rf.keyword.name` - Keyword name
- `rf.keyword.type` - SETUP/TEARDOWN/KEYWORD
- `rf.keyword.library` - Library name
- `rf.keyword.args` - Arguments (if enabled)
- `rf.keyword.lineno` - Source line number (RF 5+)
- `rf.status` - PASS/FAIL
When `capture_logs=true`, Robot Framework log messages are sent to the OpenTelemetry Logs API and automatically correlated with traces:
Log Attributes:
- `body` - Log message text
- `severity_text` - Log level (INFO, WARN, ERROR, FAIL)
- `severity_number` - Numeric severity (9=INFO, 13=WARN, 17=ERROR, 21=FAIL)
- `trace_id` - Correlated trace ID
- `span_id` - Correlated span ID (keyword/test that generated the log)
- `rf.log.level` - Original Robot Framework log level
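The severity numbers listed above (9, 13, 17, 21) come from the OpenTelemetry log data model, where each named level anchors a numeric range. A mapping from Robot Framework levels might look like the following sketch; the DEBUG/TRACE values follow the same OTel model, but the helper itself is illustrative, not the tracer's actual code.

```python
# OpenTelemetry severity numbers for Robot Framework log levels.
# INFO/WARN/ERROR/FAIL match the attribute table above; TRACE and DEBUG
# use the OTel log data model's anchor values. Illustrative sketch only.
RF_LEVEL_TO_OTEL_SEVERITY = {
    "TRACE": 1,
    "DEBUG": 5,
    "INFO": 9,
    "WARN": 13,
    "ERROR": 17,
    "FAIL": 21,  # mapped to the OTel FATAL range, per the table above
}

def severity_number(rf_level):
    """Return the OTel severity number for an RF level, defaulting to INFO."""
    return RF_LEVEL_TO_OTEL_SEVERITY.get(rf_level.upper(), 9)
```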
Endpoints:

- Traces: `/v1/traces` (OTLP)
- Logs: `/v1/logs` (OTLP)
- Metrics: `/v1/metrics` (OTLP)
Logs appear in your observability backend's Logs UI with full trace correlation, enabling you to:
- Jump from log → trace
- Jump from trace → related logs
- Filter logs by trace ID
- View logs in context of test execution
The tracer automatically emits OpenTelemetry metrics for test execution analysis:
Test Metrics:
- `rf.tests.total` - Total tests executed (with suite dimension)
- `rf.tests.passed` - Tests that passed
- `rf.tests.failed` - Tests that failed (with suite and tag dimensions)
- `rf.tests.skipped` - Tests that were skipped
- `rf.test.duration` - Test execution time histogram (with suite and status dimensions)
Suite Metrics:
- `rf.suite.duration` - Suite execution time histogram (with suite and status dimensions)
Keyword Metrics:
- `rf.keywords.executed` - Total keywords executed (with type dimension)
- `rf.keyword.duration` - Keyword execution time histogram (with keyword, type, and status dimensions)
Metrics enable:
- Dashboards - Visualize test health and trends over time
- Alerting - Alert when pass rate drops or execution time increases
- Performance Analysis - Track test execution time trends
- Failure Analysis - Group failures by suite or tag
Metrics are sent to the `/v1/metrics` endpoint and share the same service name and resource attributes as traces for correlation.
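As a small example of the dashboard and alerting use cases above, a pass rate can be derived from the `rf.tests.*` counters. The function below is an illustrative sketch of that aggregation over counter totals as one might scrape them from a backend; it is not part of the tracer itself.

```python
def pass_rate(counters):
    """Compute the pass rate from rf.tests.* counter totals.

    `counters` maps metric name -> accumulated total, e.g. values queried
    from a metrics backend. Returns None when no tests were recorded.
    Illustrative sketch of a dashboard/alerting computation.
    """
    total = counters.get("rf.tests.total", 0)
    if total == 0:
        return None
    return counters.get("rf.tests.passed", 0) / total
```

An alerting rule would then fire when `pass_rate` drops below a chosen threshold.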
Works with any OpenTelemetry-compatible backend:
- Jaeger - Open source tracing platform
- Grafana Tempo - High-scale distributed tracing
- Zipkin - Distributed tracing system
- AWS X-Ray - AWS distributed tracing
- Honeycomb - Observability platform
- Datadog - Monitoring and analytics
See docs/backends.md for backend-specific setup guides.
- Python 3.8+
- Robot Framework 6.0+
- OpenTelemetry SDK
- jsonschema (for output filter validation)
- Architecture - Design and architecture details
- Implementation Plan - Development roadmap
- Configuration Guide - Detailed configuration reference
- Attribute Reference - Complete attribute documentation
- Backend Setup - Backend-specific guides
See the examples/ directory for complete examples:
- Basic usage with Jaeger
- Advanced configuration
- CI/CD integration
- Multiple backend setups
Contributions are welcome! Please see docs/CONTRIBUTING.md for guidelines.
Apache License 2.0 - See docs/LICENSE for details.
Current Version: v0.4.0
Status: Production-ready with full observability (traces, logs, metrics)
Features:
- ✅ Distributed tracing with parent-child span relationships
- ✅ Log capture via OpenTelemetry Logs API with trace correlation
- ✅ Metrics emission for test execution analysis and monitoring
- ✅ Trace context propagation (inbound via TRACEPARENT, outbound to SUT)
- ✅ Support for parallel execution (pabot)
- ✅ Live test visibility during pabot runs (signal spans + immediate root export)
See docs/CHANGELOG.md for version history and docs/IMPLEMENTATION_PLAN.md for the development roadmap.
