
Add structured logging with tracing #22

@cbaugus

Description


Summary

The application currently uses println! statements for output, which lack structure, log levels, and observability features. Replace them with the tracing crate for structured, leveled logging that integrates well with observability platforms.

Current State

// Current logging approach
println!("Starting load test with {} concurrent tasks", max_concurrent);
println!("Target URL: {}", target_url);
println!("Load test finished. requests_total: {}", requests_total);

Problems:

  • No log levels (info, warn, error, debug)
  • No structured fields for filtering/searching
  • No timestamps
  • Can't adjust verbosity at runtime
  • Difficult to integrate with log aggregation systems

Proposed Solution

Use tracing + tracing-subscriber for structured logging:

use tracing::{info, warn, error, debug, instrument, span, Level};
use tracing_subscriber::EnvFilter;

#[tokio::main]
async fn main() {
    // Initialize tracing subscriber
    tracing_subscriber::fmt()
        .with_env_filter(
            EnvFilter::from_default_env()
                .add_directive("rust_loadtest=info".parse().unwrap())
        )
        .with_target(true)
        .with_thread_ids(true)
        .json() // Optional: JSON output for log aggregation
        .init();

    info!(
        target_url = %target_url,
        max_concurrent = max_concurrent,
        load_model = ?load_model,
        "Starting load test"
    );
}

Implementation Details

Dependencies

Add to Cargo.toml:

[dependencies]
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }

Log Levels

  • error!: Request failures, configuration errors, TLS errors
  • warn!: High-latency responses, rate limit adjustments, non-critical issues
  • info!: Test start/stop, configuration summary, final metrics
  • debug!: Individual requests, rate calculations, phase transitions
  • trace!: Very detailed internal state (optional)
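
A rough sketch of how these levels might map onto the load-test flow (the send_request helper, the latency threshold, and the field names are illustrative assumptions, not existing code):

use std::time::Duration;
use tracing::{debug, error, warn};

// Hypothetical stand-in for the real reqwest call.
async fn send_request(url: &str) -> Result<u16, String> {
    let _ = url;
    Ok(200)
}

async fn run_once(url: &str, worker_id: u32) {
    // debug!: per-request detail, hidden unless verbosity is raised
    debug!(worker_id, url = %url, "Sending request");

    match send_request(url).await {
        Ok(status) => {
            let latency = Duration::from_millis(850); // assume this was measured
            if latency > Duration::from_millis(500) {
                // warn!: non-critical, but worth surfacing
                warn!(worker_id, status, latency_ms = latency.as_millis() as u64, "Slow response");
            }
        }
        // error!: request failure
        Err(e) => error!(worker_id, error = %e, url = %url, "Request failed"),
    }
}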

Structured Fields

// Good: Structured fields
info!(
    status_code = response.status().as_u16(),
    latency_ms = duration.as_millis(),
    worker_id = worker_id,
    "Request completed"
);

// Avoid: String interpolation
info!("Request completed with status {} in {}ms", status, latency);

Instrumentation

Use #[instrument] attribute for automatic span creation:

#[instrument(skip(client))]
async fn run_worker(
    worker_id: u32,
    client: &Client,
    config: &Config,
) -> WorkerMetrics {
    debug!("Worker starting");
    // ... worker logic
    info!(requests_made = count, "Worker finished");
}
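
#[instrument] also covers futures handed to tokio::spawn, since the attribute wraps the returned future in its span. For futures built inline (e.g. an async block), the same effect can be had with tracing's Instrument extension trait. A minimal sketch, assuming a hypothetical spawn site:

use tracing::{info_span, Instrument};

// Hypothetical: spawn n workers, each tagged with its own span so log lines
// keep their worker_id across await points.
async fn spawn_workers(n: u32) {
    let mut handles = Vec::new();
    for worker_id in 0..n {
        let span = info_span!("worker", worker_id);
        handles.push(tokio::spawn(
            async move {
                tracing::debug!("worker body running");
            }
            .instrument(span),
        ));
    }
    for handle in handles {
        let _ = handle.await;
    }
}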

Environment Configuration

Control log level via environment variable:

RUST_LOG=info cargo run                    # Info and above
RUST_LOG=debug cargo run                   # Debug and above
RUST_LOG=rust_loadtest=debug cargo run     # Debug only for this crate
RUST_LOG=rust_loadtest::worker=trace cargo run  # Trace for worker module
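
The EnvFilter used in the initialization sketch above is what reads RUST_LOG. One common pattern (optional, shown only as a sketch) is to fall back to a sensible default when the variable is unset:

use tracing_subscriber::EnvFilter;

fn init_tracing() {
    // Use RUST_LOG if set and valid, otherwise default to info for this crate.
    let filter = EnvFilter::try_from_default_env()
        .unwrap_or_else(|_| EnvFilter::new("rust_loadtest=info"));

    tracing_subscriber::fmt().with_env_filter(filter).init();
}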

Output Formats

Support both human-readable and JSON output:

let subscriber = tracing_subscriber::fmt();

if std::env::var("LOG_FORMAT").unwrap_or_default() == "json" {
    subscriber.json().init();
} else {
    subscriber.init();
}

Human-readable:

2024-01-15T10:30:45.123Z  INFO rust_loadtest: Starting load test target_url="https://api.example.com" max_concurrent=10

JSON (for log aggregation):

{"timestamp":"2024-01-15T10:30:45.123Z","level":"INFO","target":"rust_loadtest","message":"Starting load test","target_url":"https://api.example.com","max_concurrent":10}

Acceptance Criteria

  • Replace all println! with appropriate tracing macros
  • Add tracing-subscriber initialization in main()
  • Support RUST_LOG environment variable for log level control
  • Add structured fields to all log messages
  • Use #[instrument] on key async functions
  • Support JSON output format via LOG_FORMAT=json
  • Document logging configuration in README

Example Logging Points

// Startup
info!(config = ?config, "Load test configuration loaded");

// Per-request (debug level to avoid spam)
debug!(
    url = %url,
    method = %method,
    status = status_code,
    latency_ms = latency.as_millis(),
    "Request completed"
);

// Rate adjustment
debug!(
    current_rps = current_rps,
    target_rps = target_rps,
    phase = ?phase,
    "Adjusting request rate"
);

// Errors
error!(
    error = %e,
    url = %url,
    "Request failed"
);

// Completion
info!(
    total_requests = total,
    duration_secs = duration.as_secs(),
    avg_latency_ms = avg_latency,
    "Load test completed"
);

Benefits

  • Observability: Integrates with Jaeger, Datadog, etc.
  • Debugging: Filter logs by level, module, or field values
  • Production-ready: JSON output for log aggregation
  • Performance: Zero-cost when disabled at compile time (see the Cargo.toml sketch after this list)
  • Async-aware: Properly tracks context across async boundaries
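
The compile-time point relies on tracing's static max-level feature flags. If that is wanted, it would be an optional Cargo.toml tweak along these lines (a sketch, not something this issue strictly requires):

[dependencies]
# release_max_level_info statically compiles trace!/debug! calls out of release builds.
tracing = { version = "0.1", features = ["release_max_level_info"] }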

Priority

Low - Nice to have for better debugging and observability, but not blocking for core functionality.
