Optimize hot-path event parsing and metric update efficiency #36

@darthfork

Description

Summary

The exporter does more allocation and parsing work than necessary on the hot webhook path. Request bodies are fully read into memory, entire event payloads are unmarshaled even when only a small subset of fields is used, and metric updates allocate label maps repeatedly.

This issue tracks a performance-oriented cleanup of the event processing path.

Why this matters

  • High webhook throughput amplifies small per-request inefficiencies.
  • Avoidable allocations increase GC pressure.
  • Default duration histogram buckets are not tuned for CI workflows and jobs.
  • Body size is not capped before processing.

Goals

  • Reduce allocations and unnecessary parsing.
  • Bound memory usage per request.
  • Tune duration metrics for CI-like workloads.
  • Keep the hot path simple and efficient.

Suggested scope

  • Cap webhook body size.
  • Use minimal payload structs or more targeted decoding.
  • Replace label-map based metric updates with lower-allocation alternatives.
  • Tune histogram buckets for workflow/job durations.

Child issues

  • Add request body size limits and safer request handling.
  • Reduce JSON parsing overhead with minimal event structs.
  • Reduce allocation churn in metric updates.
  • Tune workflow/job duration histograms for CI durations.

Acceptance criteria

  • The hot path avoids unbounded body reads.
  • Metric update allocations are reduced.
  • Decoding work is proportional to the fields actually consumed.
  • Duration metrics are tuned for real-world CI timing ranges.
