A standalone structured judgement service that ingests error signals and emits classification, severity, confidence, evidence, and advisory action bias outputs.
ExplainError externalises uncertainty into explicit, machine-readable signals designed for high-accountability incident environments.
During incidents, teams don’t struggle to see errors.
They struggle to decide:
- Is this critical?
- Is this noise?
- Should we escalate?
- Can we retry safely?
- How confident are we in that judgement?
Most systems expose telemetry.
Very few expose structured judgement.
ExplainError formalises incident reasoning into explicit outputs.
For each ingested error signal, ExplainError emits:
- Classification (failure category)
- Severity (impact signal)
- Confidence Score (0–1 alignment signal)
- Confidence Rationale (structured explanation)
- Machine-Readable Evidence Markers
- Advisory Action Bias
These outputs are designed to accelerate triage clarity while preserving human decision authority.
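As an illustration of what these outputs might look like together, here is a minimal sketch; the field names, enumerations, and example values are assumptions chosen for readability, not the actual ExplainError schema.

```typescript
// Hypothetical shape of a single ExplainError judgement output.
// Field names and values are illustrative assumptions, not the real schema.
interface ErrorJudgement {
  classification: string;        // failure category, e.g. "downstream-dependency-timeout"
  severity: "low" | "medium" | "high" | "critical";  // impact signal
  confidence: number;            // 0–1 alignment signal
  confidenceRationale: string;   // structured explanation of the score
  evidence: EvidenceMarker[];    // machine-readable evidence markers
  actionBias: "retry" | "escalate" | "observe";      // advisory only, never enforced
}

interface EvidenceMarker {
  source: string;     // where the supporting signal came from
  reference: string;  // pointer back to the raw signal (log line, trace id, ...)
}

// Illustrative example of one emitted judgement.
const example: ErrorJudgement = {
  classification: "downstream-dependency-timeout",
  severity: "high",
  confidence: 0.82,
  confidenceRationale: "Timeout pattern matches prior incidents with high agreement.",
  evidence: [{ source: "application-log", reference: "trace-abc123" }],
  actionBias: "escalate",
};
```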
ExplainError is:
- Not an observability platform
- Not log aggregation
- Not automated remediation
- Not escalation enforcement
It is an independent judgement layer.
It introduces explicit confidence modelling and evidence traceability into incident workflows.
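For intuition about where that layer sits, here is a sketch of how an incident workflow could request a judgement; the endpoint URL, payload fields, and client code are hypothetical, not a documented ExplainError API.

```typescript
// Hypothetical client call: submit an error signal, receive a structured judgement.
// The endpoint URL and field names below are illustrative assumptions.
type Judgement = {
  classification: string;
  severity: string;
  confidence: number;  // 0–1
  actionBias: string;  // advisory only
};

async function requestJudgement(signal: { service: string; message: string }): Promise<Judgement> {
  const response = await fetch("https://explain-error.example.internal/judgements", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(signal),
  });
  // The judgement is advisory: the caller (human or runbook) retains decision authority.
  return (await response.json()) as Judgement;
}
```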
ExplainError forms the final layer of a structured incident intelligence initiative:
• Incident Engineering Patterns
Conceptual frameworks for analysing production failure shape.
• AWS Log Search Recipes – Preview
Tactical examples of disciplined log interrogation.
• Structured Incident Calibration Pilot
Controlled evaluation of confidence alignment against historical severity outcomes.
Raw Signals
→ Structured Investigation Patterns
→ Formalised Judgement Signals
ExplainError implements the final stage.
ExplainError is currently being evaluated through structured pilot engagements.
The pilot measures:
- Alignment between predicted severity and actual post-incident severity
- Confidence gradient behaviour
- High-confidence misclassification rate
- Low-confidence critical miss rate
The objective is to validate confidence signal integrity — not automate decisions.
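For concreteness, here is a minimal sketch of how two of these measures could be computed from pilot records; the record shape and confidence thresholds are assumptions, not the pilot's actual methodology.

```typescript
// Hypothetical pilot record: the judgement the service emitted, plus the
// severity assigned after the incident was resolved. Field names are assumed.
type PilotRecord = {
  predictedSeverity: string;
  actualSeverity: string;
  confidence: number;  // 0–1
};

// High-confidence misclassification rate: of the judgements emitted with high
// confidence, how many disagreed with the post-incident severity?
function highConfidenceMisclassificationRate(records: PilotRecord[], threshold = 0.8): number {
  const highConf = records.filter(r => r.confidence >= threshold);
  if (highConf.length === 0) return 0;
  const wrong = highConf.filter(r => r.predictedSeverity !== r.actualSeverity);
  return wrong.length / highConf.length;
}

// Low-confidence critical miss rate: of the incidents that turned out critical,
// how many were judged with low confidence?
function lowConfidenceCriticalMissRate(records: PilotRecord[], threshold = 0.5): number {
  const critical = records.filter(r => r.actualSeverity === "critical");
  if (critical.length === 0) return 0;
  const missed = critical.filter(r => r.confidence < threshold);
  return missed.length / critical.length;
}
```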
For regulated teams evaluating structured judgement signals in incident environments, controlled pilot discussions are available.
Interactive demo:
👉 https://bernalo-lab.github.io/explain-error/
Human judgement remains authoritative.
Structured signals accelerate clarity.
Automation is optional.
Transparency is not.