
ExplainError

A standalone structured judgement service that ingests error signals and emits classification, severity, confidence, evidence, and advisory action bias outputs.

ExplainError externalises uncertainty into explicit, machine-readable signals designed for high-accountability incident environments.


Why ExplainError Exists

During incidents, teams don’t struggle to see errors.

They struggle to decide:

  • Is this critical?
  • Is this noise?
  • Should we escalate?
  • Can we retry safely?
  • How confident are we in that judgement?

Most systems expose telemetry.

Very few expose structured judgement.

ExplainError formalises incident reasoning into explicit outputs.


Core Outputs

For each ingested error signal, ExplainError emits:

  • Classification (failure category)
  • Severity (impact signal)
  • Confidence Score (0–1 alignment signal)
  • Confidence Rationale (structured explanation)
  • Machine-Readable Evidence Markers
  • Advisory Action Bias

These outputs are designed to accelerate triage clarity while preserving human decision authority.
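As an illustration, a single judgement might be shaped like the sketch below. The field names, enum values, and example data are assumptions chosen for readability, not the service's published schema:

```ts
// Hypothetical shape of an ExplainError judgement payload.
// All field names and enum values here are illustrative assumptions,
// not the service's actual schema.
interface ErrorJudgement {
  classification: string;         // failure category, e.g. "dependency-timeout"
  severity: "low" | "medium" | "high" | "critical";  // impact signal
  confidence: number;             // 0–1 alignment signal
  confidenceRationale: string[];  // structured explanation of the score
  evidenceMarkers: string[];      // machine-readable pointers to supporting signals
  actionBias: "retry" | "escalate" | "observe";      // advisory only
}

const example: ErrorJudgement = {
  classification: "dependency-timeout",
  severity: "high",
  confidence: 0.82,
  confidenceRationale: [
    "timeout pattern repeated across consecutive requests",
    "upstream dependency degraded in the same window",
  ],
  evidenceMarkers: ["trace:abc123", "log-pattern:ETIMEDOUT"],
  actionBias: "escalate", // advisory; the human decision remains authoritative
};
```

Keeping the action bias advisory rather than executable is what separates a judgement layer from automated remediation.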


Positioning

ExplainError is:

  • Not an observability platform
  • Not log aggregation
  • Not automated remediation
  • Not escalation enforcement

It is an independent judgement layer.

It introduces explicit confidence modelling and evidence traceability into incident workflows.


Ecosystem Context

ExplainError forms the final layer of a structured incident intelligence initiative:

Incident Engineering Patterns
Conceptual frameworks for analysing the shape of production failures.

AWS Log Search Recipes – Preview
Tactical examples of disciplined log interrogation.

Structured Incident Calibration Pilot
Controlled evaluation of confidence alignment against historical severity outcomes.

Raw Signals → Structured Investigation Patterns → Formalised Judgement Signals

ExplainError implements the final stage.


Structured Incident Calibration Pilot

ExplainError is currently being evaluated through structured pilot engagements.

The pilot measures:

  • Alignment between predicted severity and actual post-incident severity
  • Confidence gradient behaviour
  • High-confidence misclassification rate
  • Low-confidence critical miss rate

The objective is to validate confidence signal integrity — not automate decisions.
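For illustration, here is a minimal sketch of how the last two rates could be computed from labelled pilot records. The record shape and the confidence thresholds (0.8 and 0.4) are assumptions, not the pilot's actual methodology:

```ts
// Minimal sketch: high-confidence misclassification rate and
// low-confidence critical miss rate over labelled pilot records.
// The record shape and thresholds are illustrative assumptions.
interface PilotRecord {
  predictedSeverity: string;  // severity emitted by ExplainError
  actualSeverity: string;     // post-incident reviewed severity
  confidence: number;         // 0–1 confidence score attached to the prediction
}

function pilotRates(records: PilotRecord[]) {
  const highConf = records.filter((r) => r.confidence >= 0.8);
  const lowConf = records.filter((r) => r.confidence < 0.4);

  // Confident predictions that turned out to be wrong.
  const highConfidenceMisclassificationRate =
    highConf.filter((r) => r.predictedSeverity !== r.actualSeverity).length /
    Math.max(highConf.length, 1);

  // Critical incidents the service was unsure about and failed to flag as critical.
  const lowConfidenceCriticalMissRate =
    lowConf.filter(
      (r) =>
        r.actualSeverity === "critical" && r.predictedSeverity !== "critical"
    ).length / Math.max(lowConf.length, 1);

  return { highConfidenceMisclassificationRate, lowConfidenceCriticalMissRate };
}
```

Severity alignment and confidence gradient behaviour would be measured analogously over the same labelled records.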

For regulated teams evaluating structured judgement signals in incident environments, controlled pilot discussions are available.


Live Demonstration

Interactive demo:
👉 https://bernalo-lab.github.io/explain-error/


Design Principle

Human judgement remains authoritative.

Structured signals accelerate clarity.

Automation is optional.
Transparency is not.
