jmg049/explainable

explainable

A zero-overhead educational layer for Rust libraries.

explainable lets a domain crate give its users a step-by-step, pedagogical view of an operation chain --- text explanations, before/after visuals, or both --- without touching the hot path, changing any existing call site, or adding any runtime cost unless the feature is explicitly invoked.

cargo add explainable

Motivation

High-performance Rust libraries are engineered to run correctly and efficiently, but they typically offer no in-library mechanism for a user to understand what a given operation does, what its intermediate steps are, or what the data looks like before and after. This gap is the pedagogical problem explainable addresses.

The goal is a mechanism that:

  • Lives entirely within the Rust core --- no separate tutorial crate, no external documentation that can rot
  • Adds a single entry point to any type --- .explaining(ExplainMode) --- and nothing more
  • Leaves every existing call site valid and every existing operation completely untouched
  • Imposes negligible overhead --- the hot path is unaffected; explanation machinery is never exercised unless explicitly invoked
  • Scales to any crate that opts in with four lines of code

Workspace layout

explainable/
├── Cargo.toml                  ← workspace root + the explainable library crate
├── src/
│   └── lib.rs                  ← public traits, types, and macro re-export
└── explainable-macros/
    ├── Cargo.toml              ← proc-macro = true
    └── src/
        └── lib.rs              ← #[explainable] attribute macro implementation

explainable-macros is an implementation detail. Depend on explainable and use explainable::explainable --- do not depend on explainable-macros directly.


User-facing API

The complete change to how a user interacts with a participating crate is one additional call to open the chain. Everything else is identical.

Normal use --- unchanged:

audio.normalize();
audio.scale(0.5);
audio.trim(100, 200);

Educational use:

use audio_samples::AudioProcessingExt; // extension trait generated by the macro

let (result, _explanations) = audio
    .explaining(ExplainMode::Both)
    .normalize()
    .scale(0.5)
    .trim(100, 200)
    .explain();

.explain() at the end of the chain:

  • Surfaces all accumulated explanations --- text to terminal, visuals via the domain crate's renderer
  • Returns (final_value, Vec<Explanation>) --- the full explanation record for later inspection

ExplainMode variants:

audio.explaining(ExplainMode::Text)    // pedagogical text only
audio.explaining(ExplainMode::Visual)  // before/after plot only
audio.explaining(ExplainMode::Both)    // text and visual
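Taken together, the API above suggests roughly these shapes for the core types. This is a sketch only: `inner` and `explanations` appear in the generated code shown later, but `mode` and the fields of `Explanation` are assumptions; the crate's real definitions may differ.

```rust
// Sketch of the core types as described; not the crate's actual source.
#[derive(Clone, Copy)]
pub enum ExplainMode {
    Text,   // pedagogical text only
    Visual, // before/after plot only
    Both,   // text and visual
}

pub struct Explanation {
    pub text: Option<String>, // assumed field: present for Text/Both
    // a visual surface (Box<dyn ExplainDisplay>) would sit alongside
}

pub struct Explaining<T> {
    inner: T,                       // the value the chain operates on
    mode: ExplainMode,              // assumed field: the requested mode
    explanations: Vec<Explanation>, // one entry pushed per operation
}

impl<T> Explaining<T> {
    /// End the chain: return the final value and the explanation record.
    pub fn explain(self) -> (T, Vec<Explanation>) {
        (self.inner, self.explanations)
    }
}
```

The tuple return matches the `(result, _explanations)` destructuring shown in the educational-use example above.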

How a domain crate opts in

Four steps. The existing implementation is never touched.

1. Add the dependency

[dependencies]
explainable = { version = "0.1.0" }

2. Annotate your operation trait with #[explainable]

use explainable::explainable;

#[explainable]
pub trait AudioProcessing {
    fn normalize(&self) -> Result<AudioSamples, AudioError>;
    fn scale(&self, factor: f64) -> Result<AudioSamples, AudioError>;
    fn trim(&self, start: usize, end: usize) -> Result<AudioSamples, AudioError>;
}

The macro leaves the trait itself completely unchanged. It generates three additional items (see What the macro generates).

3. Implement the two explainable traits

use explainable::{ExplainDisplay, RenderVisual, Explainable};

// Rendering surface --- owns the plotting/display logic
struct AudioSamplesVisual { html: String }

impl ExplainDisplay for AudioSamplesVisual {
    fn display(&self) {
        open_in_browser(&self.html); // existing infrastructure
    }
}

// Produce a before/after visual for any operation
impl RenderVisual for AudioSamples {
    fn render_visual(before: &Self, after: &Self) -> Box<dyn ExplainDisplay> {
        Box::new(AudioSamplesVisual {
            html: plot_before_after(before, after),
        })
    }
}

// One line to opt the type into the system
impl Explainable for AudioSamples {}

4. Implement the companion text trait

The macro generates a <TraitName>ExplainText trait with one method per operation. Each method receives the before and after state so that real runtime values can be woven into the explanation:

impl AudioProcessingExplainText for AudioSamples {
    fn explain_text_normalize(before: &Self, after: &Self) -> String {
        format!(
            "Normalization scales every sample so the peak absolute value \
             becomes 1.0. Your signal had a peak of {:.4}, so every sample \
             was divided by that value, mapping your range to [-1.0, 1.0].",
            before.peak()
        )
    }

    fn explain_text_scale(before: &Self, after: &Self) -> String {
        format!(
            "Scaling multiplies every sample by a constant factor. \
             Your peak went from {:.4} to {:.4}.",
            before.peak(),
            after.peak()
        )
    }

    fn explain_text_trim(before: &Self, after: &Self) -> String {
        format!(
            "Trim discards samples outside the requested window. \
             Length went from {} to {} samples.",
            before.len(),
            after.len()
        )
    }
}

What the macro generates

Given #[explainable] on a trait Foo, three items are emitted alongside the unmodified original trait.

FooExplainText --- companion text trait

pub trait FooExplainText: Explainable + Foo {
    fn explain_text_some_op(before: &Self, after: &Self) -> String;
    // one method per operation
}

Implemented by the domain crate author to supply pedagogical text. The design constraint is that explanations use real runtime values from both before and after --- neither pure abstraction ("normalization divides by the peak") nor pure reflection ("peak was 0.87"), but both together.

FooExt --- extension trait for Explaining<T>

pub trait FooExt {
    fn some_op(&mut self, /* original params */) -> &mut Self;
    // one method per operation
}

Bring this into scope to call operations on an explaining chain. The extension trait pattern is required because Explaining<T> is defined in explainable --- adding inherent methods to a foreign type from a downstream crate would violate the orphan rule. A locally-defined extension trait is the canonical Rust solution.
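The pattern in miniature, with stand-in names rather than the crate's generated code: a downstream crate cannot add inherent methods to a wrapper type defined upstream, but it can define its own trait and implement that local trait for the foreign type.

```rust
// Stand-in for the upstream wrapper type (imagine it lives in another crate).
pub struct Explaining<T> {
    pub inner: T,
}

// The downstream crate's locally-defined extension trait.
pub trait AudioExt {
    fn scale(&mut self, factor: f64) -> &mut Self;
}

// Legal: the trait is local, so the orphan rule is satisfied even though
// the implementing type is (conceptually) foreign.
impl AudioExt for Explaining<f64> {
    fn scale(&mut self, factor: f64) -> &mut Self {
        self.inner *= factor;
        self
    }
}
```

Callers only need to `use` the extension trait to make `scale` callable on `Explaining<f64>`, which is exactly why the generated `FooExt` must be brought into scope.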

Blanket impl FooExt for Explaining<T>

impl<T: Explainable + Foo + FooExplainText> FooExt for Explaining<T> {
    fn some_op(&mut self, /* params */) -> &mut Self {
        let before = self.inner.clone();
        self.inner = self.inner.some_op(/* params */); // calls the real operation
        // builds Explanation from FooExplainText + RenderVisual
        // pushes onto self.explanations
        self
    }
}

Each generated method:

  1. Clones inner as before
  2. Calls through to the real, unmodified operation
  3. Builds an Explanation --- text from FooExplainText, visual from RenderVisual --- conditional on the active ExplainMode
  4. Pushes the Explanation onto self.explanations
  5. Returns &mut Self for method chaining

New operations added to the annotated trait automatically get explaining variants --- zero maintenance.

Return-type convention

For the generic self.inner = self.inner.method(...) assignment to compile, trait methods should return Self, a Result<Self, E>, or a type alias whose name ends in "Result". Result-returning methods are unwrapped automatically. Methods that change the output type (e.g. an FFT returning a Spectrogram) are not yet handled and produce a compiler error at the use site.
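Concretely, both accepted return shapes look like this. The trait and type here are illustrative stand-ins, not part of the crate:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Samples(Vec<f64>);

trait Ops: Sized {
    fn normalize(&self) -> Self;                                      // returns Self: assigned directly
    fn trim(&self, start: usize, end: usize) -> Result<Self, String>; // Result<Self, E>: unwrapped automatically
    // fn fft(&self) -> Spectrogram;                                  // changes the output type: not yet handled
}

impl Ops for Samples {
    fn normalize(&self) -> Self {
        // Divide every sample by the peak absolute value.
        let peak = self.0.iter().fold(0.0_f64, |m, x| m.max(x.abs()));
        if peak == 0.0 {
            return self.clone();
        }
        Samples(self.0.iter().map(|x| x / peak).collect())
    }

    fn trim(&self, start: usize, end: usize) -> Result<Self, String> {
        self.0
            .get(start..end)
            .map(|s| Samples(s.to_vec()))
            .ok_or_else(|| format!("range {start}..{end} out of bounds"))
    }
}
```

Under the convention, the generated wrapper can assign `self.inner = self.inner.normalize()` directly and `self.inner = self.inner.trim(a, b)?`-style results after unwrapping, without knowing anything else about the domain type.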


Architecture

┌─────────────────────────────────────────────┐
│  explainable                                │
│                                             │
│  ExplainMode     (enum)                     │
│  ExplainDisplay  (trait --- opaque surface) │
│  RenderVisual    (trait --- domain renders) │
│  Explainable     (marker trait)             │
│  Explanation     (struct --- one per op)    │
│  Explaining<T>   (struct --- the chain)     │
│  #[explainable]  (proc-macro re-export)     │
└────────────────┬────────────────────────────┘
                 │  implements
                 ▼
┌─────────────────────────────────────────────┐
│  audio_samples (or any domain crate)        │
│                                             │
│  impl RenderVisual for AudioSamples { … }   │
│  impl Explainable  for AudioSamples {}      │
│                                             │
│  #[explainable]                             │
│  trait AudioProcessing { … }                │
│                                             │
│  impl AudioProcessingExplainText            │
│      for AudioSamples { … }                 │
└─────────────────────────────────────────────┘

explainable has no dependencies on any domain-specific library. It defines the interfaces; domain crates own their rendering infrastructure entirely.


Open problems

| #   | Problem | Status |
|-----|---------|--------|
| 8.1 | ExplainMode matching --- currently per-method; could be pushed into a construction helper for uniformity | Open |
| 8.2 | Macro handling of pedagogically significant parameters (scale(factor), trim(start, end)) --- parameters are passed through correctly but not yet surfaced in generated explanation text | Open |
| 8.3 | Operations that change the output type (FFT → Spectrogram) --- wrapper type must transition; mechanism unresolved | Open |
| 8.4 | Feature flagging --- whether the system compiles away under #[cfg(feature = "educational")] needs working through with the macro | Open |

License

MIT

Contributing

TODO
