Origin makes your compiler catch a problem that every AI system has right now.
When your model isn't sure about something — low confidence, out of its training domain, potentially hallucinating — that uncertainty usually lives in a float field that someone might forget to check. It reaches production silently. You find out three weeks later from user complaints.
Origin changes that. When a function returns an uncertain value, you can't use it without handling the uncertainty first. You can't forget. You can't .unwrap() your way past it.
But here's what makes it different from just using Result or try/except: when something goes wrong, Origin doesn't throw away what the model computed. It keeps it. So instead of "the model failed, here's an error code," you get "the model leaned toward pneumonia but wasn't confident enough to say so — here's what it knew when it stopped."
That's the difference between a system that says "I don't know" and a system that says "here's my full reasoning up to the point where I ran out of certainty."
Python — runtime enforcement:

```bash
pip install origin-lang
```

```python
from origin_lang import Value, boundary

@boundary
class LowConfidence:
    confidence: float
    threshold: float

# The value the model computed is still here
# even when confidence is too low to act on
match infer(text):
    case Value.Interior(diagnosis):
        treat(diagnosis)
    case Value.Boundary(LowConfidence(confidence=c), last=diagnosis):
        refer_to_specialist(diagnosis, c)
        #                   ^^^^^^^^^
        #                   still here
```

Rust — compile-time enforcement:

```toml
[dependencies]
origin-lang = "0.1"
```

```rust
#[boundary_check]
fn diagnose(input: &str) -> String {
    match infer(input) {
        Value::Interior(diagnosis) => use_it(diagnosis),
        Value::Boundary { reason: LowConfidence { .. }, last } => refer_to_specialist(last),
        //                                              ^^^^
        //                                              always here
    }
}
```

Python gets you the concept today. Rust gets you the compiler guarantee.
✓ [1] pharmacokinetics: Distribution { concentration: 7.14, half_life_hours: 4.0 }
✓ [2] dosage calculation: Dosage { amount_mg: 571.4, frequency_hours: 4.0 }
✓ [3] AI diagnosis: Diagnosis { condition: "ACS", confidence: 0.92 }
✗ [4] contraindication check: SafeDrug { name: "Aspirin", ... }
Three layers completed. The fourth hit a boundary. This is not a stack trace. A stack trace tells you where the code was. This is a reasoning trace. It tells you what the computation knew.
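A reasoning trace like the one above can be sketched as an ordinary fold that stops at the first boundary but keeps both the completed layers and the last-known value. The layer names and the tuple encoding below are illustrative, not the origin_lang API:

```python
def run_chain(x, steps):
    """Run layers in order; stop at the first boundary, keep the trace."""
    trace = []
    for name, fn in steps:
        tag, *rest = fn(x)
        if tag == "interior":
            (x,) = rest
            trace.append((name, "ok", x))
        else:                       # boundary: record what the layer last knew
            reason, last = rest
            trace.append((name, "boundary", last))
            return trace, reason
    return trace, None

steps = [
    ("pharmacokinetics", lambda d: ("interior", d * 2)),
    ("dosage",           lambda d: ("interior", d + 1)),
    ("AI diagnosis",     lambda d: ("interior", d)),
    # The fourth layer hits a boundary but still reports its last-known value:
    ("contraindication", lambda d: ("boundary", "interaction-risk", d)),
]
trace, reason = run_chain(3, steps)
```

Every entry in `trace` carries a value, including the one that stopped — that is what makes it a reasoning trace rather than a stack trace.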
| What survives a boundary | Python | Rust / Result<T,E> | Origin / Value<T,B> |
|---|---|---|---|
| That something went wrong | yes | yes | yes |
| Which boundary was crossed | sometimes | yes | yes |
| What the computation last knew | opt-in | impossible | guaranteed |
| Full reasoning chain | no | no | yes |
Python lets a good engineer add last_known to an exception class. The language has no opinion — both versions compile, both ship. Result<T,E> is Ok(T) | Err(E) — when you return Err, the T is dropped. That's not a style choice. Value<T,B> is Interior(T) | Boundary { reason: B, last: T } — the T is always there. The compiler guarantees it.
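The structural difference fits in a few lines of plain Python. The function names and tuple encodings below are illustrative:

```python
# Result-style: returning the error variant drops the computed value.
def diagnose_result(confidence, diagnosis):
    if confidence < 0.8:
        return ("err", "low confidence")                  # diagnosis is gone
    return ("ok", diagnosis)

# Value-style: the boundary variant always carries the last value.
def diagnose_value(confidence, diagnosis):
    if confidence < 0.8:
        return ("boundary", "low confidence", diagnosis)  # diagnosis survives
    return ("interior", diagnosis)
```

In the first function, losing the diagnosis is a structural consequence of the return type, not a bug — which is exactly the argument the table makes.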
This is not a performance claim. It is a type system fact. Verified across five industries, fifteen scenarios.
Origin matches the best-case Result in branch count while preserving what Result cannot carry — and beats real-world Result where domain error types differ.
Every model you've ever shipped had a confidence score that lived in a float field someone forgot to check.
Every language you've ever used had its own mechanism for this:
- C gave you `NULL`
- Java gave you `NullPointerException`
- JavaScript gave you `null`, `undefined`, and `NaN`
- IEEE 754 gave you quiet NaN propagation
- Rust gave you `Option<T>`
- Python gave you `None`
- SQL gave you three-valued logic
- Physics gave you renormalization
- Mathematics gave you "undefined"
Different mechanisms. Different languages. Different centuries. Same problem: the value crossed a boundary, and nobody told you.
These are all the same thing.
A value is either in safe territory — the interior — or it has crossed the edge of its domain — a boundary. That's it. One distinction.
Not presence vs absence. Not Some vs None. Position in the domain. Are you in safe territory, or are you at the edge where the rules change?
Division by zero isn't a missing value. The denominator exists — it's at the boundary. A model's low-confidence output isn't absent — it's at the boundary between knowledge and uncertainty. The speed of light isn't nothing — it's the boundary your velocity cannot cross.
One distinction. Every domain.
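Division makes the distinction concrete. A minimal sketch in plain Python — `safe_div` is an illustrative name, not part of any package:

```python
def safe_div(a, b):
    """Division by zero is a boundary, not an absence: the numerator
    exists and is worth keeping, so the boundary variant carries it."""
    if b == 0:
        return ("boundary", "division-by-zero", a)  # last-known value: a
    return ("interior", a / b)
```

Compare this with returning `None`: the caller of `safe_div` still knows what was being divided when the rules changed.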
Origin gives you the mechanism. You define what "boundary" means for your domain.
```rust
#[derive(Debug, Clone, BoundaryKind)]
pub enum InferBoundary {
    LowConfidence { confidence: f64, threshold: f64 },
    Hallucinated { claim: String, evidence_score: f64 },
    OutOfDomain { distance: f64 },
}
```

This is yours. A trading desk defines ExposureLimitExceeded. A power grid defines FrequencyDeviation. An AV stack defines SensorDegraded. Origin doesn't decide where the boundary is — your domain does. Origin makes sure the compiler enforces it.
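The same idea falls out of ordinary classes on the Python side. A sketch of a domain-defined boundary kind — `ExposureLimitExceeded` and the `check_exposure` helper are illustrative, not a published API:

```python
from dataclasses import dataclass

@dataclass
class ExposureLimitExceeded:     # a trading desk's boundary kind
    exposure: float
    limit: float

def check_exposure(exposure, limit=1_000_000.0):
    """The domain decides where the boundary is; the mechanism is generic."""
    if exposure > limit:
        # The offending value is preserved alongside the reason.
        return ("boundary", ExposureLimitExceeded(exposure, limit), exposure)
    return ("interior", exposure)
```

Swapping in `FrequencyDeviation` or `SensorDegraded` changes only the dataclass and the predicate; the interior/boundary mechanism stays the same.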
The program that won't compile:
```rust
#[boundary_check]
fn diagnose(input: &str) -> Diagnosis {
    infer(input).unwrap()
}
```

```text
error: boundary not handled: `.unwrap()` bypasses boundary checking

  A Value<T> may be at a boundary. Handle it explicitly:

      match value {
          Value::Interior(v) => /* safe to use */,
          Value::Boundary { reason, last } => /* handle each kind */,
      }

  or use `.or(fallback)` to provide a default value
```
And if you try to dodge it with a wildcard:
```rust
#[boundary_check]
fn diagnose(input: &str) -> String {
    match infer(input) {
        Value::Interior(d) => use_it(d),
        Value::Boundary { .. } => "uncertain".to_string(), // COMPILE ERROR
    }
}
```

```text
error: boundary kind ignored: wildcard `Boundary { .. }` catches all boundary kinds

  Each boundary kind exists for a reason. Handle them individually.
  A wildcard boundary arm is the new `.unwrap()` — it acknowledges
  the boundary exists and then ignores what it means.
```
The cost of carrying the reason and the residual is zero.
Value<T, B> and Option<T> run at the same speed on real inference (DistilBERT-SST-2, Apple Silicon, 512 inferences). Same enum representation. Same branch prediction. The additional fields in the boundary variant exist only when you're at the boundary. When you're interior, the cost is identical.
You are not paying for the information. You are getting it free.
Zero is simultaneously the additive identity and the boundary of division. That ambiguity — one value, two roles — is the same ambiguity in every system that handles undefined. NULL, None, NaN, renormalization, three-valued logic — 97 independent workarounds across four fields, all handling the same boundary, none of them unified. Origin resolves it at the type level.
The two-sorted arithmetic repo is the proof that the distinction is necessary. This repo is the proof that the distinction is enforceable. The Lean theorems prove you cannot have a total extension of bounded arithmetic without the two-sort split. The Rust compiler proves you cannot ship code that ignores the split without a compile error. The math forces the design. The design forces the code.
A soft philosophical problem — zero is multiple things, the boundary between knowledge and uncertainty has no name — turned into a hard compiler error.
| Directory | What it does |
|---|---|
| `src/` | The core: Value<T, B>, Chain, propagate(), trace(), built-in boundary kinds |
| `macros/` | Proc-macros: #[derive(BoundaryKind)] and #[boundary_check] |
| `python/` | Pure Python package (pip install origin-lang), same concept, runtime enforcement |
| `demo/` | Five industry cascades: healthcare, finance, energy, vehicles, legal |
| `comparison/` | Python vs Result vs Origin, structural argument table, branch count comparison |
| `transpiler/` | originc: Origin syntax to Rust source (tokenizer, parser, emitter) |
| `bench/` | Energy benchmark: DistilBERT-SST-2 via candle, Value vs Option |
| `spec/` | Type system specification derived from the Lean 4 proofs |
| `tests/` | Cross-crate proof: boundary kind in crate A, value in B, enforcement in C |
| `RESEARCH.md` | For AI safety researchers: Origin and the total function problem |
A wildcard boundary arm is the new .unwrap().