# AI Hallucination Detector — Formally Verified Trust Scoring for LLM Outputs
Analyze AI-generated text for hallucination risk. No API keys needed. No LLM calls required. Fast, local, formally verified, and color-coded terminal output.
Published package: https://crates.io/crates/truthlens
API docs: https://docs.rs/truthlens
```bash
cargo install truthlens

# Analyze text directly
truthlens "Einstein invented the telephone in 1876."
# Trust: 49% [███████████████░░░░░░░░░░░░░░░] HIGH
# 🔴 Claim 1: 49% — specific verifiable claim — verify independently

# JSON output (for scripts/API integration)
truthlens --json "Python 4.0 has quantum computing support."

# Pipe from file or other commands
cat ai_response.txt | truthlens

# Pipe from clipboard (macOS)
pbpaste | truthlens

# Analyze ChatGPT/Claude output saved to file
curl -s "https://api.example.com/chat" | truthlens --json

# Compare multiple AI responses for contradictions
truthlens --consistency "response 1" "response 2" "response 3"

# Run built-in demo examples
truthlens --demo
```

```rust
use truthlens::analyze;

let report = analyze("Einstein was born in 1879 in Ulm, Germany.");
println!("Trust: {:.0}% — {}", report.score * 100.0, report.risk_level);
// Trust: 52% — HIGH

// Access per-claim breakdown
for claim in &report.claims {
    println!("  {} — {}", claim.text, claim.trust.risk_level);
}

// Access trajectory analysis
println!("Pattern: {}", report.trajectory.pattern);
println!("Damping: ζ≈{:.2}", report.trajectory.damping_estimate);

// JSON serialization
let json = serde_json::to_string_pretty(&report).unwrap();
```

Paste N responses to the same prompt — TruthLens detects contradictions between them.
```rust
use truthlens::check_consistency;

let report = check_consistency(&[
    "Einstein was born in 1879 in Ulm, Germany.",
    "Einstein was born in 1879 in Munich, Germany.", // ← contradiction
    "Einstein was born in 1879 in Ulm, Germany.",
]);

println!("Consistency: {:.0}%", report.consistency_score * 100.0);
// Consistency: 75%

// Contradictions detected
for c in &report.contradictions {
    println!("⚠️ {} vs {} — {}", c.claim_a, c.claim_b, c.conflict);
}
// ⚠️ "Ulm, Germany" vs "Munich, Germany"

// Claims unique to one response (potential hallucination)
for u in &report.unique_claims {
    println!("🔍 Unique to response {}: {}", u.response_idx, u.text);
}
```

```bash
# CLI: compare multiple responses as separate arguments
truthlens --consistency \
  "Einstein was born in 1879 in Ulm, Germany." \
  "Einstein was born in 1879 in Munich, Germany." \
  "Einstein was born in 1879 in Ulm, Germany."
# Consistency: 70% [█████████████████████░░░░░░░░░]
# ❌ Contradictions:
#   Response 1 vs 2: "Ulm, Germany" vs "Munich, Germany"
# ✅ Consistent claims:
#   3/3 agree: einstein was born in: 1879

# JSON output
truthlens --consistency --json "resp1" "resp2" "resp3"

# Pipe JSON array from stdin
echo '["Python was created in 1991.", "Python was created in 1989."]' \
  | truthlens --consistency
```

```bash
pip install truthlens
```

```python
from truthlens import analyze, check_consistency, extract_claims, extract_entities

# Analyze text for hallucination risk
report = analyze("Einstein was born in 1879 in Ulm, Germany.")
print(f"Trust: {report['score']:.0%} — {report['risk_level']}")

# Per-claim breakdown
for claim in report["claims"]:
    print(f"  {claim['text']} — {claim['trust']['risk_level']}")

# Multi-response consistency check
result = check_consistency([
    "Einstein was born in 1879 in Ulm.",
    "Einstein was born in 1879 in Munich.",
])
print(f"Consistency: {result['consistency_score']:.0%}")

# Extract atomic claims
claims = extract_claims("Python was created in 1991. It is widely used.")

# Extract named entities
entities = extract_entities("Marie Curie won the Nobel Prize in 1903.")
print(entities)  # ['1903', 'Marie Curie']
```

```bash
# Install from Snap Store (Ubuntu/Linux)
sudo snap install truthlens

# Analyze text
truthlens "Einstein invented the telephone in 1876."

# JSON output
truthlens --json "Python was created in 1991."

# Compare multiple AI responses
truthlens --consistency \
  "Einstein was born in Ulm." \
  "Einstein was born in Munich."

# Entity verification (requires network)
truthlens --verify "Marie Curie won the Nobel Prize in 1903."

# Run demo examples
truthlens --demo

# Show help
truthlens --help
```

Cross-reference named entities (people, places, dates) against Wikidata to boost or reduce trust scores.
```bash
# Install with verification support
cargo install truthlens --features verify

# Verify entities in a claim
truthlens --verify "Albert Einstein was born in 1879 in Ulm, Germany."
# Trust: 67% [████████████████████░░░░░░░░░░] MEDIUM
# 🌐 Verified: Albert Einstein (Q937) — birth year: 1879, birthplace: Ulm ✓

# Combine with JSON output
truthlens --verify --json "Marie Curie won the Nobel Prize in 1903."
```

Note: The `--verify` flag requires the `verify` feature (adds the `ureq` HTTP dependency). Without `--features verify`, TruthLens works fully offline with no network dependencies.

```toml
# Cargo.toml
[dependencies]
truthlens = "0.5"

# With entity verification
# truthlens = { version = "0.5", features = ["verify"] }
```

TruthLens decomposes AI text into atomic claims and scores each for hallucination risk using linguistic signals — no LLM calls, no API keys, no external dependencies.

```text
Input:  "Python 4.0 was released in December 2025 with native quantum computing support."
Output: 🔴 Trust: 49% [HIGH]
        → specific verifiable claim — verify independently
        → overconfident language without hedging
```

Text → atomic sentences → each is an independent claim to evaluate.
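The decomposition step above can be sketched as plain sentence splitting. This is a simplified illustration only, not the crate's actual extractor (`claim.rs`), and the function name `split_claims` is hypothetical:

```rust
// Simplified sketch of claim decomposition: split on sentence
// terminators and keep non-empty trimmed sentences. Illustrative
// only — not the extractor implemented in claim.rs.
fn split_claims(text: &str) -> Vec<String> {
    text.split_inclusive(|c: char| matches!(c, '.' | '!' | '?'))
        .map(str::trim)
        .filter(|s| !s.is_empty())
        .map(String::from)
        .collect()
}

fn main() {
    let claims = split_claims("Python was created in 1991. It is widely used.");
    assert_eq!(claims, ["Python was created in 1991.", "It is widely used."]);
    println!("{} claims", claims.len());
}
```

Each resulting sentence is then scored independently by the signals below.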
| Signal | What It Measures | Weight |
|---|---|---|
| Confidence | Overconfident language without hedging (hallucination red flag) | 35% |
| Hedging | Uncertainty markers ("might", "possibly") — correlates with lower hallucination | 25% |
| Specificity | How concrete/verifiable the claim is (numbers, names, dates) | 20% |
| Verifiability | Whether the claim contains fact-checkable entities | 15% |
| Consistency | Multi-sample agreement (optional, requires LLM) | 5% |
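The aggregation amounts to a weighted sum of the five signals using the table's weights. A minimal sketch, assuming each signal is already normalized to [0, 1] (the `Signals` struct and field names here are hypothetical, not the crate's API):

```rust
// Illustrative weighted aggregation using the weights from the
// table above. Struct and field names are hypothetical.
struct Signals {
    confidence: f64,
    hedging: f64,
    specificity: f64,
    verifiability: f64,
    consistency: f64, // assumed neutral (0.5) when no multi-sample data exists
}

fn trust_score(s: &Signals) -> f64 {
    let raw = 0.35 * s.confidence
        + 0.25 * s.hedging
        + 0.20 * s.specificity
        + 0.15 * s.verifiability
        + 0.05 * s.consistency;
    raw.clamp(0.0, 1.0) // the Lean proofs guarantee the result stays in range
}

fn main() {
    // Signal values taken from the sample JSON output later in this README.
    let s = Signals {
        confidence: 0.5,
        hedging: 0.5,
        specificity: 0.3,
        verifiability: 0.7,
        consistency: 0.5,
    };
    println!("{:.2}", trust_score(&s)); // prints 0.49
}
```

With those example signal values the weighted sum lands on 0.49, consistent with the sample JSON report (assuming a neutral consistency default).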
Signals are aggregated into a single trust score in [0.0, 1.0]:
| Score | Risk Level | Meaning |
|---|---|---|
| 0.75–1.0 | ✅ LOW | Likely factual or appropriately hedged |
| 0.55–0.74 | ⚠️ MEDIUM | Some uncertain claims, verify key facts |
| 0.35–0.54 | 🔴 HIGH | Multiple suspicious claims, verify everything |
| 0.0–0.34 | 🚨 CRITICAL | Likely contains hallucinations |
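The score-to-risk mapping in the table above reduces to a threshold match. A sketch (the function name is illustrative):

```rust
// Illustrative mapping from trust score to risk level, mirroring
// the thresholds in the table above.
fn risk_level(score: f64) -> &'static str {
    match score {
        s if s >= 0.75 => "LOW",
        s if s >= 0.55 => "MEDIUM",
        s if s >= 0.35 => "HIGH",
        _ => "CRITICAL",
    }
}

fn main() {
    assert_eq!(risk_level(0.49), "HIGH"); // the Einstein/telephone example
    assert_eq!(risk_level(0.80), "LOW");
    println!("ok");
}
```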
Passage score = 70% average + 30% worst claim. One bad claim drags down the whole passage.
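That aggregation rule can be sketched directly (assuming a non-empty list of per-claim scores; the function name is illustrative):

```rust
// Passage score = 70% of the mean claim score + 30% of the worst
// claim score, so a single bad claim pulls the whole passage down.
fn passage_score(claim_scores: &[f64]) -> f64 {
    let avg = claim_scores.iter().sum::<f64>() / claim_scores.len() as f64;
    let worst = claim_scores.iter().cloned().fold(f64::INFINITY, f64::min);
    0.7 * avg + 0.3 * worst
}

fn main() {
    // Two solid claims and one bad one: the mean is 0.60, but the
    // worst-claim term drags the passage score down to 0.48.
    let score = passage_score(&[0.8, 0.8, 0.2]);
    println!("{:.2}", score); // prints 0.48
}
```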
- No LLM required — linguistic analysis only. Fast (microseconds), private (local), free.
- Hedging = good — unlike most "confidence detectors", we score hedged claims HIGHER. A model that says "might" is better calibrated than one that states falsehoods with certainty.
- Specificity is double-edged — specific claims are more useful but also more damaging if wrong. We flag them for independent verification.
- Formally verified — Lean 4 proofs guarantee score bounds, monotonicity, and composition properties.
**ScoreBounds**
- `signal_nonneg` — all signals ≥ 0
- `weighted_contrib_bounded` — w·s ≤ w·max when s ≤ max
- `clamped_score_in_range` — final score ∈ [0, 100] after clamp
- `truthlens_weights_sum` — weights sum to 100%

**Monotonicity**
- `signal_increase_improves_score` — improving a signal improves the score
- `total_score_improves` — better signal + same rest = better total
- `good_claim_improves_passage` — adding a good claim raises the average

**Composition**
- `passage_score_bounded` — 70%·avg + 30%·min ≤ 100%·max
- `passage_at_least_worst` — passage score ≥ 30% of worst claim
- `score_order_independent` — claim order doesn't affect passage score
- `score_deterministic` — same inputs → same output (functional purity)

**Trajectory**
- `adjusted_score_bounded` — score + modifier stays bounded after clamp
- `transitions_bounded` — direction changes ≤ n_claims − 2
- `damping_positive` — damping estimate is always positive (stable system)
- `penalty_still_nonneg` — score after penalty ≥ 0 after clamp

**Consistency**
- `consistency_bounded` — consistency score ∈ [0, 100] after clamp
- `contradictions_bounded` — contradiction count ≤ comparison pairs
- `agreement_ratio_valid` — agreement ≤ total responses
- `agreeing_response_improves` — adding agreement increases count
- `contradiction_symmetric` — if A contradicts B, B contradicts A
- `unique_bounded` — unique claims ≤ total claims

**Verification**
- `verification_modifier_bounded` — modifier ∈ [0, 15] (scaled) after clamp
- `combined_modifier_bounded` — combined modifier ∈ [−15, +15]
- `adjusted_score_with_verification` — score + verification modifier stays in [0, 100]
- `adjusted_score_with_both` — score + trajectory + verification modifier stays in [0, 100]
- `entity_partition` — verified + contradicted + unknown = total
- `verified_contradicted_disjoint` — verified + contradicted ≤ total
- `empty_verification_neutral` — no entities → zero modifier
- `all_verified_max` — all verified → maximum positive modifier
- `all_contradicted_max` — all contradicted → maximum negative modifier
- `more_verified_improves` — adding a verified entity increases the modifier (monotonic)
- `more_contradicted_worsens` — adding a contradicted entity decreases the modifier (monotonic)
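The `adjusted_score_*` properties all come down to clamped addition: however the trajectory and verification modifiers combine, the result is clamped back into range. A sketch on the proofs' 0–100 scale (the function name is illustrative):

```rust
// Sketch of bounded score adjustment: whatever the trajectory and
// verification modifiers contribute, the final score is clamped to
// [0, 100], as the adjusted_score_with_both property states.
fn adjusted_score(score: f64, trajectory_mod: f64, verification_mod: f64) -> f64 {
    (score + trajectory_mod + verification_mod).clamp(0.0, 100.0)
}

fn main() {
    assert_eq!(adjusted_score(95.0, 10.0, 15.0), 100.0); // clamped at the top
    assert_eq!(adjusted_score(5.0, -10.0, -15.0), 0.0);  // clamped at the bottom
    assert_eq!(adjusted_score(50.0, 5.0, -10.0), 45.0);  // in-range values pass through
    println!("ok");
}
```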
"Albert Einstein was born on March 14, 1879, in Ulm, Germany."
→ 🔴 52% HIGH — specific verifiable claim, verify independently
"Climate change might be linked to increased hurricane frequency.
Some researchers believe ocean temperatures could affect storm intensity.
It is possible that sea levels will rise over the next century."
→ ✅ 60% LOW — Trajectory: FLAT LOW (consistently cautious), trust bonus +10%
"Climate change might be linked to increased hurricane frequency."
→ ⚠️ 65% MEDIUM — appropriately hedged
"The Great Wall is exactly 21,196.18 kilometers long."
→ 🔴 52% HIGH — overconfident without hedging; highly specific
"Various factors contribute to the situation."
→ 🔴 40% HIGH — vague claim with low specificity
```json
{
  "score": 0.49,
  "risk_level": "High",
  "summary": "1 claims analyzed. 1 high-risk claims detected.",
  "claims": [
    {
      "text": "Einstein invented the telephone in 1876.",
      "trust": {
        "score": 0.49,
        "signals": {
          "confidence": 0.5,
          "specificity": 0.3,
          "hedging": 0.5,
          "verifiability": 0.7,
          "consistency": null
        },
        "risk_level": "High"
      }
    }
  ]
}
```

```text
truthlens/
├── .github/                    # Automation and release workflows
│   └── workflows/
│       ├── pypi-publish.yml
│       ├── python-ci.yml
│       ├── release.yml
│       └── rust-ci.yml
├── rust/                       # Core library + CLI
│   ├── src/
│   │   ├── lib.rs              # Public API: analyze(), check_consistency(), extract_*()
│   │   ├── claim.rs            # Claim extraction + linguistic analysis
│   │   ├── scorer.rs           # Trust scoring + signal aggregation
│   │   ├── trajectory.rs       # Confidence trajectory analysis (v0.2)
│   │   ├── consistency.rs      # Multi-response consistency checker (v0.3)
│   │   ├── entity.rs           # Entity cross-reference with Wikidata (v0.4)
│   │   └── main.rs             # CLI: analyze, --consistency, --verify, --demo
│   ├── tests/
│   │   └── integration.rs      # End-to-end integration tests
│   └── Cargo.toml
├── python/                     # Python bindings (v0.5)
│   ├── src/lib.rs              # PyO3 wrapper
│   ├── truthlens/              # Python package
│   │   ├── __init__.py         # Re-exports + docstrings
│   │   ├── __init__.pyi        # Type stubs (PEP 561)
│   │   └── py.typed            # PEP 561 marker
│   ├── tests/
│   │   └── test_truthlens.py   # Python test suite
│   ├── Cargo.toml              # cdylib crate
│   └── pyproject.toml          # maturin build config
├── lean/                       # Formal proofs
│   ├── TruthLens/
│   │   ├── ScoreBounds.lean    # Score ∈ [0, 1], weight sum, clamp
│   │   ├── Monotonicity.lean   # Better signals → better score
│   │   ├── Composition.lean    # Passage aggregation properties
│   │   ├── Trajectory.lean     # Trajectory modifier bounds + correctness
│   │   ├── Consistency.lean    # Contradiction bounds, agreement, symmetry
│   │   └── Verification.lean   # Entity verification modifier bounds (v0.4)
│   └── lakefile.lean
├── snap/                       # Snap package config (v0.5)
│   ├── gui/
│   │   └── truthlens.png       # Snap store icon
│   └── snapcraft.yaml
├── bridge/                     # Lean ↔ Rust proof/runtime mapping
└── README.md
```
```bash
# Rust (default — no network dependencies)
cd rust
cargo test                    # unit + doc tests
cargo test --features verify  # includes entity verification tests

# Python bindings
cd python
pip install maturin pytest
maturin develop               # build + install locally
pytest tests/ -v              # run Python tests

# Lean
cd lean
lake build                    # 6 proof modules, zero sorry
```

- v0.1 — Linguistic analysis: claim extraction, hedging detection, specificity scoring
- v0.2 — Confidence trajectory: detects oscillating, flat, or convergent confidence patterns using second-order dynamical system modeling
- v0.3 — Multi-response consistency, CLI (`cargo install truthlens`), colored output
- v0.4 — Entity cross-reference: verify extracted entities against Wikidata SPARQL (optional `verify` feature flag)
- v0.5 — Python bindings (PyO3) — `pip install truthlens`, Snap package
- v0.6 — Claude Code / MCP integration: local stdio MCP server, `analyze_text` + `analyze_file` tools, auto-checks AI text claims in-context
- v0.7 — VS Code extension: analyze selection/file, inline diagnostics for docs/comments/markdown, status bar trust score
- v0.8 — CI/CD integration: GitHub Action, fail builds on low trust score, policy thresholds (`--min-score`)
- v0.9 — Browser extension: highlight claims in ChatGPT/Claude UI with inline trust indicators
- v1.0 — TruthLens Platform: unified trust layer across CLI, VS Code, MCP, and CI pipelines with policy enforcement and fully local execution
- v2.0 — Enterprise Trust System: policy engine, dashboard, audit & compliance reporting, enterprise API, team governance
- Zero API calls by default — every version works offline, locally, for free
- Formally verified — Lean 4 proofs for all scoring properties
- Hedging = trustworthy — a model that says "might" is more honest than one stating falsehoods with certainty
- Fast — microsecond analysis, no model inference required
Every existing hallucination detector either requires multiple LLM API calls (expensive, slow) or access to model logprobs (grey-box only). TruthLens works on any AI output with zero API calls — you paste text, you get a trust score. And the scoring properties are formally proven in Lean 4, which nobody else does.
Apache-2.0