An evidence-first knowledge app that replaces pages with checkable claims linked to primary sources. See confidence scores, supporting/refuting evidence, and timelines. Reviewers verify via checklists; every edit is transparent and auditable. Bias-resistant, calibrated, and API-ready.
Traditional wikis collapse disagreements into a single narrative. Truthmesh is built around claims and evidence, so readers can see what’s known, how confidently, and why—without burying disputes on talk pages.
- Claim: Atomic, checkable sentence with structured scope (time/place/metric).
- Evidence: Citations with exact quotes/locators and an evidence tier (A/B/C).
- Confidence: Calibrated probability derived from evidence strength + reviewer votes.
- Review: Human checklist + rationale; transparent provenance and COI.
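These concepts map onto a small, regular data model. A minimal sketch (hypothetical; field names mirror the `/claims` API payload shown further down, not the actual codebase):

```python
from dataclasses import dataclass, field
from typing import Literal, Optional

Tier = Literal["A", "B", "C"]              # evidence ladder
Relation = Literal["supports", "refutes"]  # how a piece of evidence bears on a claim

@dataclass
class Evidence:
    source_id: str   # e.g. "s_10k_2024"
    relation: Relation
    quote: str       # exact quote from the source
    locator: dict    # e.g. {"section": "Item 7", "page": 44}
    tier: Tier

@dataclass
class Claim:
    entity_id: str   # entity the claim is about
    text: str        # atomic, checkable sentence
    claim_type: str  # e.g. "numeric"
    scope: dict      # structured scope: time/place/metric
    evidence: list[Evidence] = field(default_factory=list)
    confidence: Optional[float] = None  # calibrated probability, filled in by scoring + review
```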
- Claim cards with confidence dials and side‑by‑side supporting/refuting evidence
- Evidence ladder (Tier A/B/C) + independence grouping
- Timeline view per entity/topic
- Reviewer console with quote highlighting & hard checks (locator, hash, COI)
- Append‑only edit log and nightly NDJSON exports
- REST API and OpenAPI docs
Default MVP stack (adjust to your repo):
- API: FastAPI (Python 3.11)
- DB: PostgreSQL 15
- Workers: Celery + Redis (source fetch, hashing, duplicate detection)
- Frontend: Next.js/React (optional for MVP)
- Object Storage: S3‑compatible (for snapshots/archives)
- Auth: Session/JWT (pluggable)
Repo layout:
```
/api         # FastAPI app
/workers     # Celery tasks
/web         # Next.js frontend (optional)
/migrations  # Alembic
/scripts     # CLI utilities (seed/export)
/docker      # Compose files
```
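For orientation, the claims endpoints used in the API examples further down could be served by an app shaped roughly like the sketch below. This is illustrative only; the real `/api` package persists to PostgreSQL and carries auth, evidence, and review logic.

```python
# api/main.py -- illustrative sketch, not the actual application
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Truthmesh API")

class ClaimIn(BaseModel):
    entity_id: str
    text: str
    claim_type: str
    scope: dict
    evidence: list[dict] = []

_claims: dict[str, dict] = {}  # in-memory stand-in for the real database

@app.post("/claims")
def create_claim(claim: ClaimIn) -> dict:
    claim_id = f"c_{len(_claims) + 1}"
    _claims[claim_id] = {"id": claim_id, **claim.model_dump()}
    return _claims[claim_id]

@app.get("/claims/{claim_id}")
def get_claim(claim_id: str) -> dict:
    if claim_id not in _claims:
        raise HTTPException(status_code=404, detail="claim not found")
    return _claims[claim_id]
```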
Quickstart (Docker Compose):

```bash
# 1) Copy env
cp .env.example .env

# 2) Start services
docker compose up -d --build

# 3) Run migrations & seed sample data
docker compose exec api alembic upgrade head
docker compose exec api python scripts/seed_pilot.py

# 4) Open
# API:  http://localhost:8000
# Docs: http://localhost:8000/docs
# Web:  http://localhost:3000 (if web is enabled)
```
Prereqs: Python 3.11, Node 20, Postgres 15, Redis.
```bash
# Backend
python -m venv .venv && source .venv/bin/activate
pip install -r api/requirements.txt
export $(cat .env | xargs)  # load env vars
alembic upgrade head
uvicorn api.main:app --reload --port 8000

# Workers (in another shell)
celery -A workers.app worker -l info

# Frontend (optional)
cd web && npm install && npm run dev
```
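The worker entry point referenced above (`workers.app`) might look roughly like this; the task name, the use of `httpx` for fetching, and the SHA-256 hashing are assumptions about what "source fetch, hashing, duplicate detection" could involve:

```python
# workers/app.py -- hypothetical sketch
import hashlib
import os

import httpx
from celery import Celery

app = Celery("workers", broker=os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

@app.task
def fetch_and_hash_source(source_id: str, url: str) -> dict:
    """Fetch a source document and compute a content hash for archiving/duplicate checks."""
    response = httpx.get(url, follow_redirects=True, timeout=30.0)
    response.raise_for_status()
    digest = hashlib.sha256(response.content).hexdigest()
    # In the real pipeline the hash would be persisted and compared against
    # existing sources to flag duplicates.
    return {"source_id": source_id, "sha256": digest, "bytes": len(response.content)}
```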
Create `.env` from `.env.example`:

```env
DATABASE_URL=postgresql+psycopg://user:pass@localhost:5432/truthmesh
REDIS_URL=redis://localhost:6379/0
SECRET_KEY=change-me
S3_ENDPOINT=http://localhost:9000
S3_BUCKET=truthmesh
S3_ACCESS_KEY=...
S3_SECRET_KEY=...
```
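One way the API could consume these variables is a typed settings object; the sketch below assumes `pydantic-settings` and is not necessarily how the repo loads configuration:

```python
# api/settings.py -- illustrative sketch, assuming pydantic-settings
from pydantic_settings import BaseSettings, SettingsConfigDict

class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_file=".env", extra="ignore")

    database_url: str
    redis_url: str
    secret_key: str
    s3_endpoint: str
    s3_bucket: str
    s3_access_key: str
    s3_secret_key: str

settings = Settings()  # reads process env vars, falling back to .env
```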
- Alembic manages the schema (see `migrations/`).
- Key tables: `entities`, `sources`, `claims`, `claim_evidence`, `reviews`, `edits`, `users`, `calibration_events`.
- Run: `alembic revision -m "add_claims" && alembic upgrade head`
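For orientation, a `claims` row might map to an ORM model along these lines. This is a hypothetical SQLAlchemy 2.0-style sketch; the authoritative schema is whatever the Alembic migrations define.

```python
# Hypothetical sketch; see migrations/ for the real schema.
from datetime import datetime

from sqlalchemy import JSON, DateTime, Float, ForeignKey, String, func
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column

class Base(DeclarativeBase):
    pass

class Claim(Base):
    __tablename__ = "claims"

    id: Mapped[str] = mapped_column(String, primary_key=True)         # e.g. "c_123"
    entity_id: Mapped[str] = mapped_column(ForeignKey("entities.id"))
    text: Mapped[str] = mapped_column(String)                          # atomic, checkable sentence
    claim_type: Mapped[str] = mapped_column(String)                    # e.g. "numeric"
    scope: Mapped[dict] = mapped_column(JSON)                          # time/place/metric scope
    confidence: Mapped[float | None] = mapped_column(Float, nullable=True)
    created_at: Mapped[datetime] = mapped_column(
        DateTime(timezone=True), server_default=func.now()
    )
```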
Common tasks:
```bash
# Lint & type-check
ruff check api && mypy api

# Run tests
pytest -q

# Export nightly snapshot (NDJSON)
python scripts/export_snapshot.py --out exports/$(date +%F).ndjson

# Re-hash archived sources
python scripts/rebuild_hashes.py
```
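Conceptually, an NDJSON snapshot is just one JSON object per line, which keeps exports streamable. A simplified sketch of the idea (not the actual `export_snapshot.py`):

```python
# Simplified sketch of NDJSON export: one JSON object per line.
import json
from pathlib import Path

def write_ndjson(rows: list[dict], out_path: str) -> None:
    path = Path(out_path)
    path.parent.mkdir(parents=True, exist_ok=True)
    with path.open("w", encoding="utf-8") as fh:
        for row in rows:
            fh.write(json.dumps(row, ensure_ascii=False) + "\n")

# Illustrative row only; real exports are generated from the database.
write_ndjson(
    [{"id": "c_123", "text": "Acme Corp reported revenue of $12.4B in FY2023.", "confidence": 0.86}],
    "exports/sample.ndjson",
)
```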
OpenAPI: `/docs` or `/openapi.json`
Example: create a claim
```bash
curl -X POST http://localhost:8000/claims \
  -H 'Content-Type: application/json' \
  -d '{
    "entity_id": "e_company_42",
    "text": "Acme Corp reported revenue of $12.4B in FY2023.",
    "claim_type": "numeric",
    "scope": {"metric": "revenue", "value": 12400000000, "unit": "USD", "period": "FY2023"},
    "evidence": [{
      "source_id": "s_10k_2024",
      "relation": "supports",
      "quote": "Net sales were $12.4 billion in fiscal 2023",
      "locator": {"section": "Item 7", "page": 44},
      "tier": "A"
    }]
  }'
```
Fetch a claim:

```bash
curl http://localhost:8000/claims/c_123
```
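The same calls from Python, assuming only the `requests` library and the endpoints shown above (response shapes are whatever the API returns):

```python
import requests

BASE = "http://localhost:8000"

# Create a claim (same payload as the curl example above).
payload = {
    "entity_id": "e_company_42",
    "text": "Acme Corp reported revenue of $12.4B in FY2023.",
    "claim_type": "numeric",
    "scope": {"metric": "revenue", "value": 12_400_000_000, "unit": "USD", "period": "FY2023"},
    "evidence": [{
        "source_id": "s_10k_2024",
        "relation": "supports",
        "quote": "Net sales were $12.4 billion in fiscal 2023",
        "locator": {"section": "Item 7", "page": 44},
        "tier": "A",
    }],
}
created = requests.post(f"{BASE}/claims", json=payload, timeout=10)
created.raise_for_status()

# Fetch a claim by id.
claim = requests.get(f"{BASE}/claims/c_123", timeout=10).json()
print(claim["text"])
```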
- Unit tests: `pytest`
- Style: `ruff`, `black`
- Types: `mypy`
- Security: `pip-audit` / `npm audit`
- Nightly NDJSON of claims/evidence/scores in `/exports`.
- A public scoring notebook can recompute confidence from the exports.
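Because each line is a standalone JSON object, a scoring notebook can stream the export and rescore it. The example below recomputes a toy confidence from evidence tiers only; the production calibrated model (evidence strength plus reviewer votes) is not reproduced here.

```python
import json
import math

# Toy tier weights for illustration only; the real scoring model differs.
TIER_WEIGHT = {"A": 1.0, "B": 0.6, "C": 0.3}

def toy_confidence(claim: dict) -> float:
    """Crude score: net supporting evidence weight squashed into (0, 1)."""
    net = 0.0
    for ev in claim.get("evidence", []):
        weight = TIER_WEIGHT.get(ev.get("tier"), 0.0)
        net += weight if ev.get("relation") == "supports" else -weight
    return 1.0 / (1.0 + math.exp(-net))

with open("exports/sample.ndjson", encoding="utf-8") as fh:
    for line in fh:
        claim = json.loads(line)
        print(claim.get("id"), round(toy_confidence(claim), 2))
```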
- Two reviewers minimum on sensitive claims.
- Checklist enforced in reviewer console (locator, tier, COI, rationale).
- COI required on reviewer profiles; divergence triggers re‑review.
- Append‑only edit log; signed diffs optional.
- Legal takedown lane (HTTP `451`) and fast corrections for living persons.
- Brigading heuristics and rate‑limits for sensitive topics.
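The hard checks in the reviewer console boil down to simple, mechanical rules. A hypothetical sketch (field names are assumptions, not the actual schema):

```python
# Hypothetical hard checks; the real checklist lives in the reviewer console.
def hard_check_review(review: dict, evidence: dict) -> list[str]:
    """Return a list of failures; an empty list means the review can be submitted."""
    failures = []
    if not evidence.get("locator"):
        failures.append("evidence is missing a locator (e.g. section/page)")
    if evidence.get("tier") not in ("A", "B", "C"):
        failures.append("evidence tier must be A, B, or C")
    if not evidence.get("archive_hash"):              # assumed field for the archived-copy hash
        failures.append("archived source hash is missing")
    for required in ("rationale", "coi_disclosure"):  # assumed review fields
        if not review.get(required):
            failures.append(f"review is missing '{required}'")
    return failures
```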
- Confidence calibration dashboards (Brier & reliability diagrams)
- PDF table extractors → time‑series with lineage
- Independence detection improvements
- Domain expansion (clinical trials, legislation, macro series)
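For reference, the Brier score behind the planned calibration dashboard is just the mean squared error between published confidences and eventual 0/1 outcomes (how outcomes get resolved is up to the calibration pipeline):

```python
def brier_score(confidences: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes; lower is better."""
    assert confidences and len(confidences) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(confidences, outcomes)) / len(confidences)

# Example: three claims published at 0.9, 0.6, 0.2 confidence; the first two held up.
print(brier_score([0.9, 0.6, 0.2], [1, 1, 0]))  # 0.07
```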
PRs welcome! See `CONTRIBUTING.md` for setup, coding standards, and review flow. Please run tests and linters before submitting.
TBD
Naming: The project name is Truthmesh. Update package names and Docker image tags as needed if your org uses a different naming convention.