
docs: v0.4.1 positioning — Evidence as Code frame#172

Merged
avrabe merged 2 commits into main from docs/v041-positioning on Apr 22, 2026

Conversation

Contributor

@avrabe avrabe commented Apr 22, 2026

Summary

  • New file docs/what-is-rivet.md — the canonical positioning doc (~2000 words). Frames rivet as the audit substrate for AI-assisted engineering ("Evidence as Code"), contrasts with Sphinx-Needs' "Engineering as Code", and gives a per-situation playbook for 11 use-cases (TDD, ASPICE, STPA/ISO 26262, requirements, variant/PLE, LLM code review, provenance, cross-tool interop, GSN, tool qualification, spec-driven dev). Each use-case is ~100 words with question / artifacts / AI role / human role / explicit limits.
  • Rewrite top of README.md — replaces the "SDLC traceability for safety-critical systems" + feature-list first paragraph with an evidence-centric pitch that points at docs/what-is-rivet.md for depth. The rest of the README (features, quick start, CLI table, architecture, dogfooding, dev) is untouched.
  • Explicit human vs AI split table — authoring, linking, validation, coverage, variant, provenance, compliance export, tool qualification, schemas.
  • Explicit limits section — no Polarion live editing, no direct ALM connector today, no iso-26262.yaml schema yet, no npm distribution yet. All in-flight items are marked "planned for v0.5.0".

All capability claims are verifiable against main HEAD:

  • 34 hazards + 62 UCAs + 62 controller constraints (safety/stpa/)
  • MCP tool set matches rivet-cli/src/mcp.rs (rivet_list/get/query/add/modify/link/unlink/remove/validate/coverage/stats/schema/embed/snapshot_capture/reload)
  • s-expr predicates forall / exists / reachable-from / reachable-to match rivet-core/src/sexpr_eval.rs
  • 27 Kani harnesses in rivet-core/src/proofs.rs
  • 324 Playwright tests across 28 spec files in tests/playwright/
  • Schemas referenced (iso-pas-8800, iec-61508, iec-62304, do-178c, en-50128, eu-ai-act, safety-case, aspice, stpa, stpa-sec) all exist in schemas/
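The reachable-from / reachable-to predicates are, conceptually, forward and backward traversal over the artifact link graph. A minimal sketch of that idea in Python (hypothetical data model — the actual evaluator and its semantics live in rivet-core/src/sexpr_eval.rs and may differ):

```python
from collections import deque

def reachable_from(links, start):
    """All artifact IDs reachable from `start` by following links forward.

    `links` is a hypothetical adjacency map: artifact ID -> list of linked
    artifact IDs (e.g. a requirement tracing down to its tests).
    """
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in links.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen - {start}

def reachable_to(links, target):
    """reachable-to is the same traversal over the reversed link graph."""
    reverse = {}
    for src, dsts in links.items():
        for dst in dsts:
            reverse.setdefault(dst, []).append(src)
    return reachable_from(reverse, target)

# Example: REQ-1 -> TEST-1 -> RESULT-1
links = {"REQ-1": ["TEST-1"], "TEST-1": ["RESULT-1"]}
print(reachable_from(links, "REQ-1"))   # {'TEST-1', 'RESULT-1'}
print(reachable_to(links, "RESULT-1"))  # {'REQ-1', 'TEST-1'}
```

This is why the predicates pair naturally with forall / exists: "forall requirements, exists a test reachable-from it" is a coverage check expressed as graph reachability.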

Chosen one-sentence pitch: "rivet is the audit substrate for AI-assisted engineering — every artifact, link, and decision carries evidence a human can review in a pull request."

Alternates considered:

  • "rivet lets AI agents build your traceability chain; humans review the PR."
  • "Evidence as Code: rivet turns LLM-authored SDLC work into a git-native, schema-validated audit trail."

Trailer: Refs: FEAT-001.

Test plan

  • rivet validate still passes (docs changes only; no schema/source changes)
  • Links inside docs/what-is-rivet.md resolve — especially design/polarion-reqif-fidelity.md, getting-started.md, architecture.md, schemas.md, stpa-sec.md, verification.md, roadmap.md, audit-report.md
  • README.md renders cleanly on GitHub (no broken reference badge or link)
  • The "planned for v0.5.0" markers are consistent between README, what-is-rivet, and the roadmap

🤖 Generated with Claude Code

avrabe and others added 2 commits April 22, 2026 00:27
Add docs/what-is-rivet.md as the canonical positioning doc:

- Frame rivet as the audit substrate for AI-assisted engineering
  (not "SDLC traceability"). Contrast with Sphinx-Needs'
  "Engineering as Code": rivet is "Evidence as Code" — AI-authored,
  provenance-stamped, machine-validated, human-reviewed in PRs.
- Document the per-situation playbook for 11 use-cases (TDD, ASPICE,
  STPA/ISO 26262, requirements, variant/PLE, LLM code review,
  provenance, cross-tool interop, GSN, tool qualification,
  spec-driven dev), each with question, artifacts, AI role, human
  role, and explicit limits.
- Make the human-vs-AI split explicit in a table.
- Document what rivet is NOT (no Polarion live editing, no direct
  ALM connector today, no iso-26262.yaml schema yet, no npm
  distribution yet — all marked "planned for v0.5.0").
- Quick-start with today's install path (cargo) plus the planned
  npm path for v0.5.0.

Rewrite README.md intro (top ~30 lines) to align with the frame —
replacing the feature-list first paragraph with an evidence-centric
one-paragraph pitch that points at docs/what-is-rivet.md for depth.

All capability claims are verifiable against main HEAD: 34 hazards +
62 UCAs + 62 controller constraints in safety/stpa/, the MCP tool
list (rivet_list/get/query/add/modify/link/unlink/remove/validate/
coverage/stats/schema/embed/snapshot_capture/reload) matches
rivet-cli/src/mcp.rs, and the s-expr predicates
(forall/exists/reachable-from/reachable-to) match
rivet-core/src/sexpr_eval.rs.

Refs: FEAT-001

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Rewrites docs/what-is-rivet.md and the README intro to match the
rivet-v0-1-0 blog post cadence: Problem → Answer → Evidence,
concrete numbers per section, no marketing vocabulary, three-sentence
README intro (problem / answer / concrete result).

All facts from 3b89365 preserved. Honesty flags for planned-for-v0.5.0
items kept. Use-case palette kept; human-vs-AI table tightened.

Refs: FEAT-001
@avrabe avrabe force-pushed the docs/v041-positioning branch from 7bd7377 to 7812ece on April 22, 2026 05:27
@avrabe avrabe merged commit a3f189d into main on Apr 22, 2026
1 check passed
@avrabe avrabe deleted the docs/v041-positioning branch April 22, 2026 05:32

@github-actions github-actions bot left a comment


⚠️ Performance Alert ⚠️

Possible performance regression was detected for benchmark 'Rivet Criterion Benchmarks'.
Benchmark result of this commit is worse than the previous benchmark result exceeding threshold 1.20.

Benchmark suite             Current: 7812ece       Previous: 61bfc41      Ratio
store_lookup/100            2243 ns/iter (± 8)     1677 ns/iter (± 3)     1.34
store_lookup/1000           24913 ns/iter (± 182)  19262 ns/iter (± 54)   1.29
traceability_matrix/1000    60215 ns/iter (± 161)  40570 ns/iter (± 154)  1.48
query/100                   820 ns/iter (± 2)      646 ns/iter (± 2)      1.27
query/1000                  7511 ns/iter (± 28)    5450 ns/iter (± 12)    1.38
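Each ratio above is simply current divided by previous, flagged when it exceeds the 1.20 threshold. A quick check of the reported numbers:

```python
THRESHOLD = 1.20

# (benchmark, current ns/iter, previous ns/iter), copied from the table above
results = [
    ("store_lookup/100",         2243,  1677),
    ("store_lookup/1000",        24913, 19262),
    ("traceability_matrix/1000", 60215, 40570),
    ("query/100",                820,   646),
    ("query/1000",               7511,  5450),
]

for name, cur, prev in results:
    ratio = cur / prev
    # every row in this run regressed past the 1.20 threshold
    assert ratio > THRESHOLD, name
    print(f"{name}: {ratio:.2f}")
```

Worth noting for triage: the doc-only commits cannot have caused this; criterion regressions on docs PRs usually point at runner noise or a drifted baseline.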

This comment was automatically generated by workflow using github-action-benchmark.
