docs: AI-evidence trend research#173

Merged
avrabe merged 1 commit into main from docs/ai-evidence-trend-research
Apr 22, 2026

Conversation


avrabe (Contributor) commented Apr 22, 2026

Summary

Internal strategy doc for the rivet product lead answering: is "AI agents generating work-product evidence under human review" a real trend, and where is it going?

  • Verdict: emerging category, not yet a coalesced market. Rivet is one of only two tools identified (the other is useblocks' pharaoh, at 5 GitHub stars) that combine AI-native generation, a human-review loop, and a structured engineering evidence unit; a sketch of such a unit follows this list.
  • Field map of 20+ adjacent tools (SLSA, sigstore, CycloneDX ML-BOM, SPDX 3 AI profile, MCP, AGENTS.md, SpecStory, Aider, Continue.dev, spec-kit, Kiro, Langfuse / LangSmith / Weave / Phoenix, promptfoo, Polarion / Jama / DOORS / strictDoc / sphinx-needs) with direct marketing quotes where verifiable.
  • Regulatory tailwind ranked: EU AI Act Art. 12 > ISO/IEC 42001 > ASPICE / ISO 26262 / DO-178C updates > NIST AI RMF > FDA SaMD.
  • Five predictions (labelled speculation) to test in 12 months, plus a strategic recommendation: own "evidence, not telemetry"; sell to the safety-critical auditor; interop with incumbents instead of competing.
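
For concreteness, here is a minimal sketch of what one such evidence unit could look like. Every field name below is hypothetical; the doc establishes only that a unit is a schema-validated YAML artifact carrying an AI provenance stamp and a human-review validation.

```yaml
# Hypothetical evidence unit; illustrative field names, not rivet's actual schema.
evidence_unit:
  id: EV-0042
  ref: FEAT-001                      # requirement or feature the evidence supports
  artifact:
    type: test-report                # the work product itself
    path: reports/feat-001-integration.md
    sha256: "<digest of the artifact at review time>"
  ai_provenance:                     # the "AI provenance stamp"
    generated_by: "<agent name and version>"
    generated_at: 2026-04-22
  human_review:                      # the "human-review validation"
    reviewer: "<reviewer id>"
    status: approved                 # e.g. pending | approved | rejected
    reviewed_at: 2026-04-22
```

A unit shaped like this is what "schema-validated" implies in practice: a CI step can check each artifact against a schema and fail the build if the human_review block is missing or unapproved.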

Refs: FEAT-001

Test plan

  • Product lead reads doc and confirms framing matches positioning intent
  • Re-verify the claims flagged in the Appendix under "Unverified this pass" (EU AI Act Art. 12 exact text, ISO/IEC 42001 clauses, NIST AI RMF, Langfuse / LangSmith / Polarion / Jama marketing) before any external use
  • Re-source the "60k+ AGENTS.md projects" figure from arxiv:2604.13108; the paper as fetched did not surface that number

Constraints: under 2,500 words (came in at ~2,400); every external claim has a URL where verifiable; unverifiable claims are explicitly flagged.

Generated with Claude Code.

Internal strategy doc analyzing whether "AI agents generating work-product evidence under human review" is a real industry trend. Field map of 20+ adjacent tools, regulatory tailwind analysis (EU AI Act, ISO/IEC 42001, ASPICE/ISO 26262), and five labelled predictions to test in 12 months.

Verdict: emerging category, not yet a coalesced market. Rivet's evidence-unit framing (schema-validated YAML artifacts + AI provenance stamp + human-review validation) is currently shared only by useblocks' pharaoh (5 stars), leaving a 12-18 month defensible lane around safety-critical SDLCs.

Refs: FEAT-001

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
avrabe merged commit a24c1b9 into main on Apr 22, 2026
1 check passed
avrabe deleted the docs/ai-evidence-trend-research branch on April 22, 2026 at 05:25