
docs(plans): GEO/SEO strategy design doc + 5 phase files#2963

Merged
matheuspoleza merged 5 commits into main from plan/geo-seo-strategy
Apr 22, 2026

Conversation

@matheuspoleza
Contributor

Summary

Adds a complete design doc + 5 self-contained phase files for making Vertz discoverable to LLMs and driving organic traffic via GEO (generative engine optimization) + SEO.

The core insight: LLMs in 2026 (ChatGPT search, Claude web, Perplexity, Gemini) cite whatever the top 3 web search results say. Ranking #1 for a long-tail query = instant LLM citation. We do not need to wait for the next training cutoff.

What's in this PR

Main design doc (plans/geo-seo-strategy.md, 461 lines):

  • Problem statement, goals (G1-G8), non-goals
  • Manifesto alignment (8 principles → how each is honored by the strategy)
  • 3-layer architecture: foundation → authority → autonomous engine
  • Only 4 content formats allowed (comparison, gotcha, tutorial, opinion)
  • Autonomous pipeline design (8 agents, 2 human gates, ~10 min/post human time)
  • Success metrics with concrete 6-week acceptance thresholds
  • 5 E2E acceptance scenarios
  • 9 mapped risks with mitigations
  • 6 open questions for reviewer decisions (budget, MCP scope, launch order, etc.)

5 phase files (plans/geo-seo-strategy/phase-NN-*.md):

  1. Phase 1 – Foundation Infrastructure (7d): @vertz/docs-mcp public MCP server, llms-full.txt, expanded JSON-LD (TechArticle/SoftwareApplication/FAQPage), IndexNow + Google Indexing API, PostHog+Plausible analytics, citation tracker baseline
  2. Phase 2 – Ignition Content (10d, parallel): flagship LLM benchmark post with reproducible benchmarks/llm-codegen/, 3 vs-X comparisons, 30 long-tail landing pages, README rewrite, awesome-list submissions
  3. Phase 3 – Distribution Blitz (7d): HN launch, Product Hunt, Reddit (4 subs, 4 angles), newsletter sponsorships (3 budget scenarios), influencer outreach, Stack Overflow seeding, GitHub Discussions
  4. Phase 4 – Autonomous Pipeline (11d): 7-agent pipeline on @vertz/agents — topic picker, writer, code validator (runs every snippet through vtz test), adversarial reviewer, publisher, Slack approval UI, optional Semrush signal
  5. Phase 5 – Measurement & Iteration (ongoing): daily citation tracker cron, leading-indicator dashboard, weekly retros, pipeline eval loop against golden set, referrer attribution model, rolling 90-day roadmap
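For a concrete sense of the Phase 1 JSON-LD task: each page would emit a schema.org object in a `<script type="application/ld+json">` tag. A minimal TypeScript sketch; the helper name and field selection are illustrative, not the actual Phase 1 API:

```typescript
// Sketch of the Phase 1 JSON-LD expansion. Field names follow
// schema.org's TechArticle type; the builder itself is hypothetical.
interface ArticleMeta {
  headline: string;
  description: string;
  authorName: string;
  datePublished: string; // ISO 8601 date
  url: string;
}

function buildTechArticleJsonLd(meta: ArticleMeta): Record<string, unknown> {
  return {
    "@context": "https://schema.org",
    "@type": "TechArticle",
    headline: meta.headline,
    description: meta.description,
    author: { "@type": "Person", name: meta.authorName },
    datePublished: meta.datePublished,
    url: meta.url,
  };
}
```

The serialized object goes into the page head; the SoftwareApplication and FAQPage pages would get analogous builders.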

Why submit this as a PR now

  • Let's review the strategy with the same rigor we apply to code features
  • Adversarial review is scheduled in parallel (multiple angles: strategic, technical feasibility, manifesto alignment)
  • The 6 open questions need decisions before Phase 3 kicks off
  • Merging means the plan is a working document we can iterate on; keeping it on a branch invites it to go stale

Not in scope for this PR

  • Implementation of any phase (phases will be their own PRs / branches, per .claude/rules/local-phase-workflow.md)
  • Creation of the GitHub issue (per .claude/rules/ticket-system.md — will follow after design approval)
  • Budget commitments (one of the 6 open questions)

Test plan

  • Review plans/geo-seo-strategy.md top to bottom
  • Confirm manifesto alignment claims are honest (no forced fits)
  • Challenge the 6 open questions — decide before Phase 3
  • Verify phase files are actually self-contained (another agent could pick one up cold)
  • Check that acceptance criteria are concrete and testable (not "it works")
  • Pressure-test the timeline — is 21-28 days realistic?

🤖 Generated with Claude Code

matheuspoleza and others added 2 commits April 22, 2026 12:45
Introduces a complete strategy for indexing Vertz to LLMs and driving
organic traffic via GEO (generative engine optimization) + SEO.

Main design doc (plans/geo-seo-strategy.md, 461 lines):
- Problem framing: LLMs don't know Vertz; we need retrieval-first wins
  (days) before training-crawl wins (months)
- 3-layer strategy: foundation/retrieval -> authority/distribution ->
  autonomous engine
- Manifesto alignment table (8 principles -> how each is honored)
- Research summary: how LLMs cite in 2026 (ChatGPT search, Claude web,
  Perplexity, Gemini all use live retrieval)
- Content formats (only 4: comparison, gotcha, tutorial, opinion)
- Distribution channels with DA/reach table
- Autonomous pipeline architecture (8 agents, 2 human gates)
- Success metrics with 6-week acceptance thresholds
- 5 E2E acceptance scenarios
- 9 mapped risks with mitigations and owner
- 6 open questions for review

Phase breakdown (self-contained phase files):
- Phase 1 (7d): foundation infra - MCP server, llms-full.txt, JSON-LD
  expansion, IndexNow, analytics, citation tracker baseline
- Phase 2 (10d, parallel): ignition content - flagship benchmark post,
  3 comparisons, 30 long-tail landing pages, README rewrite
- Phase 3 (7d): distribution blitz - HN, PH, Reddit, sponsorships,
  influencer outreach, SO seeding, GitHub Discussions
- Phase 4 (11d): autonomous pipeline - 7 agents on @vertz/agents,
  Slack approval gates, end-to-end content flow
- Phase 5 (ongoing): measurement + iteration - daily citation tracker,
  leading-indicator dashboard, weekly retros, pipeline eval loop,
  attribution model, rolling 90-day roadmap

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
3 parallel reviewers with different lenses:
- Strategic (skeptical investor) — 5 blockers, 11 should-fix, 8 questions
- Technical feasibility (principal engineer) — timeline 8-10w vs 28d,
  27 under-specified tasks, 13 missing infra deps
- Manifesto/positioning (most-principled engineer) — 5 manifesto
  violations incl. SO sockpuppeting + AI-byline identity laundering

Consolidation (00-consolidation.md) ranks findings by reviewer
agreement (3/3 > 2/3 > unique), surfaces 5 top blockers, lists 10
open decisions Matheus needs to answer before execution.

Not committed to main per .claude/rules/local-phase-workflow.md
(reviews/ is working-artifact layer, deleted before merge to main).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@matheuspoleza
Contributor Author

Adversarial review — 3 reviewers, consolidated

Full write-ups in reviews/geo-seo-strategy/:

TL;DR verdict

Do not execute as written. Strategic foundation is sound (3/3 reviewers agree on layered architecture, MCP server, citation tracker, 4-format constraint). Execution has 3 compounding defects + 2 manifesto violations that need fixing first.

5 top blockers (ranked by reviewer agreement)

| # | Blocker | Confidence |
|---|---------|------------|
| B1 | Hidden phase-zero: plan references files only on the unmerged feat/2947-blog branch; merging #2947 is not in any phase | 1/3 (technical) |
| B2 | Benchmark methodology adversarially fragile: n=5, 1 LLM only, "if competitors ≥80% revise methodology" = p-hacking | 3/3 |
| B3 | Timeline 50–60% optimistic; realistic 8–10 weeks for full acceptance, not 28 days | 2/3 |
| B4 | SO secondary-account seeding = sockpuppeting; violates SO ToS + manifesto ethics | 2/3 |
| B5 | No strategy-level kill-switch; every result justifies continuation; name a week-6 threshold that STOPS | 2/3 |

Manifesto violations (must-fix before Phase 4)

  • M1. Writer agent impersonating Matheus's first-person voice = identity laundering. Needs visible author: autonomous-pipeline (reviewed by Matheus) byline, or restrict pipeline to /answer/* surface.
  • M2. "Distribution Blitz" / "weaponize" — hype language in the plan itself, same tone writer prompt bans in output.
  • M3. "One way to do things" table claim is violated by 11-channel × 4-angle distribution matrix. Be honest: distribution is the one place ambiguity is accepted.

10 open decisions Matheus needs to answer

Full list in 00-consolidation.md. Most critical:

  1. Who (external) reviews benchmark methodology pre-run?
  2. If benchmark shows Vertz 65% / Next.js 70%, do we publish? (Pre-commit)
  3. AI-authorship disclosure — yes or no?
  4. SO seeding — remove secondary-account pattern? (Recommended: yes)
  5. Timeline: accept 8–10w for full scope, or keep 28d as scope-cut v1?
  6. When does #2947 (feat(landing): blog — dogfood Vertz stack at /blog) merge?
  7. What week-6 metric threshold STOPS the strategy? Name it now.

What's genuinely strong (3/3 reviewer agreement)

  • MCP docs server (manifesto-aligned, highest integrity, largest leverage)
  • Citation tracker as day-1 deliverable (measurement discipline is rare and correct)
  • Layered architecture (foundation → authority → engine)
  • 4-format content constraint
  • "If Vertz <60%, don't publish" acceptance criterion (diagnostic, not marketing)
  • Code validator (dogfood of Principle 1 applied to content)
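Since the citation tracker drew 3/3 agreement, a note on how small its core really is: given a week's collected answers per tracked query, citation counting is a string scan. A sketch, assuming answer collection from each engine happens upstream; both function names are made up for illustration:

```typescript
// Given one week's collected LLM answers for the tracked query set,
// count how many mention the product. Fetching the answers themselves
// (per engine) is assumed to happen upstream of this step.
function countCitations(answers: string[], product: string): number {
  const needle = product.toLowerCase();
  return answers.filter((a) => a.toLowerCase().includes(needle)).length;
}

// A run of trailing zero-citation weeks is exactly the signal a
// kill-switch threshold ("0 citations for N consecutive weeks") watches.
function zeroWeekStreak(weeklyCounts: number[]): number {
  let streak = 0;
  for (let i = weeklyCounts.length - 1; i >= 0 && weeklyCounts[i] === 0; i--) {
    streak++;
  }
  return streak;
}
```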

Recommended next moves

  1. Don't merge yet. Address 5 blockers as new commits on this branch.
  2. Answer the 10 open decisions.
  3. Run one more adversarial review on the revised plan.
  4. Then merge + open phase issues.
  5. Execute Phase 0 (merge #2947: feat(landing): blog — dogfood Vertz stack at /blog) + scope-cut Phase 1 in first 14 days. Reassess based on measured pace.

matheuspoleza and others added 2 commits April 22, 2026 13:14
Addresses all 5 blockers from the 3-agent adversarial review
(reviews/geo-seo-strategy/00-consolidation.md):

B1 - Phase 0 added: merge #2947 before Phase 1 starts
(new plans/geo-seo-strategy/phase-00-prerequisites.md)

B2 - Benchmark rewritten as "radical transparency case study"
(phase-02-ignition-content.md Task 1):
- Removed statistical benchmark claim (89% vs 34%)
- Replaced with qualitative case study + full Claude transcripts
- n=5 per cell, 2 frameworks (not 4), 1 LLM, no stats claims
- Total budget ~$80 one-time (fits $200/mo cap)
- Mandatory public preregistration 5+ days before run
- External review by non-Vertz engineer required
- Removed p-hacking criterion ("if competitors >=80% revise methodology")
- Post title contains no statistic; explicit "limitations" section
- Must publish regardless of outcome

B3 - Timeline restructured as v1 (28 days, scope-cut) + post-v1 (weeks 5-10):
- v1 keeps only Phase 0 + 4 tasks from Phase 1 + 1 task from Phase 2 +
  1 task from Phase 3 + 1 task from Phase 5
- Phase 4 (autonomous pipeline) entirely deferred from v1
- Phase 3 "Blitz" renamed to "Launch" (M2 - hype language)
- Acceptance thresholds split into v1 (4wk) and full scope (10wk)
- All post-v1 targets relaxed (top-10 not top-3, 300 stars not 1k,
  20 signups not 50) per reviewer's DA-0 reality check

B4 - Stack Overflow secondary-account seeding removed entirely
(phase-03-distribution-blitz.md Task 5):
- No secondary-account Q&A seeding (was sockpuppeting)
- If [vertz] tag doesn't emerge organically in 60 days, accept as signal

B5 - Stop Conditions section added to main doc:
- Stop 1: 0 citations for 4 consecutive weeks -> pause content
- Stop 2: <10 LLM-referrer sessions/week sustained to week 8
- Stop 3: >=50% pipeline rejection 2 weeks (only if Phase 4 ships)
- Stop 4 (nuclear): 0 qualified leads in 12 weeks despite green 1+2

Manifesto fixes per positioning reviewer:
- M1 (AI-author identity laundering): Phase 4 now requires visible
  "author: autonomous-pipeline (reviewed by Matheus)" byline + drops
  "Persona: Matheus Poleza" from writer prompt
- M2 (hype language): Phase 3 "Blitz" -> "Launch", context rewritten
- M3 ("one way" violation): Manifesto Alignment table now honest about
  distribution-ambiguity trade-off instead of denying it

Budget section added: $200/month hard cap, no newsletter sponsorships,
citation tracker weekly not daily (~$20/mo), case-study one-time ~$80.

Resolved 6 of 6 open questions from v1; added 3 new open questions
for Phase 2 kickoff (task count, external reviewer pick, stop
condition calibration).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Final review identified 3 remaining blockers + 5 should-fix items.
All addressed:

BLOCKERS:
1. M1 body contradiction (phase-04 line 105): writer prompt said
   "Persona: Matheus Poleza, first-person voice" despite the mandatory
   M1 revision on line 13 requiring neutral technical voice. FIXED:
   prompt now says "informed-but-neutral technical writer"; first-person
   only when quoting a named human; added militaristic-word ban list.

2. Stop Condition 2 depended on Phase 1 Task 4 analytics (deferred).
   FIXED: Stop 2 now notes prerequisite + provides Cloudflare Worker
   referer-log fallback as coarse v1 proxy.

3. Phase files not updated to reflect v1 scope-cut. FIXED: DEFERRED
   banners added to phase-01 Task 4, phase-02 Tasks 2-5, phase-03
   Tasks 2-5, phase-05 Tasks 2-6. Only v1-scope tasks remain without
   banners.

SHOULD-FIX:
4. "Blitz" residue. FIXED: git mv phase-03-distribution-blitz.md ->
   phase-03-distribution-launch.md; "Reddit blitz" -> "Reddit rollout"
   throughout body; reddit-blitz-plan.md -> reddit-rollout-plan.md;
   main doc table updated.

5. Phase 2 arithmetic error in done-when: "total 7" with duplicated
   "gotcha + tutorial". FIXED to 6 posts (case-study + 3 comparisons
   + gotcha + tutorial), removed phantom "1 opinion if time".

6. Stale ASCII layer diagram still mentioned sponsors + PH in Layer 2.
   FIXED: diagram annotates v1 vs post-v1 content per layer.

7. North Star "My LLM nailed it on the first try" vs new "no headline
   statistics" rule. FIXED: reframed as internal aspiration only; all
   external communication uses comparative+probabilistic framing or
   the "judge transcripts yourself" hook.

8. No decision dates on 3 open questions. FIXED: explicit dates +
   defaults for case-study scope, external reviewer selection, stop
   condition calibration.

After this commit the plan should pass another adversarial review on
all original blockers + manifesto issues. One more review is scheduled.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@matheuspoleza
Contributor Author

Round-2 revisions pushed

All 3 remaining blockers + 5 should-fix items from the final adversarial review are fixed in commit 4f72a72bc.

Blockers

  • M1 body contradiction (phase-04 line 105): writer prompt now says "informed-but-neutral technical writer." Added militaristic-word ban. No more "Persona: Matheus" impersonation.
  • Stop Condition 2 depended on deferred analytics: now notes prerequisite + Cloudflare Worker referer-log fallback as v1 proxy.
  • Phase files misaligned with v1 scope: DEFERRED banners added to every task not in v1 scope. phase-01 Task 4, phase-02 Tasks 2-5, phase-03 Tasks 2-5, phase-05 Tasks 2-6.
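On the Stop Condition 2 fallback: the Worker only needs a referer classifier in front of the origin. A sketch; the host list is an assumption, a starting point rather than a registry:

```typescript
// Coarse v1 proxy for "LLM-referrer sessions/week": classify the
// Referer header against known LLM surfaces. The host list is a guess.
const LLM_REFERRERS = ["chatgpt.com", "perplexity.ai", "gemini.google.com", "claude.ai"];

function classifyReferer(referer: string | null): string | null {
  if (!referer) return null;
  try {
    const host = new URL(referer).hostname;
    return LLM_REFERRERS.find((h) => host === h || host.endsWith("." + h)) ?? null;
  } catch {
    return null; // malformed Referer header
  }
}

// Inside the Worker's fetch handler (sketch):
//   const source = classifyReferer(request.headers.get("Referer"));
//   if (source) console.log(`llm-referral source=${source}`);
//   return fetch(request); // pass through to origin
```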

Should-fix

  • Renamed phase-03-distribution-blitz.md → phase-03-distribution-launch.md via git mv. "Reddit blitz" → "Reddit rollout" throughout.
  • Fixed Phase 2 done-when arithmetic (was "total 7" with duplicated entries; now correctly 6 posts).
  • ASCII layer diagram annotates v1 vs post-v1 content per layer.
  • North Star reframed: "nailed it on the first try" is now explicitly internal-only; external comms use comparative+probabilistic framing or "judge transcripts yourself."
  • Explicit decision dates added to 3 open questions (case-study task count, external reviewer, stop calibration).

Summary of plan status

All 5 original blockers (B1–B5) verified resolved by last review.
All 3 manifesto issues (M1–M3) now resolved in this round.
All 5 new issues from round-2 review resolved.

Ready for code review from Matheus. If another adversarial pass is desired, I can spawn one — otherwise the plan is merge-ready pending human review.

Per .claude/rules/local-phase-workflow.md:
- reviews/ is a working-artifact directory
- Not committed to main, deleted before merge
- Review summary preserved in PR #2963 comment thread

The full adversarial review content (3 reviewers + consolidation)
is preserved in the PR comment history at:
#2963 (comment)

Historical record in git: reachable via commit be1aa28 on this
branch before deletion (will be lost when branch is deleted post-merge
unless explicitly archived).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
@matheuspoleza matheuspoleza merged commit 666219e into main Apr 22, 2026
7 checks passed