
🧪 Hostile fresh-user audit #237

@hurttlocker

Description

Problem

We built Cortex. We know its quirks. We forgive its rough edges because we understand the intent. A stranger won't.

Before tagging v1.0, the product needs to survive a hostile fresh-user test: someone who has never seen Cortex tries to install and use it with zero hand-holding.

Protocol

Test Environment

  • Fresh machine (or fresh user account on an existing machine)
  • No prior Cortex installation
  • No ~/.cortex/ directory
  • No Ollama running (unless the tester installs it themselves)
  • No OpenRouter key configured
  • Standard dev tools only (git, curl, brew)

Test Script

The tester follows ONLY the README and published docs. They are instructed to:

  1. Install Cortex using whichever method they prefer from README
  2. Import some files (their own notes, or a sample corpus we provide)
  3. Search for something they just imported
  4. Try the graph explorer (cortex graph --serve)
  5. Connect it to an MCP client (Claude Code or Cursor)
  6. Set up a connector (GitHub; most devs have this)
  7. Run cortex stats and interpret the output
  8. Run cortex stale and understand what it means
  9. Break it β€” try wrong flags, missing files, weird input

What We're Looking For

  • Confusion points: Where did they get stuck? What wasn't obvious?
  • Error quality: When they hit errors, were the messages helpful?
  • Time to value: How long from install to first useful search result?
  • Unmet expectations: What did they expect to work that didn't?
  • Documentation gaps: What questions did they have that the docs didn't answer?

Candidate Testers

  • SB (non-technical, uses Noemie daily; tests the "personal agent user" path)
  • A developer friend (tests the "I want memory for my AI agent" path)
  • Niot (agent; tests the "agent installs and uses Cortex autonomously" path)

Scoring

| Category | Target |
| --- | --- |
| Install to first search | < 5 minutes |
| Install to MCP working | < 10 minutes |
| Errors encountered | < 3, and all with clear remediation |
| Docs consultations | README + getting-started should cover 90% |
| "I give up" moments | 0 |
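As a rough sketch, the targets above can be tallied mechanically once the measurements are in. The function and field names below are illustrative, not part of Cortex; the thresholds come straight from the table.

```python
# Hypothetical helper for scoring one audit run against the targets above.
# All parameter names are illustrative; thresholds mirror the scoring table.

def score_audit(minutes_to_first_search, minutes_to_mcp,
                errors, all_errors_remediable,
                docs_covered_pct, give_up_moments):
    """Map each scoring category to True (target met) or False."""
    return {
        "install_to_first_search": minutes_to_first_search < 5,
        "install_to_mcp": minutes_to_mcp < 10,
        "errors_encountered": errors < 3 and all_errors_remediable,
        "docs_consultations": docs_covered_pct >= 90,
        "give_up_moments": give_up_moments == 0,
    }

# Example: a run that misses only the MCP target (12 minutes instead of <10).
result = score_audit(4.5, 12, 2, True, 95, 0)
```

A failed category points the auditor at which section of the report needs the most detail.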

Deliverable

A written audit report with:

  • Every friction point documented
  • A severity rating for each (blocking / annoying / cosmetic)
  • A fix recommendation for each
  • An updated time-to-value measurement
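To keep report entries uniform across testers, each friction point could be captured in a fixed shape. A minimal sketch, assuming nothing about the real report format; every field name here is hypothetical.

```python
# Hypothetical record for one friction-point entry in the audit report.
# Field names are illustrative, not an agreed schema.
from dataclasses import dataclass

SEVERITIES = ("blocking", "annoying", "cosmetic")

@dataclass
class FrictionPoint:
    description: str        # what the tester hit, in their own words
    severity: str           # one of SEVERITIES
    fix_recommendation: str

    def __post_init__(self):
        if self.severity not in SEVERITIES:
            raise ValueError(f"severity must be one of {SEVERITIES}")

# Example entry (content invented for illustration).
entry = FrictionPoint(
    description="Install step assumed brew without saying so",
    severity="annoying",
    fix_recommendation="State the brew prerequisite in the README install section",
)
```

Restricting severity to the three agreed levels makes the "all blocking points fixed" acceptance check a simple filter over the list.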

Acceptance Criteria

  • At least 2 hostile tests completed (1 dev, 1 non-dev)
  • Audit report written and filed
  • All "blocking" friction points fixed before the v1.0 tag
  • All "annoying" friction points fixed or documented as known issues
  • Time-to-first-search under 5 minutes

Metadata

  • Assignees: none
  • Labels:
      • audit (Found by external audit)
      • dx (Developer experience & onboarding)
      • v1.0 (v1.0: Production Ready)
