The ask that prompted this
On 2026-04-22, @shadcn replied to a helpbase demo thread with three questions.
The third was:
interesting to see if this can also be a "skills server". one of the hard
problems faced by the docs team is enforcing tone, writing styles...etc
think a skills server editable by the docs team which enforces writing
standards that can be pulled by other teams.
That framing crystallized something we'd been half-thinking. This is the RFC.
The problem, concretely
A docs team at a Series B startup has a voice guide. Maybe it's a Google Doc.
Maybe it's pinned in Slack. Maybe it's a README.
A week later, a product engineer writes release notes. They want them to
sound like the rest of the docs. They don't read the Google Doc. They wing
it. The release notes ship sounding different.
Multiply this across every team that writes public-facing text — support,
engineering, marketing, community. The voice guide lives; the voice
doesn't.
This is a distribution problem, not an authoring problem.
What helpbase@0.8.2 ships (v1 prototype)
Markdown files in .helpbase/skills/<name>.md in your repo become
agent-readable skills over MCP. Two new tools on the helpbase MCP server:
- list_skills() → markdown list of names + descriptions
- get_skill({ name }) → full content of one skill file
Any MCP client — Claude Desktop, Cursor, Zed, VS Code, a custom agent —
can pull a skill on demand. The docs team edits markdown in git. Other
teams pull the latest via MCP. The source of truth is main.
A skill file is deliberately unopinionated:
---
description: Tone + voice for our product docs.
---

# Voice

Write like a builder talking to a builder. Not a consultant presenting
to a client.

## Rules

- Lead with the point.
- Short paragraphs.
- Active voice.
- No AI vocabulary (no "delve", "crucial", "robust", "pivotal").

## Encouraged

- Name specific files, functions, line numbers.
- Connect to user outcomes. Make the user's user real.
That's it. No schema, no validator, no dashboard. An LLM is already good
at following markdown prose. The structure is the content.
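Under the hood, the two tools need little more than a directory scan. Here is a minimal sketch of that logic in Python; the real server speaks MCP and ships via the shadcn registry, so these plain functions and the frontmatter helper are illustrative, not helpbase's actual code:

```python
from pathlib import Path

SKILLS_DIR = Path(".helpbase/skills")  # location from the post; adjust as needed

def _frontmatter_description(md: str) -> str:
    """Pull `description:` out of a leading `---` frontmatter block, if any."""
    if md.startswith("---"):
        parts = md.split("---", 2)
        if len(parts) == 3:
            for line in parts[1].splitlines():
                if line.strip().startswith("description:"):
                    return line.split(":", 1)[1].strip()
    return ""

def list_skills(root: Path = SKILLS_DIR) -> str:
    """Markdown list of skill names + descriptions, one line per file."""
    return "\n".join(
        f"- {f.stem}: {_frontmatter_description(f.read_text())}"
        for f in sorted(root.glob("*.md"))
    )

def get_skill(name: str, root: Path = SKILLS_DIR) -> str:
    """Full content of one skill file."""
    return (root / f"{name}.md").read_text()
```

The filename-as-identifier choice does real work here: the agent sees `voice` in the list and can ask for exactly that name, with no registry in between.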
What helpbase does NOT do (and why)
No editor UI. Docs teams edit in their existing editor. git is the
review loop. PRs are the approval flow. CODEOWNERS is the permission
model. We don't need to build any of that.
No schema. I considered { name, description, scope, rules[], examples[], applies_to: glob[] }. It's over-engineered. Humans write
prose better than they fill in fields. LLMs read prose better than they
read field soup. Stay out of the way.
No enforcement gates. The skill is a reference, not a linter. If
an agent ignores the voice guide, that's an agent problem, not a skills
problem. (A linter on top of skills is interesting — separate RFC.)
No cross-team/cross-repo federation in v1. "Skills pulled by other
teams" is the pull side. If your team's skills live in their own repo, list_skills shows them. If another team's agents want to pull from
your repo, they install your helpbase-mcp there. Cross-repo federation
(one skills server surfacing multiple repos' skills) is a v2 thing that
needs real usage data first.
How a team would actually adopt this
Day 0: the docs team commits the voice guide above as a markdown file under .helpbase/skills/ and merges it to main.
Day 1 — engineer writing a release note in Claude Desktop:
The engineer drops the voice guide into their prompt context. The agent
writes release notes that sound like the rest of the docs.
No dashboard. No SaaS. No config. The docs team's Google Doc got
replaced by a markdown file whose content is one git blame away from
the author who wrote it.
Open questions I want feedback on
Is "skill" the right noun? "Skill" in AI-agent land often means
"a function the agent can call" (e.g. Claude's skill system). What we
ship is closer to a rule, a doctrine, or a guide. Naming it
"skill" borrows @shadcn's framing but could confuse. Alternatives: rules, guides, conventions, house-style. No strong opinion
yet; the current name wins on user familiarity.
Multiple skills per file, or one per file? v1 is one-per-file
because filenames make great agent-visible identifiers. But a docs
team might want writing.md with # Voice, # Formatting, # Terminology as sections. Both models have merit — the current
shape encourages small orthogonal skills, which may be the right
forcing function.
Should skills have applies_to scope? E.g. "only pull this
voice guide when writing release notes, not API reference." Agents
can't reason about scope without being told. Could encode in
frontmatter: applies_to: [release-notes, announcements]. Useful
but adds ceremony. Probably wait for a real user to ask.
Federation across repos. Already noted. Shadcn's framing
("pulled by other teams") implies cross-repo. v2 territory.
Versioning. A skill changes meaning over time. Do we ship get_skill({ name, at: "v1.2" }) to pin a version? git history already lets
humans run git log, but agents would need an MCP-level API. Not in v1.
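If the scope question resolves in favor of frontmatter, the pull side stays simple. A hedged sketch of the filtering (the applies_to key, the helper, and skills_for_task are all hypothetical, nothing here is shipped):

```python
def _applies_to(md: str) -> list[str]:
    """Naive scan for a hypothetical `applies_to: [a, b]` frontmatter line."""
    for line in md.splitlines():
        if line.strip().startswith("applies_to:"):
            raw = line.split(":", 1)[1].strip().strip("[]")
            return [t.strip() for t in raw.split(",") if t.strip()]
    return []  # no scope declared: skill applies everywhere

def skills_for_task(skills: dict[str, str], task: str) -> list[str]:
    """Names of skills whose scope is empty or includes `task`."""
    return [
        name for name, md in skills.items()
        if not _applies_to(md) or task in _applies_to(md)
    ]
```

The empty-list-means-global default matters: an unscoped voice guide keeps applying everywhere, so adding scope to one skill never silently narrows the others.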
What a v2 might look like
If this v1 resonates, the obvious extensions:
- applies_to scope (per the scope question above).
- Federation: list_skills({ from: "https://acme.com/skills.json" })
pulls a published skills bundle from another repo over HTTP.
- Enforcement linter: helpbase lint <file> runs the skill rules
against a draft and flags drift. LLM-judge style, not regex.
- Semantic pull: get_skill({ query: "how do we write about errors?" }) semantic-searches skill contents and returns the most
relevant one.
- IDE surface: a VS Code extension that auto-injects the applicable
skill into any AI chat panel in the editor.
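The semantic-pull idea can be prototyped without embeddings at all. A deliberately naive token-overlap ranker (a real v2 would use embeddings or an LLM; pick_skill and everything else here is illustrative):

```python
import re

def _tokens(text: str) -> set[str]:
    """Lowercased alphanumeric tokens, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def pick_skill(query: str, skills: dict[str, str]) -> str:
    """Name of the skill whose content shares the most tokens with the query."""
    q = _tokens(query)
    return max(skills, key=lambda name: len(q & _tokens(skills[name])))
```

Even this crude version would let an agent ask a question instead of knowing a filename, which is the actual interface change the v2 item proposes.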
Try it
# Into an existing Next.js + shadcn project:
npx shadcn@latest add https://helpbase.dev/r/helpbase-mcp.json

# Configure Claude Desktop (or any MCP client) to run the server.
# See the README that `shadcn add` drops into your repo.

mkdir -p .helpbase/skills

# Drop markdown files: your docs team's voice guide, your API reference
# style guide, your terminology list. Whatever.
# Agents that connect to the MCP server now see list_skills + get_skill.
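For Claude Desktop specifically, MCP servers are registered in claude_desktop_config.json. The mcpServers shape is Claude Desktop's standard config; the command and package name below are guesses, so defer to the README the installer drops into your repo:

```json
{
  "mcpServers": {
    "helpbase": {
      "command": "npx",
      "args": ["helpbase-mcp"]
    }
  }
}
```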
Feedback
Drop comments on this Discussion. Especially interested in hearing from:
Docs teams who've tried to solve this another way
People running many agents over their own product
Anyone who thinks the naming / scope / federation calls above are wrong
Thanks to @shadcn for the framing that made this specific.